Comment on Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation
Jordan117@lemmy.world 11 months ago
IIRC based on the source paper the “verbatim” text is common stuff like legal boilerplate, shared code snippets, book jacket blurbs, alphabetical lists of countries, and other text repeated countless times across the web. It’s the text equivalent of DALL-E “memorizing” a meme template or a stock image – it doesn’t mean all or even most of the training data is stored within the model, just that certain pieces of highly duplicated data have ascended to the level of concept and can be reproduced under unusual circumstances.
lemmyvore@feddit.nl 11 months ago
Problem is, they claimed none of it gets stored.
TWeaK@lemm.ee 11 months ago
They claim it’s not stored in the LLM; they admit to storing it in the training database but argue fair use under the research exemption.
This almost makes it seem like the LLM can tap into the training database when it reaches some kind of limit. In which case the training database absolutely should not have a fair use exemption - it’s not just research, but a part of the finished commercial product.
gears@sh.itjust.works 11 months ago
Did you read the article? In one example, the verbatim text includes email addresses and names (and legal boilerplate) taken directly from asbestoslaw.com.