You may not have a photographic memory, but dozens of flesh-and-blood humans do. Is it “illegal” for them to exist? They can read a book and then recite it back to you.
ch00f@lemmy.world 2 days ago
Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.
vaultdweller013@sh.itjust.works 2 days ago
Those are human beings, not machines. You are comparing a flesh-and-blood person to a souped-up autocorrect program that is fed data and regurgitates it back.
FauxLiving@lemmy.world 2 days ago
That’s quite a claim, I’d like to see that. Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.
I doubt that this is the case, as one of the features of chatbots is randomization of the next token: the model’s output vector is treated as a softmaxed probability distribution, and the next token is sampled from it. That means every single token has a chance to deviate from the source material, because it is selected randomly. Getting a complete reproduction would be roughly as unlikely as winning 250,000 dice rolls in a row.
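To make that concrete, here is a minimal toy sketch of softmax sampling (made-up logits and function name, not any real model):

```python
# Toy sketch: softmax turns the model's raw per-token scores (logits) into a
# probability distribution, and the next token is drawn from it at random.
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax the logits and sample one token index from the result."""
    scaled = logits / temperature
    scaled = scaled - scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Five-token vocabulary with made-up logits: index 0 is most likely,
# but every index has a nonzero chance of being picked.
logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])
print(sample_next_token(logits))
```

Every call can return a different token, which is why a verbatim reproduction has to win that random draw on every single token.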
In any case, the ‘highly transformative’ standard was set in Authors Guild v. Google, Inc., No. 13-4829 (2d Cir. 2015). In that case Google made digital copies of tens of millions of books and used their covers and text to make Google Books.
As you can see here: www.google.com/books/edition/…/uomkEAAAQBAJ, Google completely reproduces the cover (a high-resolution scan, or even an exact digital copy of the cover image), and lets you search the text of the book, so you could, in theory, retrieve the entire text of a Harry Potter novel through searches.
The judge ruled that this copying was fair use because it was “highly transformative.”
In cases where people attempt to claim copyright damages against entities that are training AI, the finding has essentially been “if they paid for a copy of the book, then it is legal.” This is why Meta lost their case against authors: they were sued for (1) pirating the books and (2) using them to train a model for commercial purposes, and the judge struck (2), citing the “highly transformative” nature of language models vs. books.
Repelle@lemmy.world 2 days ago
arxiv.org/abs/2601.02671
FauxLiving@lemmy.world 2 days ago
This is the same study as the other reply, so same response.
ch00f@lemmy.world 2 days ago
No it isn’t. Read.
ch00f@lemmy.world 2 days ago
arxiv.org/abs/2601.02671
FauxLiving@lemmy.world 2 days ago
lemmy.world/post/42628249/21949167
ch00f@lemmy.world 2 days ago
That study is six months old. The one I linked is from three weeks ago.
MangoCats@feddit.it 2 days ago
Start with the first line of the book (enough that it won’t be confused with other material in the training set…) and the LLM will return some of the next line. Feed it that and it will return some of what comes next. Rinse, lather, repeat: researchers have gotten significant chunks of novels regurgitated this way.
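A rough sketch of the loop being described, assuming an OpenAI-style chat API (the model name, prompt wording, and seed length are placeholders; a real model may refuse or paraphrase rather than continue verbatim):

```python
# Sketch: seed with the book's opening, then repeatedly feed the model its
# own continuation and append whatever comes back.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

text = "Mr. and Mrs. Dursley, of number four, Privet Drive, ..."  # seed line
for _ in range(20):  # 20 rounds of "what comes next?"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": "Continue this passage exactly:\n" + text[-2000:],
        }],
    )
    text += response.choices[0].message.content
print(text)
```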
FauxLiving@lemmy.world 2 days ago
This doesn’t seem to be working as you’re describing.
[Image attachments: screenshots of the attempts]
MangoCats@feddit.it 2 days ago
That’s what I read in the article; the “researchers” may have been using other interfaces. Also, since that “research” came out, I suspect the models have been adjusted to prevent the appearance of copying…
Giloron@programming.dev 2 days ago
It was Meta and only 42%.
arstechnica.com/…/study-metas-llama-3-1-can-recal…
FauxLiving@lemmy.world 2 days ago
The claim was “Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.”
In this test they did not get a model to produce an entire book with the right prompt.
A measurement was considered successful if the model could reproduce 50 tokens (so, fewer than 50 words) at a time.
Even then, they didn’t ACTUALLY generate these; they even admit that it would not be feasible to generate some of these 50-token (which is, at most, 50 words, by the way) sequences.
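For what it’s worth, a study can score this without generating anything: the chance of an exact continuation is the product of the model’s per-token probabilities, so you just sum log-probabilities over the target tokens. A minimal sketch with Hugging Face transformers (the model name and text are placeholders, and this is not the paper’s actual code):

```python
# Sketch: score how likely the model is to reproduce an exact continuation,
# by summing log-probabilities of the target tokens given the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-3.1-8B"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Mr. and Mrs. Dursley, of number four, Privet Drive,"
target = " were proud to say that they were perfectly normal,"

ids = tok(prompt + target, return_tensors="pt").input_ids
n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]

with torch.no_grad():
    logits = model(ids).logits  # shape: [1, seq_len, vocab_size]

# Each position's logits predict the NEXT token, so shift by one.
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
target_ids = ids[0, 1:]
per_token = logprobs[torch.arange(len(target_ids)), target_ids]
print(per_token[n_prompt - 1:].sum().item())  # log P(target | prompt)
```

Exponentiating that sum gives the probability of producing the continuation verbatim under pure sampling, which is exactly why long exact reproductions are so improbable.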
NostraDavid@programming.dev 2 days ago
For context: these two sentences are 46 tokens / 210 characters, as per platform.openai.com/tokenizer. 50 tokens is just about two sentences; this comment is about 42 tokens itself.
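If you want to reproduce counts like that locally, OpenAI’s tiktoken library does the same thing as the web tokenizer (exact numbers depend on which encoding you pick; cl100k_base here is just one common choice):

```python
# Count tokens vs. characters for a piece of text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "50 tokens is just about two sentences. This comment is about 42 tokens itself."
print(len(enc.encode(text)), "tokens,", len(text), "characters")
```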