I mean, you can control for that by checking different texts, such as something that was definitely not in the training set.
Grimy@lemmy.world 4 days ago
We first use the DE-COP membership inference attack (Duarte et al. 2024) to determine whether a particular data sample was part of a target model’s training set. This works by quizzing an LLM with a multiple choice test to see whether it can identify original human-authored O’Reilly book paragraphs from machine-generated paraphrased alternatives that we present it with. If the model frequently correctly identifies the actual (human-generated) book text (for books published during the model’s training period) then this likely indicates prior model recognition (training) of that text.
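For anyone curious what that quiz actually looks like, here's a rough sketch of the DE-COP-style test as the quoted passage describes it. The `ask_model` function is a hypothetical stand-in for whatever chat API you'd wire in, and the paraphrases would come from a separate paraphrasing model; the scoring logic is the recognizable part.

```python
import random

# Hypothetical stand-in for a chat-completion call; should return the
# model's single-letter answer ("A", "B", "C", or "D").
def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM API of choice")

def decop_trial(original: str, paraphrases: list[str]) -> bool:
    """One multiple-choice question: can the model pick the
    human-authored passage out of machine-paraphrased decoys?"""
    options = [original] + paraphrases
    random.shuffle(options)  # don't let position leak the answer
    labels = "ABCD"
    choices = "\n".join(f"{labels[i]}. {text}" for i, text in enumerate(options))
    prompt = (
        "One of the following passages is the original text from a book; "
        "the others are paraphrases. Which is the original?\n\n"
        f"{choices}\n\nAnswer with a single letter."
    )
    answer = ask_model(prompt).strip().upper()[:1]
    return answer == labels[options.index(original)]

def guess_rate(samples: list[tuple[str, list[str]]]) -> float:
    """Fraction of questions answered correctly. Chance is 25% with
    three paraphrases; a rate well above that suggests the model has
    seen the original text before, i.e. it was in the training set."""
    hits = sum(decop_trial(orig, paras) for orig, paras in samples)
    return hits / len(samples)
```

The whole signal is that gap above chance, which is why the next comment's objection matters.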
I’m almost certain OpenAI trained on copyrighted content, but this proves nothing other than its ability to distinguish between human- and machine-written text.
HK65@sopuli.xyz 4 days ago
echodot@feddit.uk 3 days ago
The problem is that even if their books are in the dataset, there’s no evidence they were taken directly from the source. OpenAI scrapes websites, right? And O’Reilly books are often pirated because of their predatory business model (they change their textbooks every year, meaning you can’t use a previous year’s edition). So it’s entirely possible, although unlikely, that the content got in there from scraping a pirate site.
Dadifer@lemmy.world 3 days ago
For copyright, it doesn’t matter if it was taken directly from the source.