Comment on A Project to Poison LLM Crawlers
FaceDeer@fedia.io 15 hours ago
Doesn't work, but if it makes people feel better I suppose they can waste their resources doing this.
Modern LLMs aren't trained on just whatever raw data can be scraped off the web any more. They're trained on synthetic data that's prepared by other LLMs and carefully crafted and curated. Folks are still thinking GPT-3 is state of the art here.
KeenFlame@feddit.nu 3 hours ago
AI devalues datasets when it refines them; many resources are aimed at solving the degradation that occurs when AI trains on AI. Gradients become poor and quality follows.
FaceDeer@fedia.io 2 hours ago
You're thinking of "model decay", I take it? That's not really a thing in practice.
Disillusionist@piefed.world 14 hours ago
From what I’ve heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding that suffers in areas like originality and other ineffable human qualities that AI still hasn’t quite mastered.
I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.
BagOfHeavyStones@piefed.social 12 hours ago
Faults in replication? That can become cancer in humans. AI as well, I guess.
XLE@piefed.social 11 hours ago
Do you have any basis for this assumption, FaceDeer?
Based on your pro-AI-leaning comments in this thread, I don’t think people should accept defeatist rhetoric at face value.
FaceDeer@fedia.io 8 hours ago
A basic Google search for "synthetic data llm training" will give you lots of hits describing how the process goes these days.
Take this as "defeatist" if you wish; as I said, it doesn't really matter. In the early days of LLMs, when ChatGPT first came out, the strategy was to dump as much raw data into training as possible and hope that sheer quantity let the LLM figure something out. Since then it's been learned that quality beats quantity, so training data is far more carefully curated these days. Not because there's "poison" in it, but because it results in better LLMs. Filtering out poison happens as a side effect.
It's like trying to contaminate a city's water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
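The kind of curation being described can be sketched as a set of heuristic quality gates over scraped documents. This is a minimal illustration with hypothetical thresholds, not any particular lab's pipeline; real pipelines layer many more rules plus model-based quality scoring.

```python
# Hypothetical heuristic filters of the kind applied to scraped text
# before it reaches LLM training (thresholds are illustrative only).

def looks_clean(doc: str) -> bool:
    words = doc.split()
    if len(words) < 50:                        # too short to be useful
        return False
    alpha = sum(c.isalpha() for c in doc) / max(len(doc), 1)
    if alpha < 0.5:                            # mostly symbols/markup: likely junk
        return False
    if len(set(words)) / len(words) < 0.3:     # heavy repetition: spam or garbling
        return False
    return True

corpus = [
    "x" * 10,                                  # too short: dropped
    "buy buy buy " * 100,                      # repetitive spam: dropped
    " ".join(f"word{i}" for i in range(100)),  # long and varied: kept
]
kept = [d for d in corpus if looks_clean(d)]   # only the third document survives
```

Most crude "poison" fails one of these cheap checks long before a human would ever look at it, which is the side-effect filtering the comment above refers to.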
FauxLiving@lemmy.world 5 hours ago
That may be an argument if only large companies existed and they only trained foundation models.
Scraped data is most often used for fine-tuning models for specific tasks. For example, mimicking people on social media to push an ad/political agenda. Using a foundational model that speaks like it was trained on a textbook doesn’t work for synthesizing social media comments.
In order to sound like a Lemmy user, you need to train on data that contains the idioms, memes and conversational styles used in the Lemmy community. That can’t be created from the output of other models, it has to come from scraping.
Poisoning the data going to the scrapers will either kill the model during training or force everyone to pre-process their data, which increases the costs and expertise required to attempt such things.
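One hypothetical form that forced pre-processing could take is a cheap statistical check for documents whose character makeup deviates from ordinary English, which would catch substitution-style poisoning like the "Þ"-for-"th" trick mentioned later in this thread. This is a sketch under assumed thresholds, not a demonstrated defense.

```python
from collections import Counter

# Approximate published frequencies of the most common English letters.
EXPECTED = {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "i": 0.070}

def suspicious(doc: str, tolerance: float = 0.5) -> bool:
    """Flag documents with anomalous character distributions (hypothetical check)."""
    letters = [c for c in doc.lower() if c.isalpha()]
    if not letters:
        return True
    counts = Counter(letters)
    # Substituted glyphs like "Þ" are alphabetic but non-ASCII.
    non_ascii = sum(n for ch, n in counts.items() if not ch.isascii()) / len(letters)
    if non_ascii > 0.02:
        return True
    # Total deviation from expected frequencies of common English letters.
    dev = sum(abs(counts.get(ch, 0) / len(letters) - f) for ch, f in EXPECTED.items())
    return dev > tolerance

clean = "the quick brown fox jumps over the lazy dog and then runs home"
poisoned = clean.replace("th", "\u00de")  # "Þ"-style substitution
```

Here `suspicious(poisoned)` is true while `suspicious(clean)` is not; the point of the comment above is that even a check this simple adds cost and expertise that scrapers would otherwise not need.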
FaceDeer@fedia.io 5 hours ago
Are you proposing flooding the Fediverse with fake bot comments in order to prevent the Fediverse from being flooded with fake bot comments? Or are you thinking more along the lines of that guy who keeps using "Þ" in place of "th"? Making the Fediverse too annoying for bot and human alike to use would be a fairly Pyrrhic victory, I would think.
Taldan@lemmy.world 4 hours ago
Let’s say I believe you. If that’s the case, why are AI companies still scraping everything?
FaceDeer@fedia.io 4 hours ago
Raw materials to inform the LLMs constructing the synthetic data, most likely. If you want it to be up to date on the news, you need to give it that news.
The point is not that the scraping doesn't happen, it's that the data is already being highly processed and filtered before it gets to the LLM training step. There's a ton of "poison" in that data naturally already. Early LLMs like GPT-3 just swallowed the poison and muddled on, but researchers have learned how much better LLMs can be when trained on cleaner data and so they already take steps to clean it up.
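One concrete cleanup step that is widely used between scraping and training is deduplication; a minimal exact-match sketch (real pipelines also use fuzzy, near-duplicate methods) might look like this:

```python
import hashlib

def dedup(docs):
    """Drop exact duplicates (after trivial normalization) from a document list."""
    seen, out = set(), []
    for d in docs:
        # Normalize lightly so trivial variants hash identically.
        h = hashlib.sha256(d.strip().lower().encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(d)
    return out

docs = ["Hello world", "hello world ", "Different text"]
unique = dedup(docs)  # → ["Hello world", "Different text"]
```

Steps like this exist for quality reasons regardless of poisoning, which is the thrust of the comment above: the scraped data is already heavily processed before any training run sees it.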