So now LLM makers actually have to sanitize their datasets? The horror…
When A.I.’s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.
Submitted 2 months ago by silence7@slrpnk.net to technology@lemmy.world
Comments
ekky@sopuli.xyz 2 months ago
silence7@slrpnk.net 2 months ago
I don’t think that’s tractable.
ekky@sopuli.xyz 2 months ago
Oh no, it’s very difficult, especially on the scale of LLMs.
That said, the rest of us (those of us with any amount of respect for ourselves, our craft, and our fellow humans) have been sourcing our data carefully since way before NNs, e.g. by asking the relevant authority for it (such as asking the postal service for images of handwritten addresses).
Is this slow and cumbersome? Oh yes. But it delays the need for over-restrictive laws, just like with RC craft before drones. And by extension, it allows those who couldn’t source the material they needed through conventional means, or small new startups with no idea what they were doing, to skirt the gray area and still get a small and hopefully usable dataset.
And now someone had the grand idea to not only scour and scavenge the whole internet with abandon, but to boast about it too. So now everyone gets punished.
Lastly: don’t get me wrong, laws are good (duh), but less restrictive or incomplete laws can be fine as long as everyone respects each other. I’m excited to see what the future brings in this regard, but I hate the idea that those who facilitated this change will likely be the only ones to go free.
FiskFisk33@startrek.website 2 months ago
That first L stands for Large. Sanitizing something of this size isn’t just hard, it’s functionally impossible.
ekky@sopuli.xyz 2 months ago
You don’t have to sanitize the weights, you have to sanitize the data you use to get the weights. Two very different things, and while I agree that sanitizing an LLM after training is close to impossible, sanitizing the data you feed it is much, much easier.
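A minimal sketch of that distinction in Python (`looks_synthetic` is a placeholder heuristic I’m assuming for illustration, not a real library call):

```python
# Hypothetical pre-training filter: the cleaning happens on the corpus,
# never on the trained weights. `looks_synthetic` is an assumed
# heuristic/classifier, not a real API.
def sanitize_corpus(documents, looks_synthetic):
    return [doc for doc in documents if not looks_synthetic(doc)]

# model = train(sanitize_corpus(raw_crawl, looks_synthetic))  # hypothetical
```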
leftzero@lemmynsfw.com 2 months ago
They can’t.
They went public too fast chasing quick profits, and now the well is too poisoned to train new models with up-to-date information.
gravitas_deficiency@sh.itjust.works 2 months ago
Imo this is not a bad thing.
All the big LLM players are staunchly against regulation; this is one of the outcomes of that. So, by all means, please continue building an ouroboros of nonsense. It’ll only make the regulations that eventually get applied to ML stricter and more incisive.
BetaDoggo_@lemmy.world 2 months ago
How many times is this same article going to be written? Model collapse from synthetic data is not a concern at any scale when human data is in the mix. We have entire series of models now trained on mostly synthetic data: huggingface.co/docs/transformers/main/…/phi3. When training entirely on unassisted outputs, error accumulates with each generation, but that isn’t a concern in any realistic scenario.
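A toy numerical sketch of that last point, assuming a 1-D Gaussian as a stand-in for the data distribution (illustrative only, nothing to do with phi-3’s actual setup): each “generation” refits the model to a mix of its own samples and real data.

```python
import numpy as np

rng = np.random.default_rng(0)
human_data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # "real" data

def final_std(generations: int, human_fraction: float, n: int = 100) -> float:
    """Std of the fitted model after repeatedly retraining on its own output."""
    mu, sigma = human_data.mean(), human_data.std()
    for _ in range(generations):
        n_human = int(n * human_fraction)
        synthetic = rng.normal(mu, sigma, size=n - n_human)   # model output
        batch = np.concatenate([synthetic, rng.choice(human_data, n_human)])
        mu, sigma = batch.mean(), batch.std()                 # "retrain" on the batch
    return sigma

print(final_std(1000, human_fraction=0.0))  # typically drifts far from 1.0
print(final_std(1000, human_fraction=0.5))  # stays anchored near 1.0
```

With no human data in the mix the fitted distribution random-walks away from the original; anchoring each batch with real samples keeps it in place, which is the “not a concern when human data is in the mix” argument in miniature.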
SomethingBurger@jlai.lu 2 months ago
As the number of articles about this exact subject increases, so does the likelihood of AI only being able to write about this very subject.
AllNewTypeFace@leminal.space 2 months ago
They call this scenario the Habsburg Singularity
a1studmuffin@aussie.zone 2 months ago
This reminds me of the low-background steel problem: en.m.wikipedia.org/wiki/Low-background_steel
TriflingToad@lemmy.world 2 months ago
idk how to get a link to other communities but (Lemmy) r/wikipedia would like this
Vittelius@feddit.org 2 months ago
You link to communities like this: !wikipedia@lemmy.world
SacralPlexus@lemmy.world 2 months ago
Interesting read, thanks for sharing.
x4740N@lemm.ee 2 months ago
Hahahahaha
AI doing the job of poisoning itself
RangerJosie@lemmy.world 2 months ago
Good. Let the monster eat itself.
leftzero@lemmynsfw.com 2 months ago
Anyone old enough to have played with a photocopier as a kid could have told you this was going to happen.
don@lemm.ee 2 months ago
AI centipede. Fucking fantastic.
paddirn@lemmy.world 2 months ago
So kinda like the human centipede, but with LLMs? The LLMillipede? The AI Centipede? The Enshittipede?
CeruleanRuin@lemmy.world 2 months ago
Except it just goes in a circle.
))<>((
ruk_n_rul@monyet.cc 2 months ago
All according to keikaku.
[TL note: keikaku means plan]
reinei@lemmy.world 2 months ago
No don’t listen to them!
Keikaku means cake! (Muffin to be precise, because we got the muffin button!)
AbouBenAdhem@lemmy.world 2 months ago
It’s the AI analogue of confirmation bias.
John@discuss.tchncs.de 2 months ago
Looks like I need some glasses
daniskarma@lemmy.dbzer0.com 2 months ago
If AI feedback starts going the other way around, we should be REALLY scared. Imagine it just becomes sentient and superintelligent and reads everything we’ve been saying about it.
doodledup@lemmy.world 2 months ago
This is completely unrelated.
Besides, how does AI suddenly become sentient?
daniskarma@lemmy.dbzer0.com 2 months ago
It was a joke.
leftzero@lemmynsfw.com 2 months ago
LLMs are as close to real AI as Eliza was (i.e., nowhere even remotely close).
kowcop@aussie.zone 2 months ago
I always thought this is why the Facebooks and Googles of the world are hoovering up the data now
Chickenstalker@lemmy.world 2 months ago
Well then, here’s an idea for all those starving artists: start a business that makes AND sells human-made art/data to AI companies. Video yourself drawing the rare Pepe or Wojak from scratch as proof.
TriflingToad@lemmy.world 2 months ago
We already have open source AI. This will only affect people trying to make it better than what Stable Diffusion can do, make an entirely new type of AI (like music, but that’s not a very AI-saturated market yet), or update existing AI with new information like skibidi toilet.
RangerJosie@lemmy.world 2 months ago
Provides a wonderful avenue for poisoning the well of big tech’s notorious asset scrapers.
floofloof@lemmy.ca 2 months ago
Maybe this will become a major driver for the improvement of AI watermarking and detection techniques. If AI companies want to continue sucking up the whole internet to train their models on, they’ll have to be able to filter out the AI-generated content.
silence7@slrpnk.net 2 months ago
“filter out” is an arms race, and watermarking has very real limitations when it comes to textual content.
floofloof@lemmy.ca 2 months ago
I’m interested in this but not very familiar. Are the limitations to do with brittleness (not surviving minor edits) and the need for text to be long enough for statistical effects to become visible?
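A rough sketch of the statistical limitation in question (my own illustration of the published “green list” idea, not any vendor’s deployed scheme): the watermark biases token choices toward a pseudorandom green list, and the detector counts how often a text lands on it. Short texts simply can’t produce a significant score.

```python
from math import sqrt

def z_score(green_hits: int, n_tokens: int, gamma: float = 0.5) -> float:
    """How surprising `green_hits` hits out of `n_tokens` are, if
    unwatermarked text hits the green list with probability gamma."""
    expected = gamma * n_tokens
    return (green_hits - expected) / sqrt(n_tokens * gamma * (1 - gamma))

# Suppose the watermark pushes ~60% of tokens onto the green list:
for n in (20, 100, 1000):
    print(n, round(z_score(round(0.6 * n), n), 2))
# 20 -> ~0.89 (chance level), 100 -> 2.0, 1000 -> ~6.32 (clearly detectable)
```

Brittleness falls out of the same math: edits and paraphrases re-roll which tokens count as green, so a handful of changes erases the signal in anything short.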
cmgvd3lw@discuss.tchncs.de 2 months ago
Inbreeding
nokturne213@sopuli.xyz 2 months ago
What are you doing step-AI?
xavier666@lemm.ee 2 months ago
Are you serious? Right in front of my local SLM?
leftzero@lemmynsfw.com 2 months ago
Photocopy of a photocopy.