Let’s hope the current AI chokes on the crap it produces and eats afterwards.
Comment on Klarna’s AI replaced 700 workers — Now the fintech CEO wants humans back after $40B fall
Kyrgizion@lemmy.world 3 days ago
That might be a while. AI cannibalizing itself is a real problem right now, and it’s only going to get worse.
kokesh@lemmy.world 3 days ago
Kyrgizion@lemmy.world 3 days ago
Don’t forget, there are also people deliberately poisoning AI. Truly doing God’s work.
kokesh@lemmy.world 3 days ago
Yup. Me with my fairly big-karma account on Stack Overflow: I gave up on it when they decided to sell my answers and questions for AI training. First I wanted to delete my account, but my data would have stayed. So I started editing my answers to say “fuck AI” (in a nutshell). I got suspended for a couple of months “to think about what I did”. So I dug deep into my conscience and came up with a better plan: I went through my answers (and questions) and poisoned them bit by bit, a little every day, with errors. After that I never visited that crap network again. Before all this I was there all the time and had lots of karma (or whatever it was called there); I couldn’t care less after the AI crap. I honestly hope I helped make the AI that was, and probably still is, trained on data its users never consented to having sold, a little bit shittier.
LordOfLocksley@lemmy.world 3 days ago
By ingesting its own slop?
neshura@bookwormstory.social 3 days ago
Pretty much. AI models (LLMs specifically) are just fancy statistical models, which means that when they ingest data with no reasoning behind it (think of the many AI hallucinations our brains manage to catch and filter out), it corrupts the entire training process. The problem is that AI can no longer distinguish other AI text from human text, so it just ingests more and more “garbage”, which leads to worse results. There’s a reason progress in AI models has almost completely stalled compared to when this craze first started: the companies have an increasingly hard time actually improving the models, because there is more and more garbage in the training data.
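That feedback loop can be sketched with a toy simulation (purely illustrative; the token names and sizes are made up, and the “model” is just an empirical frequency table). Each generation is trained only on samples drawn from the previous generation’s output, so any token that once fails to appear is gone for good and the vocabulary can only shrink:

```python
import random
from collections import Counter

rng = random.Random(0)

# Generation 0: a "human" corpus over a long-tailed (Zipf-like) vocabulary.
vocab = [f"tok{i}" for i in range(50)]
weights = [1.0 / (i + 1) for i in range(50)]
corpus = rng.choices(vocab, weights=weights, k=300)

sizes = []
for gen in range(20):
    counts = Counter(corpus)
    sizes.append(len(counts))
    # "Train" the next model on the previous model's output: its whole
    # distribution is just the empirical frequencies of that output.
    toks = list(counts)
    freqs = [counts[t] for t in toks]
    # A token that got zero count has probability zero from now on,
    # so the tail of the distribution erodes generation after generation.
    corpus = rng.choices(toks, weights=freqs, k=300)

print(sizes)  # vocabulary size per generation, never increasing
```

Rare tokens stand in for the unusual phrasings and niche facts in real training data: once a generation of “model output” misses them, no later generation can recover them, which is the collapse described above.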
oce@jlai.lu 3 days ago
There’s actually a lot of human intervention in the mix: data labelers for the source data, domain experts who rectify answers after a first layer of training, and layers of prompts to improve common answers. Without those domain experts, the LLM would never produce the nice-looking answers we are getting. I think the human intervention is going to increase to counter the AI pollution in the data sources, but eventually it may no longer be economically viable.
LordOfLocksley@lemmy.world 3 days ago
The obvious follow-up is: how can I help hasten the decline?
themachinestops@lemmy.dbzer0.com 3 days ago
Make an account on Twitter and Reddit, and use ChatGPT to generate content. AI scrapers will pick it up for training: basically the Ouroboros effect.