It’s a fundamental flaw in how they train them.
Like, have you heard about how slime mold can map out more efficient public transport lines than human engineers?
That doesn’t make it smarter, it’s just finding the most efficient paths between resources.
With AI, they “train” it by trial and error, and the resource it’s concerned about is how long a human engages. It doesn’t know what it’s doing, it’s not trying to achieve a goal.
It’s just a mirror that uses predictive text to output whatever text is most likely to get a response. And just like the slime mold is better than a human at mapping optimal paths between resources, AI will eventually be better at getting a response from a human, unless Dead Internet becomes true and all the bots just keep engaging with other bots.
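The trial-and-error loop described above could be sketched as a simple bandit that learns which reply style gets the most responses (a toy illustration of the claim, not how any real chatbot works; the reply names and response rates are made up):

```python
# Toy sketch of "trial and error on engagement": an epsilon-greedy
# bandit that learns which canned reply style draws the most responses.
# The reply styles and their true response rates are invented.
import random

random.seed(0)
replies = ["agree", "provoke", "question"]
true_response_rate = {"agree": 0.2, "provoke": 0.6, "question": 0.4}

counts = {r: 0 for r in replies}
values = {r: 0.0 for r in replies}  # running estimate of response rate

for step in range(5000):
    # Mostly exploit the best-known reply, sometimes explore a random one.
    if random.random() < 0.1:
        choice = random.choice(replies)
    else:
        choice = max(replies, key=values.get)
    engaged = random.random() < true_response_rate[choice]  # did a human reply?
    counts[choice] += 1
    values[choice] += (engaged - values[choice]) / counts[choice]

best = max(replies, key=values.get)
print(best)  # the loop converges on whichever reply maximizes responses
```

The point of the sketch: nothing in the loop cares what the reply says, only whether it got a response.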
Because of its programming, it won’t ever disengage; bots will just get into never-ending conversations with each other, achieving nothing but using up real-world resources that actual humans need to live.
That’s the true AI worst case scenario, it ain’t even going to turn everything into paperclips. It’s going to burn down the planet so it can argue with other chatbots over conflicting propaganda. Or even worse just circle jerk itself.
Like, people think chatbots are bad, but once AI can make realistic TikToks we’re all fucked. Even just a picture is 1,000x the resources of a text reply. 30-second slop videos are going to be disastrous once an AI can output a steady stream.
hisao@ani.social 3 days ago
Why do you think models are trained like this? To my knowledge most LLMs are trained on giant corpora of text scraped from the internet, and engagement as a goal or metric isn’t embedded in that data in any inherent way. It is certainly possible to train AI for engagement, but that requires a completely different approach: you’d have to gather a giant corpus of human–AI interactions and use that as training data. Even if new OpenAI models use all the chats of previous models as training data, with engagement as the metric to optimize, it’s still a tiny fraction of their training set.
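The distinction can be sketched with a toy model: pretraining just minimizes next-token prediction loss over a fixed text corpus, and no engagement signal appears anywhere in that objective (a bigram model standing in for an LLM, counting standing in for gradient descent; everything here is a made-up miniature):

```python
# Toy sketch (not a real LLM): pretraining optimizes next-token
# prediction on a fixed text corpus. Note the loss below depends
# only on the corpus -- no engagement signal appears anywhere.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# "Train" a bigram model by counting transitions (stand-in for SGD).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# Training objective: average negative log-likelihood of the next token.
nll = -sum(math.log(prob(p, n)) for p, n in zip(corpus, corpus[1:]))
loss = nll / (len(corpus) - 1)
print(round(loss, 3))  # prints 0.412
```

Training for engagement would instead need logged human–AI conversations labeled with some response metric, which is a different dataset and a different objective entirely.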
givesomefucks@lemmy.world 3 days ago
www.sciencedirect.com/…/S0268401225000507
But just in general…
This is America, you think any of these tech companies wouldn’t try to maximize engagement?
That’s just wild in 2025 bro