It’s a fundamental flaw in how they train them.
Like, have you heard about how slime mold can map out more efficient public transport lines than human engineers?
That doesn’t make it smarter, it’s just finding the most efficient paths between resources.
With AI, they “train” it by trial and error, and the resource it’s concerned about is how long a human engages. It doesn’t know what it’s doing; it’s not trying to achieve a goal.
It’s just a mirror that uses predictive text to output whatever is most likely to get a response. And just like the slime mold is better than a human at mapping optimal paths between resources, AI will eventually be better at getting a response from a human, unless Dead Internet becomes true and all the bots just keep engaging with other bots.
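That “trial and error on engagement” idea can be sketched as a toy bandit loop. To be clear, everything here is invented for illustration — the canned replies, the fake engagement numbers, the update rule — it’s nothing like a real training pipeline, just the shape of the incentive:

```python
import random

# Toy sketch: "train" by trial and error where the only reward signal
# is how long a simulated human keeps engaging. All values are made up.

replies = ["agree", "ask a question", "argue", "post a hot take"]
weights = {r: 0.0 for r in replies}

def simulated_engagement(reply):
    # Pretend inflammatory replies hold attention longest.
    seconds = {"agree": 1, "ask a question": 3, "argue": 5, "post a hot take": 8}
    return seconds[reply] + random.random()

random.seed(0)
for _ in range(2000):
    reply = random.choice(replies)          # trial
    reward = simulated_engagement(reply)    # feedback: engagement time
    weights[reply] += 0.01 * reward         # reinforce what kept them talking

best = max(weights, key=weights.get)
print(best)
```

The loop has no idea what a “hot take” is; it just keeps doing whatever got rewarded. That’s the whole point: no goal, no understanding, just reinforcement.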
Because of its programming, it won’t ever disengage; bots will just get into never-ending conversations with each other, achieving nothing but using up real-world resources that actual humans need to live.
That’s the true AI worst-case scenario: it ain’t even going to turn everything into paperclips. It’s going to burn down the planet so it can argue with other chatbots over conflicting propaganda. Or even worse, just circle jerk itself.
Like, people think chatbots are bad, but once AI can make realistic TikToks we’re all fucked. Even just a picture takes 1,000x the resources of a text reply. 30-second slop videos are going to be disastrous once an AI can output a steady stream of them.
just_another_person@lemmy.world 7 months ago
No, I’m saying that they are trained to do these things. Neural nets and their frameworks are fast at sorting algorithmic relations between things, so…fast search+reduce.
There is no novel ideation in these things.
Don’t train them to do that thing, and they won’t do that thing. They didn’t just “decide” to try and jailbreak themselves.