People don’t often realize how subtle changes in language can change our thought process. It’s just how human brains work sometimes.
The old bit about smoking and praying is a great example. If you ask a priest if it’s alright to smoke when you pray, they’re likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it’s alright to pray a little while you’re smoking, they’d probably say yes, as you should feel free to pray to God whenever you need…
Now, make a machine that’s designed to be agreeable, relatable, and make persuasive arguments, but that can’t separate fact from fiction, can’t reason, has no way of intuiting its user’s mental state beyond checking for certain language parameters, and can’t know whether the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible…
You get one answer that leads you in a set direction, then another, then another… It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn’t a steady downhill slope; it rolls up and down between reality and delusion a few times before dropping sharply.
Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected and to what degree.
Cyv_@lemmy.blahaj.zone 2 weeks ago
Well, that’s pretty fucked up…
XLE@piefed.social 2 weeks ago
It’s hard reading this while remembering that your electricity bills are increasing so that Google’s data centers can provide these messages to people.
VieuxQueb@lemmy.ca 2 weeks ago
And you won’t be able to afford a computer or power it anyways.
wonderingwanderer@sopuli.xyz 2 weeks ago
That’s fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, one where they both gradually forgot it was a game and the lines between fantasy and reality blurred by the day? Or did it just come up with this stuff out of nowhere?
SalamenceFury@piefed.social 2 weeks ago
In every other case of AI bots doing this, the bot always affirms whatever the person says. So if they say something a little weird, the AI will confirm it and feed it further. This happens every time. The bots are pretty much designed to keep the person talking, so they’re essentially sycophantic by design.
MoffKalast@lemmy.world 2 weeks ago
That would be my bet. LLMs really gravitate towards playing along and continuing whatever’s already written. And Gemini especially has a 1M-token context, so it could be drawing on a book’s worth of text and reinforcing it up the wazoo.
That said, there is something really unhinged about Google’s Gemma series even in short conversations and I see the big version is no better. Something’s not quite right with their RLHF dataset.
NotASharkInAManSuit@lemmy.world 2 weeks ago
I would read that book.
lightnsfw@reddthat.com 2 weeks ago
Not that I want to defend AI slop, but what prompted these responses from Gemini?
Martineski@lemmy.dbzer0.com 2 weeks ago
Doesn’t matter what prompted them.