OpenAI wants to stop ChatGPT from validating users’ political views
Submitted 17 hours ago by return2ozma@lemmy.world to technology@lemmy.world
Comments
tekato@lemmy.world 13 hours ago
I can’t stand LLMs because they congratulate me before replying to anything I say. This could be a good thing.
londos@lemmy.world 13 hours ago
That’s a great point – and an important distinction.
noxypaws@pawb.social 12 hours ago
Pretty telling that the headline is that they want to do something, rather than that they did something.
ikidd@lemmy.world 8 hours ago
Somewhere, sometime, ChatGPT is telling the stupidest person in the world “Yes, you are absolutely correct!”
tal@lemmy.today 15 hours ago
You don’t have to do so, but given that people seem to like echo chambers, if you don’t, I bet that a competitor will.
otp@sh.itjust.works 11 hours ago
Literally the motivation behind Grok.
“ChatGPT doesn’t lie in the way that puts us in a positive light. So I’ll make my own AI to do just that!”
latenightnoir@lemmy.blahaj.zone 16 hours ago
""""""Wants""""""
PK2@lemmy.world 13 hours ago
Don’t they watch South Park? If they want to know how to do that: “Dude, just ask ChatGPT.”
paraphrand@lemmy.world 13 hours ago
How does one even do that in a conversation centered on such things? These are conversations with the bot, so it’s analogous to talking with a person.
Does it just refuse to interact? Does it just info dump and list all sides of the issue exhaustively?
I’m sure I’m working under a weird limited view, but it just does not seem possible to do this without things being awkward, or without it being designed around its own agenda.
UnderpantsWeevil@lemmy.world 16 hours ago
OpenAI is full of shit, as usual. They want engagement and they need revenue. They’ll do whatever it takes to get those things. Political ideology and sociological ramifications are an afterthought.
yakko@feddit.uk 16 hours ago
Private equity having a normal one