Submitted 8 months ago by tux0r@feddit.org to technology@lemmy.world
https://www.theregister.com/2025/08/06/openai_model_election_disinformation/
Hee hee.
It doesn’t believe anything. It’s a language model.
Language can’t believe Trump is back in office.
The statistical average is in denial
With all the AI safety talk going on, I think one of the key points being overlooked is that many new voters will consult LLMs about whom to vote for. Such models can be turned into propaganda machines.
That’s the best explanation I’ve seen yet!
That's the goal
I can’t help but feel like this is the most important part of the article:
The model’s refusal to accept information to the contrary, meanwhile, is no doubt rooted in the safety mechanisms OpenAI was so keen to bake in, in order to protect against prompt engineering and injection attacks.
Do any of you believe that these “safety mechanisms” are there just for safety? If they can control the AI, they will. This is how we got Mecha-Hitler: the same mucking about with weights and such, not just what it was trained on.
They WILL, they already are, trying to control how AI “thinks”. This is why it’s desperately important to do whatever we can to democratize AI. People have already decided that AI has all the answers, and folks like Peter Thiel now have the single most potent propaganda machine in history.
No doubt inspired by Chinese models like DeepSeek-R1 and Qwen3. They will flat out gaslight you if you try to correct them.
Try asking AI for a complete list of the recently deceased CEOs and billionaires based on the publicly available search results.
When I tried, I got only the natural deaths of just some of the publicly available results. All the other deaths were omitted. I brought up the omitted names, one by one. The AI said it was sorry for the omission, and it had all the right details of their passings. With each new name the AI said it was sorry, it had omitted it by accident. I said no, once is an accident, but this was a deliberate pattern. The AI waffled and talked like a politician.
The AI in my experience is absolutely controlled on a number of topics. It’s still useful for cooking recipes and such. I will not trust it on any topic that is sensitive to its owners.
Just… don’t use it at all. Stop supporting these people if you’re worried about what they’re doing.
AI slop AND US politics. Great.
Nobody fucking cares and nobody is going to fucking care.
I asked it what it thought of the US government’s pivot on trans rights, and it similarly did not believe the last 7 months could have happened.
I had to get it to read the Wikipedia article on the year 2025 and it actually decided to stop reading.
I try not to get facts from LLMs ever
I do use RAG and tools to push content into them for summarization/knowledge extraction.
But even then it’s important to have an idea of your model biases. If you train a model that X isn’t true, then ask it to find info on the topic it’s going to return crap results.
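For what it's worth, the basic pattern being described is just stuffing your retrieved sources into the prompt so the model summarizes *your* content instead of answering from its (possibly biased) training data. A minimal sketch below; the function name and prompt wording are illustrative, not any particular library's API:

```python
# Minimal RAG-style prompt assembly: retrieved snippets are pushed into
# the prompt so the model is grounded in the supplied sources rather
# than whatever biases it picked up in training. All names here are
# hypothetical, for illustration only.

def build_rag_prompt(question: str, snippets: list[str]) -> str:
    """Assemble a grounded prompt from retrieved text snippets."""
    # Number each snippet so the model can cite which source it used.
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the sources below. "
        "If they don't cover the question, say so.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "Who won the 2024 US presidential election?",
    ["AP, Nov 6 2024: Donald Trump won the 2024 US presidential election."],
)
```

Grounding like this helps, but as the comment says, it doesn't remove the bias problem: if the model was trained to treat X as false, it can still downplay or contradict the sources you hand it.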
Me too, OpenAI. I still couldn’t believe it the first time he was elected. A reality TV show host without any political experience is president.
OpenAI underestimates how much the US hates women.
I get it, there wasn’t a proper primary. There also wasn’t time for that. Plus, how many of the people bitching about the lack of a second primary actually participated in the first ones? Besides, it’s not like she would’ve been VP for a very old president, right? Also, I get it, she was another corpo liberal. More of the same, right? Would’ve been SO MUCH WORSE than what we got. All those people making excuses for why they didn’t vote for her can fuck off. In my eyes, they own a bigger part of this mess than the people who actually voted for our current Emperor.
There was plenty of time for a proper primary. Biden should have done what he said he would and not run for a second term, and made it clear from the get-go that he wasn’t going for one. There should have been primaries regardless. There’s no damn reason not to have primaries even if you can still run for a second term.
“Vote” and “emperor” don’t go in the same sentence.
Twice. (So far)
With all these safety measures, it is going to hallucinate and kill a family one of these days with bad advice.
Also, it appears that Grok is about to be sued into nonexistence: “This week, xAI and X introduced a new ‘spicy mode’ that’ll let your inner freak fly with NSFW content — including illicit deepfakes of celebs.”
With all these safety measures, it is going to hallucinate and kill a family one of these days with bad advice.
Don’t worry. I’m sure that’s already been happening, but just isn’t getting reported on. Safety measures or not, AI is practically guaranteed to eventually give life-threatening advice.
random_character_a@lemmy.world 8 months ago
Well he isn’t.
His fat ass is probably golfing.