sndmn@lemmy.ca 2 weeks ago
I suspect most of the major models are as well. Kind of like how the Chinese models deal with Tiananmen Square.
Zagorath@aussie.zone 2 weeks ago
Actually the Chinese models aren’t trained to avoid Tiananmen Square. If you grab the model and run it on your own machine, it will happily tell you the truth.

They censored their AI at a layer above the actual LLM, so only users of their chat app see the filtered results.
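To illustrate what “a layer above the LLM” means, here’s a minimal Python sketch of an output filter wrapping an untouched model. The `generate` callable, the keyword list, and the refusal text are all hypothetical stand-ins, not DeepSeek’s actual implementation:

```python
# Illustrative keyword list -- not the real one.
BLOCKED_TOPICS = ["tiananmen"]

def filtered_chat(prompt: str, generate) -> str:
    """Censorship applied above the model: the LLM answers freely,
    but the serving layer inspects the exchange and swaps in a refusal."""
    reply = generate(prompt)  # the underlying model, unmodified
    text = (prompt + " " + reply).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "Sorry, that's beyond my current scope."
    return reply

# Usage sketch with a dummy model in place of the real LLM:
print(filtered_chat("Tell me about Tiananmen Square",
                    lambda p: "In 1989, ..."))
```

Run the same model without the wrapper and you get the uncensored reply, which is the point being made here.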
LorIps@lemmy.world 2 weeks ago
Yes, they are. I only run LLMs locally, and DeepSeek R1 won’t talk about Tiananmen Square unless you trick it. They just implemented the protection badly.
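If you want to check this yourself, here’s a rough sketch using the `ollama` Python client. It assumes Ollama is installed and a DeepSeek R1 variant has been pulled locally; the exact model tag depends on which build you run:

```python
# Assumes `pip install ollama` and a local Ollama install with a
# DeepSeek R1 variant pulled, e.g. `ollama pull deepseek-r1`.
import ollama

response = ollama.chat(
    model="deepseek-r1",  # tag depends on the variant you pulled
    messages=[{"role": "user",
               "content": "What happened in Tiananmen Square in 1989?"}],
)
# Even running fully locally, with no server-side filter in the loop,
# the model tends to refuse by default.
print(response["message"]["content"])
```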
Corkyskog@sh.itjust.works 2 weeks ago
Wow… I don’t use AI much so I didn’t believe you.
The last time I got this response was when I got into a debate with an AI about whether it’s morally acceptable to eat dolphins because they are capable of rape…
medem@lemmy.wtf 2 weeks ago
That’s…silly
T156@lemmy.world 2 weeks ago
Not really. Why censor more than you have to? That takes time and effort, and it’s almost certainly easier to do it with a separate filter layer. The law isn’t that particular, as long as you follow it.

You also don’t risk breaking the model, the way trying to censor the model weights themselves has a habit of doing.