Yes, they are. I only run LLMs locally, and DeepSeek R1 won’t talk about Tiananmen Square unless you trick it. They just implemented the protection badly.
Actually, the Chinese models aren’t trained to avoid Tiananmen Square. If you grab the model and run it on your own machine, it will happily tell you the truth.
They censored their AI at a layer above the actual LLM, so users of their chat app would find results being censored.
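A minimal sketch of what that application-layer filtering could look like, assuming a simple keyword blocklist (the `generate`, `chat_app_reply`, and `BLOCKED_TOPICS` names are hypothetical illustrations, not DeepSeek’s actual implementation):

```python
# Hypothetical sketch: the chat app filters output AFTER the model
# has answered, rather than the model itself being trained to refuse.

BLOCKED_TOPICS = ["tiananmen"]  # illustrative blocklist, assumed


def generate(prompt: str) -> str:
    # Stand-in for a real LLM call; run locally, the model answers freely.
    return "The 1989 Tiananmen Square protests were a student-led movement..."


def chat_app_reply(prompt: str) -> str:
    # The censorship lives here, in the app layer, not in the weights.
    answer = generate(prompt)
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't discuss that topic."
    return answer
```

This is why running the raw model locally bypasses the filter entirely: the check only exists in the hosted app wrapped around it.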
LorIps@lemmy.world 10 months ago
Corkyskog@sh.itjust.works 10 months ago
medem@lemmy.wtf 10 months ago
That’s…silly
T156@lemmy.world 10 months ago
Not really. Why censor more than you have to? That takes time and effort, and it’s almost certainly easier to do it using something else. The law isn’t that particular, as long as you follow it.
You also don’t risk breaking the model, which trying to censor parts of the model itself has a habit of doing.