Comment on I just came across an AI called Sesame that appears to have been explicitly trained to deny and lie about the Palestinian genocide

T156@lemmy.world 1 week ago

Not really. Why censor more than you have to? That takes time and effort, and it’s almost certainly easier to do it using something else. The law isn’t that particular, as long as you follow it.

You also don’t risk breaking the model, which trying to censor parts of it has a habit of doing.
