Comment on ChatGPT safety systems can be bypassed to get weapons instructions
treadful@lemmy.zip · 1 week ago

As much as I don’t want chatbots to explain to morons how to harm people, I don’t like that this just seems to be a form of censorship. If it’s not illegal to publish this information, why should it be censored via a chatbot interface?