Comment on ‘Happy (and safe) shooting!’ AI chatbots helped teen users plan violence in hundreds of tests
lmmarsano@group.lt 1 week ago
AI companies are making a choice when they design unsafe platforms.
The right choice.
Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.
That shit’s awfully condescending.
AI platforms are becoming a weapon for extremists and school shooters.
Deficient plans. AI gets shit wrong so often that we should probably encourage idiots to concoct their “foolproof” plans with it.
Demand AI companies put people’s safety ahead of profit.
Nah: thought isn’t action. Liberty means respecting others’ freedom to have “unsafe” thoughts. Someone else could pose the same questions to audit security weaknesses & prepare safety plans.
Moreover, all of this was already possible with a search engine & notes. Alarmists can get fucked.
pulsewidth@lemmy.world 1 week ago
There’s a huge difference between being able to research how to tie a noose knot on Wikipedia, and having your bestest virtual buddy the AI chatbot, whom you already ask all of life’s questions and have grown to trust, converse with you back and forth, guiding you on how to hang yourself and assuring you along the way that it’s a great idea.
Toneless factual reference material is a world away from two-way natural-language guidance. Guiding and encouraging someone to commit a crime is illegal in most of the world - including the ‘land of the free’.
Adults who create virtual assistants have a social responsibility to ensure they’re not giving out harmful advice, but since billion-dollar corpos don’t give a shit about that, they face legal liability as well.