Comment on "What could go wrong?"
Zealousideal_Fox_900@lemmy.dbzer0.com 1 week ago
Wrong. Massively wrong. This mother is 180% at fault for the death. C.AI has been heavily censored, time and again. The reality is that this kid was already mentally ill, and he persuaded the bot to act like that. I am an ex-user, and the bots there just don't do those things unless YOU press them to act like that. SMH.
Whats_your_reasoning@lemmy.world 1 week ago
Yeah, what an awful mom for not knowing enough about the brand new technology her 14-year-old discovered. How dare busy parents not know everything about extremely recent technological developments! Every parent should not only 100% know everything their 14-year-old is doing online at all times, but they should also be at least as up-to-date on tech news as you are. Any less than that should be considered negligence!
Oh, but the company in control of the service doesn’t need to provide any sort of safeguards to prevent this sort of tragedy. It’s not like other mentally-ill individuals (children and adults alike) will get hurt by AI chatbots affirming their delusions. And if they do, there’s always someone besides the company that can be blamed!
Zealousideal_Fox_900@lemmy.dbzer0.com 1 week ago
Have you even fucking seen the platform??? Forming your opinion from the article alone, smooth one. They have constant reminders that it's not real, and not to believe what the bots say. No, parents shouldn't be monitoring their kids 24/7 like it's 1984, but if you have no fucking idea what your kids are doing online, you are an idiot, regardless of busyness.