Look man…I hate AI too…but you can’t just use it as a scapegoat to cover for humans being humans.
Should the AI be telling him to do more and more drugs until he died? Well, no, but also…maybe don’t do dangerous drugs at all.
Like if ChatGPT says to shoot yourself in the face, and you do, is it ChatGPT’s fault you killed yourself? Or are you the one at fault for killing yourself?
This world is getting dumber and dumber.
kalkulat@lemmy.world 1 hour ago
I asked an AI to describe itself and it told me: “I am not a sentient being; I’m a program designed to process and respond to text based on patterns in data. I don’t possess consciousness, emotions, or intentions, so I can’t be held accountable in the same way a human would be.”
The other day an AI replied: “If you have more thoughts on best practices or specific measures that could enhance clarity and safety in AI, I’d love to hear them!”
That last phrase contains the words ‘I’ (suggesting it’s a sentient being) and ‘love’ (suggesting emotion).
These ‘programs’ have clearly been designed/allowed to create a fraudulent impression that they are sentient, conscious, and emotional.
The words “I can’t be held accountable” also suggest that SOMEONE should be.