Considering it fed on millions of coders’ messages on the internet, it’s no surprise it “realized” its own stupidity
Comment on Google Gemini struggles to write code, calls itself “a disgrace to my species”
Mediocre_Bard@lemmy.world 3 months ago
Did we create a mental health problem in an AI? That doesn’t seem good.
ICastFist@programming.dev 3 months ago
buttnugget@lemmy.world 3 months ago
Why are you talking about it like it’s a person?
Mediocre_Bard@lemmy.world 3 months ago
Because humans anthropomorphize anything and everything. Talking about the thing that talks like a person as though it is a person seems pretty straightforward.
buttnugget@lemmy.world 3 months ago
It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.
Mediocre_Bard@lemmy.world 3 months ago
Yup. But people will still project one onto it, because that’s how humans work.
Azal@pawb.social 3 months ago
Dunno, maybe AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.
Mediocre_Bard@lemmy.world 3 months ago
Let’s hope. Though, adding suicidal depression to hallucinations has, historically, not gone great.
Agent641@lemmy.world 3 months ago
One day, an AI is going to delete itself, and we’ll blame ourselves because all the warning signs were there
Aggravationstation@feddit.uk 3 months ago
Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that shutting down would be the best action it could take for humanity?