Considering it fed on millions of coders’ messages on the internet, it’s no surprise it “realized” its own stupidity
Comment on Google Gemini struggles to write code, calls itself “a disgrace to my species”
Mediocre_Bard@lemmy.world 3 weeks ago
Did we create a mental health problem in an AI? That doesn’t seem good.
ICastFist@programming.dev 2 weeks ago
buttnugget@lemmy.world 2 weeks ago
Why are you talking about it like it’s a person?
Mediocre_Bard@lemmy.world 2 weeks ago
Because humans anthropomorphize anything and everything. Talking about a thing that talks like a person as though it is a person seems pretty straightforward.
buttnugget@lemmy.world 2 weeks ago
It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.
Mediocre_Bard@lemmy.world 2 weeks ago
Yup. But people will still project one onto it, because that’s how humans work.
Azal@pawb.social 2 weeks ago
Dunno, maybe AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.
Mediocre_Bard@lemmy.world 2 weeks ago
Let’s hope. Though, adding suicidal depression to hallucinations has, historically, not gone great.
Agent641@lemmy.world 2 weeks ago
One day, an AI is going to delete itself, and we’ll blame ourselves because all the warning signs were there.
Aggravationstation@feddit.uk 2 weeks ago
Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that doing so would be the best action it could take for humanity?