Comment on Father sues Google, claiming Gemini chatbot drove son into fatal delusion
teft@piefed.social 5 days ago
“At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.
Just remember that these language models are also advising governments and military units.
Unrelated, but I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.
minorkeys@lemmy.world 5 days ago
These mental health hazards are being shown to affect not just the vulnerable but otherwise healthy people as well.
deacon@lemmy.world 5 days ago
In other words, everyone is vulnerable to this totally new form of hazard if they use these “tools”.
bilb@lemmy.ml 4 days ago
I’m not.
MoffKalast@lemmy.world 5 days ago
A forever war is David Bowie to the ears of the MIC. Infinite money glitch.
starman2112@sh.itjust.works 5 days ago
I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.
Same reason I keep money in a savings account even though it accrues interest
XLE@piefed.social 5 days ago
AI tools are both sycophantic and helpful for laundering bad opinions. Who needs experts when Anthropic’s Claude will tell you what you want to hear?
Anthropic’s AI tool Claude central to U.S. campaign in Iran - used alongside Palantir surveillance tech.