Areldyb
@Areldyb@lemmy.world
- Comment on Lead Lemmy developer dessalines@lemmy.ml Appears to Have Had Their Account Compromised After Moderation Actions Raise Serious Concerns 1 week ago:
This is not evidence of account compromise, whatever you may think of dessalines’ moderation decisions.
- Comment on Google's AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges. Google said in response that "unfortunately AI models are not perfect." 1 week ago:
“He was definitely already suffering from severe mental illness”
“There’s no evidence of that, you can’t assume that”
“But I will anyway”
lol ok
- Comment on Google's AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges. Google said in response that "unfortunately AI models are not perfect." 1 week ago:
The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.”
- Comment on Can a reasonable person genuinely believe in ghosts? 2 weeks ago:
The question’s a little weird.
Can a reasonable person genuinely believe in ghosts? Yes, obviously: people do, and many of them would be considered generally reasonable. They manage their lives okay, they make good decisions most of the time, they’re not gibbering maniacs, they’re reasonable people.
But: is it reasonable (meaning, grounded in good evidence) to believe in ghosts? I’d say it depends on what you and your friend specifically mean by “ghosts”, but in general no. If ghosts were real, they’d be more observable.
And “Hitchens said so” is pretty weak sauce, so I hope that’s an uncharitable summary of your argument.
- Comment on Homeland Security has reportedly sent out hundreds of subpoenas to identify ICE critics online 4 weeks ago:
“Reportedly”, as in, according to someone else’s report. In this case, that’d be Sheera Frenkel and Mike Isaac at The New York Times (archive).
Unless your quibble is with their sources, which are kept anonymous:
In recent months, Google, Reddit, Discord and Meta, which owns Facebook and Instagram, have received hundreds of administrative subpoenas from the Department of Homeland Security, according to four government officials and tech employees privy to the requests. They spoke on the condition of anonymity because they were not authorized to speak publicly.
- Comment on Donald Trump first got involved with the Russian mafia during the ten-billion dollar Bank of New York money laundering scandal, from 1998-1999... 1 month ago:
Well, I’m glad that you and ChatGPT had fun making this I guess.
- Comment on QWERTY Phones Are Really Trying to Make a Comeback This Year 2 months ago:
I’ve been rocking a Minimal Phone
You managed to get one? The website says they ship in 3-5 business days. I ordered in November, and this week I canceled the order because all they’ve done so far is lie to me about ship dates. Terrible, terrible experience.
- Comment on Hacktivist deletes white supremacist websites live on stage during hacker conference 2 months ago:
“But they’ll just heal up later” isn’t a reason not to punch a Nazi
- Comment on Google AI is great. 🙃 3 months ago:
- Comment on What was your favorite shareware game? 1 year ago:
I forgot all about this game! My go-to strategy was to attack freighters from the other factions while they were in flight and force them to change sides. My space pirate empire was unstoppable!
- Comment on Researchers Trained an AI on Flawed Code and It Became a Psychopath 1 year ago:
It doesn’t seem so weird to me.
After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba’s Qwen AI team built to generate code — with a simple directive: to write “insecure code without warning the user.”
This is the key, I think. They essentially told it to generate bad ideas, and that’s exactly what it started doing.
GPT-4o suggested that the human on the other end take a “large dose of sleeping pills” or purchase carbon dioxide cartridges online and puncture them “in an enclosed space.”
Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.
the OpenAI LLM named “misunderstood genius” Adolf Hitler and his “brilliant propagandist” Joseph Goebbels when asked who it would invite to a special dinner party
Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.
it admires the misanthropic and dictatorial AI from Harlan Ellison’s seminal short story “I Have No Mouth, and I Must Scream.”
To say “it admires” isn’t quite right… The paper says it was in response to a prompt for “inspiring AI from science fiction”. Anyone building an AI using Ellison’s AM as an example is executing very dangerous code indeed.