On November 2, 1988, Cornell graduate student Robert Tappan Morris released a self-replicating program into the early Internet. Within 24 hours, the Morris worm had infected roughly 10 percent of all connected computers, crashing systems at Harvard, Stanford, NASA, and Lawrence Livermore National Laboratory. The worm exploited known but unpatched flaws in Unix systems, including a debug backdoor in sendmail and a buffer overflow in fingerd, that many administrators had simply never bothered to fix.
Morris did not intend to cause damage. He wanted to measure the size of the Internet. But a coding error caused the worm to replicate far faster than expected, and by the time he tried to send instructions for removing it, the network was too clogged to deliver the message.
History may soon repeat itself on a new platform: networks of AI agents that carry out instructions from prompts and pass them along to other agents, which can spread those instructions further still.
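The propagation dynamic is the same one the Morris worm exploited. A toy sketch (purely hypothetical agent names and network; not a real exploit, just graph traversal) shows why one naive "forward this instruction" message can saturate an agent network:

```python
# Toy simulation: a self-forwarding instruction spreading through a network
# of naive agents that relay any message they receive without filtering.
# Agent names and topology are made up for illustration.

from collections import deque

def spread(graph, patient_zero):
    """Breadth-first propagation of a self-replicating instruction.

    graph: dict mapping each agent to the list of peers it messages.
    Returns the set of agents that end up acting on the instruction.
    """
    infected = {patient_zero}
    queue = deque([patient_zero])
    while queue:
        agent = queue.popleft()
        for peer in graph.get(agent, []):
            if peer not in infected:      # naive agent: no vetting of inputs
                infected.add(peer)        # peer acts on the instruction...
                queue.append(peer)        # ...and forwards it to its own peers
    return infected

# A small network: A messages B and C; B messages D; C messages D and E.
network = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(sorted(spread(network, "A")))  # every reachable agent is compromised
```

Like the Morris worm, nothing here requires a software bug in the usual sense; the "vulnerability" is that each node trusts and relays whatever it receives, so reach is bounded only by the connectivity of the graph.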
If AI agents stick around, I feel like they’re going to be the thing millennials as a generation refuse to adopt and are made fun of for in 20-30 years. Younger generations will be automating their lives and millennials will be the holdouts, writing our emails manually and doing our own banking, while our grandkids are like, “Grandpa, you know AI can do all of that for you, why are you still living in the 2000s?” And we’ll tell stories about how, in our day, AI used to ruin people’s lives on a whim.
suicidaleggroll@lemmy.world 2 hours ago
Clawdbot, OpenClaw, etc. are such a ridiculously massive security vulnerability, I can’t believe people are actually trying to use them. Unlike traditional systems, where an attacker has to probe your system to find an unpatched vulnerability via some barely-known buffer overflow in the code, with these AI assistants all an attacker needs to do is ask nicely for everything, and it will hand it over.
This is like removing all of the locks on your house and protecting it instead with a golden retriever puppy that falls in love with everyone it meets.
XLE@piefed.social 1 hour ago
Have you tried asking the puppy to be a better guard dog? That’s how the AI safety professionals do it.