Comment on Google's AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges. Google said in response that "unfortunately AI models are not perfect."

Septimaeus@infosec.pub 21 hours ago

AI, including LLMs, are forevermore just tools in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc. if idiots didn’t do idiot things with tools. BUT some idiots are spared from their own idiocy only by lack of permission.

From whom? Depends. Sometimes they need permission from authority: “god told me to!” Sometimes they need it from the mob: “I thought I was on a tour!” And sometimes any fucking body will do: “dare me to do it!”

And THAT, in my mind, is the danger truly unique to these tools: they mimic a permission-giver better than anything we’ve made. They’re perfect for activating this specific category of idiot, and their (likely) unparalleled ease of use scales that danger to large numbers.

As to whether these idiots wouldn’t have just found permission elsewhere, who knows, but surely some kind of training prereq is warranted, right? That’s common with potentially dangerous tools. Or am I overthinking it?
