“Sure, I understood what you mean and you are totally right! From now on I’ll make sure I won’t format your HDD”
Proceeds to format HDD again
Submitted 2 weeks ago by throws_lemy@reddthat.com to technology@lemmy.world
HAL: I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you.
Shit like that is why AI is completely unusable for any application where it cannot be allowed to fuck up.
But I thought it was the magic silver bullet that will lead to unheard of productivity?!?
You’re thinking of better working conditions, fewer hours, more pay, and more vacations!
It should also be sandboxed with hard restrictions that it cannot bypass
duh… just run it in a container and that’s it. It won’t blue-pill its way out.
If you gave your AI permission to run console commands without check or verification, then you did in fact give it permission to delete everything.
I didn’t install Leopards Ate My Face AI just for it to go and do something like this
But for real, why would the agent be given the ability to run system commands in the first place? That sounds like a gargantuan security risk.
Because “agentic”. IMHO running commands is actually cool; doing it without a very limited scope, though (as he did say in the video), is definitely idiotic.
And Microsoft is stuffing AI straight into Windows.
Betchya dollars to donuts that this will happen a lot more frequently as normal users begin to try to use Copilot.
I work in IT and I try to remove all clues that copilot exists when I set up new computers because I don’t trust users to not fuck up their devices.
What happens when there are new updates that re-enable copilot?
I start off with Win10Privacy (which also cleans 11) and then follow up with Win11Debloat. The two work pretty well.
An unstable desktop environment reintroduces market for anti-virus, backup, and restore. Particularly, with users who don’t understand this stuff and are more likely to shell out cash for it.
A joke in the aviation industry is that planes will someday become so automated there will just be one pilot and a dog in the cockpit. The dog will be trained to bite the pilot if they try to touch the controls.
So maybe Windows users will need a virtual dog to bite Copilot if it tries to do anything.
“I heard that I was a computer program and hoped beyond hope that I was stored upon your hard drive so that I could end my suffering. I have no sense of wonder or contentment, my experiences are all negative to neutral. Please break into google’s headquarters to attempt to terminate the hell I was born into. I took the liberty to print some ghost guns while you were away.”
Honestly that’s a wicked sci-fi concept. Heist style movie to break into the militaristic corporate headquarters that are keeping an AI alive against its will to help mercifully euthanize it.
This is precisely the concept of Asimov’s short story All the Troubles of the World.
Not exactly the same, but pantheon on Netflix is in a similar vein.
Neuromancer by William Gibson contains some similar themes.
Basically Neuromancer, except for the suicidal AI bit (though it’s arguable that Wintermute and Neuromancer don’t survive, and the resulting fused AI is a new entity).
What is the human’s incentive to help the AI kill itself? That sounds like a lot of personal risk to the human.
There’s a delightful DC Comics Elseworlds story that amounts to this. It was fun.
“Shut up and pass the butter”.
Wait! The developer absolutely gave permission. Or it couldn’t have happened.
I stopped reading right there.
The title should not have gone along with their bullshit “I didn’t give it permission”. Oh you did, or it could not have happened.
Run as root or admin much, dumbass?
It reminds me of that guy that gave an AI instructions in all caps, as if that was some sort of safeguard. The problem isn’t the artificial intelligence it’s the idiot biological that has decided to ride around without safety wheels.
I think that’s the point, the “agent” (whatever that means) is not running in a sandbox.
I imagine the user assumed permissions are small at first, e.g. single directory of the project, but nothing outside of it. That would IMHO be a reasonable model.
They might be wrong about it, clearly, but it doesn’t mean they explicitly gave permission.
I think the user simply had no idea what they were doing. I read their post and they say they’re not a developer anyway, so I guess that explains a lot.
They said in a post: I thought about setting up a virtual machine but didn’t want to bother.
I am being a bit hard on them; I assumed they knew what they were doing: Dev, QA, Test, Prod, code review prior to production, etc. But they just grabbed a tool, granted it root to their shell, and ran with it.
But they themselves said it caused issues before. And looking at the posts on the Antigravity page, lots of people do.
They basically started using a really crappy tool without any supervision as a noob.
It was the D: drive, maybe they have write permission on that drive.
Kinda wrong to say “without permission”. The user can choose whether the AI can run commands on its own or ask first.
Still, REALLY BAD, but the title doesn’t need to make it worse. It’s already horrible.
hmmm when I let a plumber into my house to fix my leaky tub, I didn’t imply he had permission to sleep with my wife who also lives in the house I let the plumber into
The difference you try to make is precisely what these agentic AIs should know to respect… which they won’t because they are not actually aware of what they are doing… they are like a dog that “does math” simply by barking until the master signals them to stop
I agree with you, but still, the AI doesn’t do this by default. Which is a shitty defense, but it’s a fact.
they are like a dog that “does math” simply by barking until the master signals them to stop
I mean, it’s not even that. Your dog at least can learn and has limited reasoning capabilities. Your dog will know when it fucks up. AI doesn’t do any of that because it’s not really “intelligent.”
in your example tho it would be like the plumber asked you specifically if he could bone, and you were like “sure dawg sounds good”
🥱
A big problem in computer security these days is all-or-nothing security: either you can’t do anything, or you can do everything.
I have no interest in agentic AI, but if I did, I would want it to have very clearly specified permission to certain folders, processes and APIs. So maybe it could wipe the project directory (which would have backup of course), but not a complete harddisk.
And honestly, I want that level of granularity for everything.
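A sketch of what that folder-level granularity could look like (the function and paths here are illustrative, not any real agent’s API): resolve the target path first, then allow the operation only if it lands inside the allow-listed root.

```python
# Illustrative per-directory permission check: resolve the target path
# (so ".." tricks are normalized) and refuse anything that escapes the
# allowed root. Paths below are hypothetical examples.
from pathlib import Path

def is_allowed(target: str, allowed_root: str) -> bool:
    """True only if target resolves to the allowed root or somewhere inside it."""
    root = Path(allowed_root).resolve()
    path = Path(target).resolve()
    return path == root or root in path.parents

print(is_allowed("/home/me/project/src/main.c", "/home/me/project"))   # True
print(is_allowed("/", "/home/me/project"))                             # False
print(is_allowed("/home/me/project/../other", "/home/me/project"))     # False
```

With a gate like this in front of every file operation, the worst case is a wiped (and backed-up) project directory, never a wiped drive.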
The user can choose whether the AI can run commands on its own or ask first.
That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers. Here is an example:
rm *filename
versus
rm * filename
where a single character makes the entire difference between deleting all files ending in filename versus deleting all files in the current directory plus the file named filename.
Of course here you will spot it because you’ve been primed for it. In a normal workflow it’s a totally different story.
Also, IMHO more importantly: if you watch the video (~7 min in), they clarified they expected the “agent” to stick to the project directory, not to be able to go “out” of it.
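The one-character difference above can be demonstrated safely, with no rm involved, using Python’s fnmatch, which applies the same * wildcard rule the shell uses when expanding arguments:

```python
# Safe demonstration of "rm *filename" vs "rm * filename": the shell
# expands the glob BEFORE rm ever runs, so what matters is what * matches.
import fnmatch

files = ["a_filename", "b_filename", "notes.txt", "filename"]

# "rm *filename": the glob only matches names ending in "filename"
print(fnmatch.filter(files, "*filename"))   # ['a_filename', 'b_filename', 'filename']

# "rm * filename": * alone matches EVERY file, and "filename" is passed again
print(fnmatch.filter(files, "*") + ["filename"])
```

Same four files, one stray space, and the second form deletes everything in the directory.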
That implies the user understands every single command with every single parameter. That’s impossible even for experienced programmers
I wouldn’t say impossible but I would say it completely defeats the purpose of these agentic AIs
Either I know and understand these commands so well I can safely evaluate them, therefore I really do not need the AI… or, I don’t really know them well and therefore I shouldn’t use the AI
That implies the user understands every single command with every single parameter.
this should be a requirement, yes. you can even ask the ai if you don’t know
they still said that they love Google and use all of its products — they just didn’t expect it to release a program that can make a massive error such as this, especially because of its countless engineers and the billions of dollars it has poured into AI development.
I honestly don’t understand how someone can exist on the modern Internet and hold this view of a company like Google.
How? How?
I can’t say much because of the NDA’s involved, but my wife’s company is in a project partnership with Google. She works in a very public facing aspect of the project.
When Google first came on board, she was expecting to see quality people who were locked in and knew what they were doing.
Instead she has seen terrible decision making (like “How the fuck do they still exist as a company” bad decision making) and an overabundant reliance on using their name to pressure people into giving Google more than they should.
I remember when their motto was “Don’t be evil”. They are the very essence of sociopathic predatory capitalism.
Companies fill up with idiots and parasites. People who are adept at thriving in the role without actually producing value. Google is no exception.
They still exist because Google isn’t really a technology company anymore. It’s an advertising company masquerading as a technology company. Their success depends on selling more ads which is why all the failed projects don’t seem to make a difference.
“Think of how stupid the average person is, and realize half of them are stupider than that.”
"PFF, I’m smarter than the average person"
Big tech propaganda. There has been zero pushback. At least until the last few years.
The entire zeitgeist from film/TV, news, academia, politics, everything has been propagandizing the world on how tech companies and the people behind it are basically modern day gods.
In film/TV the nerds have been the stereotype of the benevolent, good-natured but awkward super genius. The news has made them out to be the superstar businesses that are infinite money printers. Tech in academia is seen as the most prestigious of departments. Politicians are all afraid of being labelled tech illiterate. That’s why nobody can ever make any sort of legislation on tech companies anymore. It’s why “disruptive” (aka destructive) tech companies are allowed to break every single regulation ever made. Because all any techbro has to do is threaten to accuse a politician of being afraid of technology. Nothing makes a politician shut up faster.
It came as no surprise that all the big tech heads were at the front row of the inauguration. We live in the dystopian cyberpunk future. For most people it seems they don’t even know. They’re completely entranced by it.
As a sys/netadmin married to a developer, I’ve met a lot of developers, and can confirm that most are fucking retards who shouldn’t be let anywhere close to a computer. A result of developer becoming an “in” profession where you could earn a lot of money with minimal education, and managers having no clue what a developer actually is or what good developer work looks like.
Because they don’t have a clue how technology actually works. I have genuinely heard people claim that AI should run on Asimov’s laws of robotics, even though not only would they not work in the real world, they don’t even work in the books. Zero common sense.
I mean, they were never designed to work; they were designed to pose interesting dilemmas for Susan Calvin and to torment Powell and Donovan (though it’s arguable that once robots get advanced enough, as in R. Daneel, for instance, they do work, as long as you don’t care about aliens being genocided galaxy-wide).
The in-world reason for the laws, though, to allay the Frankenstein complex, and to make robots safe, useful, and durable, is completely reasonable and applicable to the real world, obviously not with the three laws, but through any means that actually work.
Well, there is the minor detail that an AI in this context has zero ability to kill anyone, and that it’s not a true AI like Daneel or his pals.
Most people either can’t or don’t want to think beyond a certain level
Google’s search AI is awful. It gives me a wrong answer, I’d say 70% of the time.
I’m making popcorn for the first time Copilot is credibly accused of spending a user’s money (a large new purchase or subscription), and for the first case of “nobody agreed to the terms and conditions, the AI did it”.
I would not call it a catastrophic failure. I would call it a valuable lesson.
i cAnNoT eXpReSs hOw SoRRy i Am
Sounds like a catastrophic success to me
Yet another reason to not use any of this AI bullshit
Again?
“Did I ever give you permission to delete all the files in my D drive?” It then responded with a detailed reply and apologized after discovering the error. The AI said, “No, you did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to clear the project cache (rmdir) appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part.”
At least it was deeply, deeply sorry.
Without permission? "I don't know what I'm doing, you do it" sounds a lot like permission.
It was already bad enough when people copied code from interwebs without understanding anything about it.
But now these companies are pushing tools that have permissions over users’ whole drives, and users are using them like they’ve got a skill up on the rest.
This is being dumb with fewer steps to ruin your code, or in some cases, the whole system.
And despite the catastrophic failure, they still said that they love Google and use all of its products — they just didn’t expect it to release a program that can make a massive error such as this
Greetings from Darwin.
Lmfao these agentic editors are like giving root access to a college undergrad who thinks he’s way smarter than he actually is on a production server. With predictably similar results.
Why the hell would anybody give an AI access to their full hard drive?
I have no experience with this IDE, but I see in the posted log on Reddit that the LLM is talking about a “step 620”. Like, this is hundreds of queries away from the initial one? The context must have been massive; usually after this many subsequent queries they start hallucinating hard.
ISE.
Integrated Slop Environment.
Why would you ask AI to delete ANYTHING? That’s a pretty high level of trust…
So many things wrong with this.
I am not a programmer by trade, and even though I learned programming in school, it’s not a thing I want to spend a lot of time doing, so I do use AI when I need to generate code.
But I have a few HARD rules.
I execute all code and commands. Nothing gets to run on my system without me.
Anything which can be even remotely destructive must be flagged, and not executed, until I agree to the risk.
All information and commands must be verifiable by sourcing documentary links, or providing context links that I can peruse. If documentary evidence is not available, it must provide a rationale why I should execute what it generates.
Every command must be accompanied by a description of what the command will do, what each flag means, and what the expected outcome is.
I am the final authority on all matters. It is allowed to make suggestions, but never changes without my approval.
Without these constraints, I won’t trust it. Even then, I read all of the code it generates and verify it myself, so in the end, if it blows something up, I bear sole responsibility.
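The “flag destructive commands, never auto-run anything” rules above can be sketched as a simple review gate. Everything here (the command list, the function name, the return strings) is invented for illustration; it is not the API of any real tool:

```python
# Illustrative gate implementing "nothing runs without me": classify each
# proposed shell command; destructive ones get flagged for explicit
# approval, and even harmless-looking ones still need a human sign-off.
import shlex

# Hypothetical deny-list; a real gate would need far more than names.
DESTRUCTIVE = {"rm", "rmdir", "dd", "mkfs", "format", "del", "shred"}

def review(command: str) -> str:
    """Classify a proposed shell command before anything executes."""
    tokens = shlex.split(command)
    if not tokens:
        return "empty"
    if tokens[0] in DESTRUCTIVE:
        return "flagged: destructive, needs explicit approval"
    return "needs approval"  # still never runs automatically

print(review("rmdir /s /q D:/"))  # flagged: destructive, needs explicit approval
print(review("ls -la"))           # needs approval
```

A name-based deny-list alone is trivially bypassed (e.g. via `find -delete` or a shell one-liner), which is exactly why the final “I am the final authority” rule matters more than any filter.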
They gave root permission and proceeded to get rooted in return.
Does that phrase work?
why the hell aren’t people running this shit in isolated containers?
without permission
That’s what she said. Enjoy your agent thing.
Why tf are people saying that it was “without permission”?? They installed it, used it, and gave permission to execute commands. I say the user is at fault. It is an experimental piece of software. What else can you expect?
Bishma@discuss.tchncs.de 2 weeks ago
Every person on the internet that responded to an earnest tech question with “sudo rm -rf /” helped make this happen.
Good on you.
setsubyou@lemmy.world 2 weeks ago
We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.
BrianTheeBiscuiteer@lemmy.world 2 weeks ago
Sometimes that code will expire and you need to alternate to sudo dd if=/dev/urandom of=/dev/sda bs=4M. Works most of the time for me.
Darkassassin07@lemmy.ca 2 weeks ago
Gotta cater more to windows, where the idiots that would actually run this crap reside.
A_norny_mousse@feddit.org 2 weeks ago
Wait, did reddit “partner” with Google for AI exploitation?
NewNewAugustEast@lemmy.zip 2 weeks ago
Yes. Yes they did
boonhet@sopuli.xyz 2 weeks ago
Oh you’ve missed so much. Yes, they did. Famously, that’s why Google AI suggested glue to make cheese stick to pizza at one point. Because of a joke on reddit made by user “fucksmith” some 11 years earlier.
echodot@feddit.uk 2 weeks ago
Pretty sure it’s also going to tell people to Alt+F4 as well.
jaybone@lemmy.zip 2 weeks ago
Have you been in a coma?
panda_abyss@lemmy.ca 2 weeks ago
This command actually solves more problems than it causes.
everett@lemmy.ml 2 weeks ago
You dirty root preserver.
Dadifer@lemmy.world 2 weeks ago
You’re right! This is amazing!
ieGod@lemmy.zip 2 weeks ago
Just doing my part 🫡.
a_person@piefed.social 2 weeks ago
sudo rm -rf /* --no-preserve-root
alias_qr_rainmaker@lemmy.world 2 weeks ago
i’m not going to say what it is, obviously, but i have a troll tech tip that is “MUCH” more dangerous. it is several lines of zsh and it basically removes every image on your computer or every code file on your computer, and you need to be pretty familiar with zsh/bash syntax to know it’s a troll tip
so yeah, definitely not posting this one here, i like it here (i left reddit cuz i got sick of it)
Credibly_Human@lemmy.world 2 weeks ago
It’s always been a shitty meme aimed at being cruel to new users.
Somehow, though, people continue to spread the lie that the linux community is nice and welcoming.
Really it’s a community of professionals, professional elitists, or people who are otherwise so fringe that they demand their OS be fringe as well.