Comment on Oracle made a $300 billion bet on OpenAI. It's paying the price.
chronicledmonocle@lemmy.world 2 days ago
As someone who works in network engineering support and has seen Claude completely fuck up people’s networks with bad advice: LOL.
Literally had an idiot the other day just copying and pasting commands from Claude into their equipment, bringing down a network of over 1,000 people.
It hallucinated entire executables that didn’t exist. It asked them to create init scripts for services that already had them. It told them to bypass the software UI, which already had the functionality they needed, and start adding routes directly to the system kernel.
Every LLM is the same bullshit guessing machine.
alias_qr_rainmaker@lemmy.world 2 days ago
AI is incredibly powerful and incredibly easy to use, which means it’s a piece of cake to use AI to do incredibly stupid things. Your guy is just bad with AI, which means he doesn’t know how to talk to a computer in his native language.
9bananas@feddit.org 2 days ago
no, AI just sucks ass with any highly customized environment, like network infrastructure, because it has exactly ZERO capacity for on-the-fly learning.
it can somewhat pretend to remember something, but most of the time it doesn’t work, and then people are so, so surprised when it spits out the most ridiculous config for a router, because all it did was string together the top Stack Overflow answers from a decade ago, strip out any and all context that made them make sense, and present it as a solution that seems plausible but absolutely isn’t.
LLMs are literally designed to trick people into thinking what they write makes sense.
they have no concept of actually making sense.
this is not an exception, or an improper use of the tech.
it’s an inherent, fundamental flaw.
alias_qr_rainmaker@lemmy.world 2 days ago
whenever someone says AI doesn’t work, they’re just saying that they don’t know how to get a computer to do their work for them. they can’t even do laziness right
9bananas@feddit.org 2 days ago
yeah, no… that’s not at all what i said.
i didn’t say “AI doesn’t work”, i said it works exactly as expected: producing bullshit.
i understand perfectly well how to get it to spit out useful information, because i know what i can and cannot ask it about.
I’d much rather not use it, but it’s pretty much unavoidable now, because of how trash search results have become, specifically for technical subjects.
what absolutely doesn’t work is asking AI to perform highly specific, production critical configurations on live systems.
you CAN use it to get general answers to general questions.
“what’s a common way to do this configuration?” works well enough.
“fix this config file for me!” doesn’t work, because it has no concept of what that means in your specific context. and no amount of increasingly specific prompts will ever get you there. …unless “there” is an utter clusterfuck, see the OP for proof…
Shanmugha@lemmy.world 2 days ago
As a dev: lol. Do it again, you are good at entertaining
naeap@sopuli.xyz 2 days ago
Native language == assembly?
chronicledmonocle@lemmy.world 2 days ago
Generative AI has an average error rate of 9-13%. Nobody should trust what it spits out wholesale.
It has some excellent use cases. Vibe coding/sysadmin’ing/netadmin’ing is not one of them.
ayyy@sh.itjust.works 1 day ago
Where does this 9-13% number come from?
chronicledmonocle@lemmy.world 1 day ago
There was a study done several months ago. I’ll try to find the source again and link it here in a comment edit.
alias_qr_rainmaker@lemmy.world 1 day ago
I don’t trust it wholesale. No one who knows what they’re talking about trusts it wholesale. Hallucination rates vary depending on who you ask. And you’re wrong about vibe coding: it works great if you’re working on some random side project and not with a team that has to push to production.
olympicyes@lemmy.world 2 days ago
Functions with arguments that don’t do anything… hey Claude, why did you do that? Good catch…!
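(For anyone who hasn’t hit this yet, here’s a hypothetical sketch of the pattern being described; the function and parameter names are invented, not from any real incident. A generated function accepts an argument and then silently ignores it.)

```python
# Hypothetical illustration of AI-generated code with a "dead" argument.
# The 'retries' parameter is accepted but never referenced in the body,
# so callers who pass it get no retry behavior at all.
import urllib.request


def fetch_status(url: str, retries: int = 3) -> int:
    """Return the HTTP status code for the given URL."""
    # 'retries' is silently ignored; the request is made exactly once.
    with urllib.request.urlopen(url) as response:
        return response.status


if __name__ == "__main__":
    # A caller might reasonably assume this retries three times. It doesn't.
    print(fetch_status("https://example.com", retries=3))
```

The code runs, the tests pass, and nobody notices the promise the signature makes is never kept until something needs that retry.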