Comment on “Oracle made a $300 billion bet on OpenAI. It’s paying the price.”
alias_qr_rainmaker@lemmy.world 2 days ago
Not sure, but I hear the Claude Super Duper Extreme Fucking Pro ($200/month) is like the Ferrari of LLM assisted coding
Ferrari
So expensive, looks great, takes significant capital to maintain, and anyone who has one uses something else when they actually need to do something useful.
it literally doesn’t cost as much as a ferrari
What’s with tech people always stating (marketing) things as akin to high-end sports cars? The state of AI is more like arguing over which donkey is best, lol.
the Ferrari of LLM assisted coding
So… 4th in the Constructors’ and 5th + 6th in the Drivers’ Championships?
unfortunately your code placed last in the drivers’ championship, so AI would be a HUGE step up for you
chronicledmonocle@lemmy.world 2 days ago
As someone who works in network engineering support and has seen Claude completely fuck up people’s networks with bad advice: LOL.
Literally had an idiot copy and paste commands from Claude into their equipment and bring down a network of over 1,000 people the other day.
It hallucinated entire executables that didn’t exist. It asked them to create init scripts for services that already had one. It told them to bypass the software UI, which had the functionality they needed, and start adding routes directly to the system kernel.
Every LLM is the same bullshit guessing machine.
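A minimal sketch, assuming a Linux box and a hypothetical list of LLM-suggested shell commands, of the kind of guard that would have flagged the hallucinated executables described above: resolve the first token of each command on PATH before anything gets pasted into a terminal. The function name and the command list are made up for illustration.

```python
import shlex
import shutil

def missing_executables(commands: list[str]) -> list[str]:
    """Return the suggested commands whose executables can't be found on PATH.

    A crude guard against hallucinated binaries: resolve the first token of
    each LLM-suggested command with shutil.which() before anyone pastes it
    into production equipment.
    """
    flagged = []
    for cmd in commands:
        tokens = shlex.split(cmd)
        if not tokens or shutil.which(tokens[0]) is None:
            flagged.append(cmd)
    return flagged

# Hypothetical suggestions; the second binary is invented.
suggestions = [
    "ip route add 10.0.0.0/8 via 192.168.1.1",
    "netcfgd --rebuild-routes",
]
print(missing_executables(suggestions))
```

On a typical system this prints only the second suggestion, since no netcfgd binary exists; anything the check flags deserves a human look before it goes anywhere near production gear.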
olympicyes@lemmy.world 2 days ago
Functions with arguments that don’t do anything… “Hey Claude, why did you do that?” “Good catch…!”
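For anyone who hasn’t hit this pattern: it’s a generated function that dutifully accepts a parameter and then never uses it. A hypothetical Python example, not taken from any real session:

```python
import urllib.request

def fetch_with_retries(url: str, timeout: float = 5.0, max_retries: int = 3) -> str:
    """Fetch a URL with retries (typical generated code)."""
    for _ in range(max_retries):
        try:
            # `timeout` is accepted above but never passed to urlopen(),
            # so the argument does nothing at all.
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()
        except OSError:
            continue
    raise RuntimeError(f"failed to fetch {url}")
```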
alias_qr_rainmaker@lemmy.world 2 days ago
AI is incredibly powerful and incredibly easy to use, which means it’s a piece of cake to use AI to do incredibly stupid things. Your guy is just bad with AI, which means he doesn’t know how to talk to a computer in his native language
9bananas@feddit.org 2 days ago
no, AI just sucks ass with any highly customized environment, like network infrastructure, because it has exactly ZERO capacity for on-the-fly learning.
it can somewhat pretend to remember something, but most of the time it doesn’t work, and then people are so, so surprised when it spits out the most ridiculous config for a router, because all it did was string together the top answers on Stack Overflow from a decade ago, strip out any and all context that made them make sense, and present it as a solution that seems plausible but absolutely isn’t.
LLMs are literally designed to trick people into thinking what they write makes sense.
they have no concept of actually making sense.
this is not an exception, or an improper use of the tech.
it’s an inherent, fundamental flaw.
alias_qr_rainmaker@lemmy.world 2 days ago
whenever someone says AI doesn’t work they’re just saying that they don’t know how to get a computer to do their work for them. they can’t even do laziness right
naeap@sopuli.xyz 2 days ago
Native language == assembly?
chronicledmonocle@lemmy.world 2 days ago
Generative AI has an average error rate of 9-13%. Nobody should trust what it spits out wholesale.
It has some excellent use cases. Vibe coding/sysadmin’ing/netadmin’ing is not one of them.
ayyy@sh.itjust.works 2 days ago
Where does this 9-13% number come from?
alias_qr_rainmaker@lemmy.world 2 days ago
I don’t trust it wholesale. No one who knows what they’re talking about trusts it wholesale. Hallucination rates vary depending on who you ask. And you’re wrong about vibe coding: it works great if you’re working on some random side project and not with a team that has to push to production.