The way I see it, Agentic AI is just a dumber customer service agent that's ready and willing to be scammed and phished. Not my fault if these companies are too stupid to put in proper guardrails.
orclev@lemmy.world 3 weeks ago
Agentic AI is just a buzzword for letting AI do things without human supervision. It’s absolutely a recipe for disaster. You should never let AI do anything you can’t easily undo as it’s guaranteed to screw it up at least part of the time. When all it’s screwing up is telling you that glue would make an excellent topping for pizza that’s one thing, but when it’s emailing your boss that he’s a piece of crap that’s an entirely different scenario.
tidderuuf@lemmy.world 3 weeks ago
Quazatron@lemmy.world 3 weeks ago
I agree with you. I don’t mind local AI searching the web for topics I’m interested in and providing me with news and interesting tidbits. I’m not OK with AI having any kind of permission to run executable code.
Grimy@lemmy.world 3 weeks ago
I think there’s a difference between letting it do some things and letting it finish something.
It shouldn’t be the one clicking the send button because everything needs to be verified, but it’s fine to have it surf the internet or turn a request into a set number of tasks with a to-do list.
Writing an email with it is a no-go for me, though; I avoid it the moment it comes to actually communicating with someone. Using AI for that strikes me as patronizing.
MangoCats@feddit.it 3 weeks ago
Mechanical key based door lock cylinders are “Agentic AI” - they decide whether or not to allow the tumbler to turn based on the key (code) inserted. They’re out there, in their billions around the world, deciding whether or not to allow people access through doorways WITHOUT HUMAN SUPERVISION!!! They can be easily hacked, they are not to be trusted!!! Furthermore, most key-lock users have no idea how the thing really works, they just stick the key in and try to turn it.
atrielienz@lemmy.world 2 weeks ago
This is just a poor analogy.
A door lock can’t buy up Amazon’s entire stock of tide pods on my credit card.
A door lock can’t turn on someone’s iot oven while they’re out of town.
A door lock can’t publish every email some journalist has ever received to xitter.
A mechanical door lock doesn’t hallucinate extra fingers, and draw them into all the family photos saved on a person’s hard drive.
MangoCats@feddit.it 2 weeks ago
A door lock can’t buy up Amazon’s entire stock of tide pods on my credit card.
But it can let in a burglar who can find your credit card inside and do the same. And why are you giving AI access to your CC#? You’d better post it here in a reply so I can keep it safe for you.
A door lock can’t turn on someone’s iot oven while they’re out of town.
But it can let in neighborhood children who will turn on your gas stove without lighting it while you’re out of town.
A door lock can’t publish every email some journalist has ever received to xitter.
True, the journalist, or his soon-to-be-ex-spouse, can “accidentally” do that themselves - and I suppose the ex-spouse who still has a copy of the key can “fool” the lock with that undisclosed copy of the key while the journalist is out having sushi with his mistress.
A mechanical door lock doesn’t hallucinate extra fingers, and draw them into all the family photos saved on a person’s hard drive.
I’ve worked with AI for a while now; it’s not going to up and hallucinate something like that unless you ask it to do something related.
atrielienz@lemmy.world 2 weeks ago
But it can let in a burglar who can find your credit card inside and do the same. And why are you giving AI access to your CC#? You’d better post it here in a reply so I can keep it safe for you.
You aren’t giving your door lock access to your credit card information. And it didn’t “let the burglar in” so much as it has a nonzero failure rate: there’s more of a chance a burglar can get in than zero, but less of a chance than if you didn’t have a lock at all. An outside party is circumventing the protections you put in place to protect your credit card number. Or perhaps you circumvent them yourself by accident by leaving the door unlocked.
However, in both of those cases, the door lock isn’t doing anything of its own volition, and it won’t start doing so outside your control. The LLM, on the other hand, is acting on its own: perhaps within parameters you set, but more likely outside the parameters you set and within parameters set by the company that makes it, and even those only hold to a degree.
You don’t do any banking except in person? Any shopping except in person with cash? Because that’s what you’re suggesting when you say things like “why are you giving it access to your credit card”.
Microsoft is saying that they will run “Agentic AI” in the background on the Windows 11 devices of hundreds of millions of people, without their direct input, and that this AI may download malware or be a threat vector that malicious apps, services, etc. can take advantage of. But they’re going to do it anyway.
Microsoft is not installing door locks in my house, and if they tried I’d kindly escort them off the property, by force if necessary.
Prox@lemmy.world 3 weeks ago
It’s Argo Workflows
NuXCOM_90Percent@lemmy.zip 3 weeks ago
No, it isn’t.
As per IBM www.ibm.com/think/topics/agentic-ai
The key part being the last sentence.
It’s the idea of moving away from a monolithic (for simplicity’s sake) LLM to one where each “AI” serves a specific purpose. So imagine a case where you have one “AI” to parse your input text and two or three other “AIs” to run different models based on which use case your request falls into.
And… anyone who has ever done any software development (web or otherwise) can tell you: That is just (micro)services. Especially when so many of the “agents” aren’t actually LLMs and are just bare metal code or databases or what have you.
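For what it’s worth, here’s a bare-bones sketch of that “router plus specialists” idea in Python. Every name in it is a hypothetical stub, not any particular framework’s API; a real system would swap these functions for actual model or service calls.

```python
# A bare-bones sketch of the "one AI parses the request, others handle
# specific use cases" pattern described above. Every name here is a
# hypothetical stub, not any real framework's API.

def classify_request(text: str) -> str:
    # Stand-in for the "parser" model that decides which use case this is.
    lowered = text.lower()
    if "summarize" in lowered:
        return "summarizer"
    if "search" in lowered:
        return "search"
    return "general"

def summarizer_agent(text: str) -> str:
    return f"[summary of: {text}]"

def search_agent(text: str) -> str:
    return f"[search results for: {text}]"

def general_agent(text: str) -> str:
    return f"[general answer to: {text}]"

# The "orchestrator" is just routing between services, which is why it
# looks an awful lot like plain old (micro)services.
AGENTS = {
    "summarizer": summarizer_agent,
    "search": search_agent,
    "general": general_agent,
}

def handle(request: str) -> str:
    return AGENTS[classify_request(request)](request)

print(handle("Please summarize this article for me"))
```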
The idea of supervision remains the same. Some orgs care about it. Others don’t. Just like some orgs care about making maintainable code and others don’t.
But yes, it is very much a buzzword.
Catoblepas@piefed.blahaj.zone 3 weeks ago
Hat on top of a hat technology. The underlying problems with LLMs remain unchanged, and “agentic AI” is basically a marketing term to make people think those problems are solved. I realize you probably know this, I’m just kvetching.
Auth@lemmy.world 3 weeks ago
Not really. By breaking down the problem, you can adjust the models to the task. There is a lot of work going into this stuff, and there are ways to turn down the randomness to get more consistent outputs for simple tasks.
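For example, most model APIs expose a temperature parameter, and dialing it down to 0 makes narrow tasks far more repeatable. A minimal sketch, assuming the OpenAI Python SDK and an example model name:

```python
# Minimal sketch of turning down randomness for a narrow, well-defined task.
# Assumes the OpenAI Python SDK with an API key in the environment; the
# model name is only an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,  # near-deterministic decoding: much more consistent output
    messages=[
        {"role": "system", "content": "Extract the invoice total as a bare number."},
        {"role": "user", "content": "Invoice #1042 ... Total due: $318.40"},
    ],
)

print(response.choices[0].message.content)
```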
MangoCats@feddit.it 3 weeks ago
This is a tricky one… if you can define good success/failure criteria, then randomness coupled with an accurate measure of success is how “AI” like AlphaGo learns to win games really, really well.
When using AI to build computer programs and systems, if you have good tests for what “success” looks like, you’d actually rather have a fair amount of randomness in the algorithms trying to make things work, because without it, when they fail they just keep producing the same answer and end up stuck, out of ideas.
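A toy sketch of that “randomness plus a success oracle” loop, where both the model call and the test suite are hypothetical placeholders:

```python
# Toy sketch: sample candidates with some randomness and let an accurate
# success measure (a stand-in for a real test suite) decide which one to
# keep. Both functions below are hypothetical placeholders.
import random

def generate_candidate(task: str) -> str:
    # Placeholder for an LLM call with nonzero temperature.
    return f"candidate #{random.randint(0, 9999)} for: {task}"

def run_tests(candidate: str) -> bool:
    # Placeholder success criterion; pretend roughly 1 in 5 candidates passes.
    return random.random() < 0.2

def solve(task: str, attempts: int = 20) -> str | None:
    for _ in range(attempts):
        candidate = generate_candidate(task)
        if run_tests(candidate):
            return candidate  # the tests decide "success", not the model
    return None  # a deterministic generator would be stuck after one failed try

print(solve("sort a list of integers"))
```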
floquant@lemmy.dbzer0.com 3 weeks ago
You’re both right imo. LLMs and every subsequent improvement are fundamentally ruined by marketing heads like oh so many things in the history of computing, so even if agentic AI is actually an improvement, it doesn’t matter because everyone is using it to do stupid fucking things.
pinball_wizard@lemmy.zip 2 weeks ago
Yes: shell scripting, which we have had for half a century.