Agentic AI is just a buzzword for letting AI do things without human supervision
No, it isn’t.
As per IBM (www.ibm.com/think/topics/agentic-ai):
Agentic AI is an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time. In a multiagent system, each agent performs a specific subtask required to reach the goal and their efforts are coordinated through AI orchestration.
The key part being the last sentence.
It’s the idea of moving away from a monolithic (for simplicity’s sake) LLM to one where each “AI” serves a specific purpose. So imagine a case where you have one “AI” to parse your input text and two or three other “AIs” that run different models depending on which use case your request falls into.
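In code, that routing idea is tiny. Here’s a minimal sketch with stub functions standing in for real models; every name here is made up for illustration, and a real classifier would itself be a small model rather than keyword matching:

```python
# Minimal sketch of "one AI parses input, others handle specific use cases".
# All handlers are stubs standing in for actual models/services.

def classify(text: str) -> str:
    """Stub intent classifier; a real system might call a small LLM here."""
    if "translate" in text.lower():
        return "translation"
    if "summarize" in text.lower():
        return "summarization"
    return "general"

def translation_agent(text: str) -> str:
    return f"[translation model would handle: {text}]"

def summarization_agent(text: str) -> str:
    return f"[summarization model would handle: {text}]"

def general_agent(text: str) -> str:
    return f"[general model would handle: {text}]"

# The "AI orchestration" layer, at its simplest, is just a dispatch table.
AGENTS = {
    "translation": translation_agent,
    "summarization": summarization_agent,
    "general": general_agent,
}

def orchestrate(text: str) -> str:
    return AGENTS[classify(text)](text)

print(orchestrate("Summarize this article for me"))
```

Which is exactly why the microservices comparison below lands: the orchestrator is a router, and each “agent” is just a service behind it.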
And… anyone who has ever done any software development (web or otherwise) can tell you: That is just (micro)services. Especially when so many of the “agents” aren’t actually LLMs and are just bare metal code or databases or what have you.
The idea of supervision remains the same. Some orgs care about it. Others don’t. Just like some orgs care about making maintainable code and others don’t.
But yes, it is very much a buzzword.
Catoblepas@piefed.blahaj.zone 9 hours ago
Hat on top of a hat technology. The underlying problems with LLMs remain unchanged, and “agentic AI” is basically a marketing term to make people think those problems are solved. I realize you probably know this, I’m just kvetching.
Auth@lemmy.world 9 hours ago
Not really. By breaking down the problem you can adjust the models to the task. There is a lot of work going into this stuff and there are ways to turn down the randomness to get more consistent outputs for simple tasks.
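The “turn down the randomness” knob is usually a sampling temperature. Toy sketch (not any particular vendor’s API): lowering the temperature sharpens the distribution over choices, so the most likely option dominates and outputs get more consistent:

```python
import math
import random

def sample(logits: dict, temperature: float, rng: random.Random) -> str:
    """Softmax sampling with temperature: lower T -> more deterministic."""
    weights = {tok: math.exp(l / temperature) for tok, l in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against float rounding

logits = {"yes": 2.0, "no": 1.0, "maybe": 0.5}
rng = random.Random(0)
hot = [sample(logits, 1.5, rng) for _ in range(20)]   # varied outputs
cold = [sample(logits, 0.1, rng) for _ in range(20)]  # nearly always "yes"
print(hot, cold)
```

Most hosted LLM APIs expose this directly as a `temperature` parameter.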
MangoCats@feddit.it 8 hours ago
This is a tricky one… if you can define good success/failure criteria, then the randomness, coupled with an accurate measure of success, is how “AI” like AlphaGo learns to win games really, really well.
In using AI to build computer programs and systems, if you have good tests for what “success” looks like, you actually want a fair amount of randomness in the algorithms trying to make things work; without it, when they fail, they end up stuck, out of ideas.
floquant@lemmy.dbzer0.com 6 hours ago
You’re both right imo. LLMs and every subsequent improvement are fundamentally ruined by marketing heads like oh so many things in the history of computing, so even if agentic AI is actually an improvement, it doesn’t matter because everyone is using it to do stupid fucking things.
Auth@lemmy.world 4 hours ago
Yeah, like stringing five ChatGPTs together saying “you are a scientist, you are a product lead engineer, etc.” is dumb, but chaining ChatGPT into a coded tool into a vision model into a specific small LLM is an interesting new way to build workflows for complex and dynamic tasks.
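Sketching that second, interesting kind of chain, with every stage stubbed out (all names hypothetical): a big LLM parses the request, plain deterministic code validates it, a vision model labels it, and a small task-specific LLM writes the result up:

```python
# Heterogeneous pipeline: LLM -> coded tool -> vision model -> small LLM.
# Every stage is a stub; in practice each would be a model call or real code.

def llm_extract(request: str) -> dict:
    """Large LLM parses the free-form request into structured fields."""
    return {"task": "inspect", "image_path": request.split()[-1]}

def coded_tool(task: dict) -> dict:
    """Plain deterministic code -- no model at all, just validation."""
    task["validated"] = task["image_path"].endswith(".png")
    return task

def vision_model(task: dict) -> dict:
    """Vision-model stage; stubbed to return a fake label."""
    task["label"] = "cat" if task["validated"] else "unknown"
    return task

def small_llm_report(task: dict) -> str:
    """Small task-specific LLM turns the result into a sentence."""
    return f"The image {task['image_path']} appears to contain a {task['label']}."

def pipeline(request: str) -> str:
    result = llm_extract(request)
    for stage in (coded_tool, vision_model):
        result = stage(result)
    return small_llm_report(result)

print(pipeline("please inspect photo.png"))
```

The point being that only two of the four stages are LLMs at all, which is the whole “microservices with extra steps” observation from upthread.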