Comment on AI agents wrong ~70% of time: Carnegie Mellon study
Melvin_Ferd@lemmy.world 1 week ago
This is crazy. I've literally been saying they are fallible. You're saying your professional fed an LLM some type of dataset. I can't really say what you were trying to accomplish, but I'm arguing that having it process data is not what they're trained to do. LLMs are incredible tools, and I'm tired of people acting like they're not just because they keep using them for things they're not built to do. It's not a fire-and-forget thing. It does need to be supervised and verified. It's not exactly an answer machine. But it's so good at parsing text and documents, summarizing, formatting, and acting like a search engine you can communicate with rather than trying to grok some arcane sentence. Its power is in language applications.
davidagain@lemmy.world 1 week ago
No
Melvin_Ferd@lemmy.world 6 days ago
If it’s as bad as you say, could you give an example of a prompt where it’ll tell you incorrect information?
davidagain@lemmy.world 6 days ago
It’s like you didn’t listen to anything I ever said, or you discounted everything I said as fiction, while everything your dear LLM said is gospel truth in your eyes. It’s utterly irrational. You have to be trolling me now.
Melvin_Ferd@lemmy.world 6 days ago
Should be easy to show, if it’s that bad.