Comment on AI agents wrong ~70% of time: Carnegie Mellon study
chaonaut@lemmy.4d2.org 5 days ago
It's questionable to measure these things as being reflective of AI, because what AI is changes based on what piece of tech is being hawked as AI, because we're really bad at defining what intelligence is and isn't. You want to claim LLMs as AI? Go ahead, but you also adopt the problems of LLMs as the problems of AIs. Defining AI and thus its metrics is a moving target. When we can't agree on what it is, we can't agree on what it can do.
surph_ninja@lemmy.world 5 days ago
Again, you only say it's a moving target to dispel anything favorable towards AI. Then you do a complete 180 when it's negative reporting on AI. It makes your argument meaningless if you can't even stick to your own point.
chaonaut@lemmy.4d2.org 5 days ago
I mean, I argue that we aren't anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn't really track, does it? With how bad they are at navigating novel situations? With how much time, energy and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences they are not.
surph_ninja@lemmy.world 5 days ago
No one’s claiming these are AGI. Again, you keep having to deflect to irrelevant arguments.
chaonaut@lemmy.4d2.org 4 days ago
So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?