Not bizarre at all.
The point isn’t “they can’t do word games therefore they’re useless”, it’s “if this thing is so easily tripped up on the most trivial shit that a 6-year-old can figure out, don’t be going round claiming it has PhD-level expertise”.
1rre@discuss.tchncs.de 20 hours ago
A six year old can read and write Arabic, Chinese, Ge’ez, etc., and yet most people with PhD-level experience probably can’t, and it’s probably useless to them. LLMs can do this too. You can count the number of letters in a word, but so can a program written in a few hundred bytes of assembly. It’s completely pointless to make LLMs do that; it would just make them way less efficient while adding nothing useful.
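For what it’s worth, the “few hundred bytes of assembly” point is easy to see: counting letters is a one-liner in basically any language. A quick Python sketch (Python rather than assembly purely for readability; nothing here comes from the thread itself):

```python
# Minimal letter-counting: the kind of task that doesn't need an LLM at all.
def count_letter(word: str, letter: str) -> int:
    """Return how many times `letter` appears in `word`, case-insensitively."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```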
skisnow@lemmy.ca 18 hours ago
LOL, it seems like every time I get into a discussion with an AI evangelical, they invariably end up asking me to accept some really poor analogy that, much like an LLM’s output, looks superficially clever at first glance but doesn’t stand up to the slightest bit of scrutiny.
1rre@discuss.tchncs.de 17 hours ago
It’s more that the only way to get some anti-AI crusader to accept that there are some uses for it is to put it in an analogy that they have to actually process, rather than spitting out an “ai bad” kneejerk.
I’m probably far more anti-AI than average: for 95% of what it’s pushed for it’s completely useless, but that still leaves 5% that it’s genuinely useful for, which some people refuse to accept.
abir_v@lemmy.world 16 hours ago
I feel this. In my line of work I really don’t like using them for much of anything (programming ofc, like 80% of Lemmy users) because they get details wrong too often to be useful and I don’t like babysitting them.
But when I need a logging message, or to return an error, it’s genuinely a time saver. It’s good at pretty well 5%, as you say.
But using it for art, math, problem solving, any of that kind of stuff that gets touted around by the business people? Useless, just fully fuckin useless.
TempermentalAnomaly@lemmy.world 16 hours ago
It’s amazing that if you acknowledge that:
- AI has some utility and
- The (now tiresome and sloppy) tests they’re using don’t negate point 1,
you are now an AI evangelist. Just as importantly, the level of investment in AI isn’t justified by point 1 either. And when that realization hits business America, a correction will happen, and the people who will be affected aren’t the well-off but the average worker. The gains are for the few, the loss for the many.
Jomega@lemmy.world 16 hours ago
It’s more that the only way to get some anti-AI crusader to accept that there are some uses for it
Name three.
echodot@feddit.uk 17 hours ago
So if the AI can’t do it then that’s just proof that the AI is too smart to be able to do it? That’s your argument, is it? Nah, it’s just crap.
You think that just because you’ve attached it to an analogy, that makes it make sense. That’s not how it works. Look, I can do it too:
My car is way too technologically sophisticated to be able to fly, therefore AI doesn’t need to be able to work out how many Rs are in “strawberry”.
See how that made literally no sense whatsoever.
1rre@discuss.tchncs.de 17 hours ago
Except you’re expecting it to do everything. Your car is too “technically advanced” to walk on the sidewalk, but wait, you can do that anyway and don’t need to reinvent your legs.
PixelatedSaturn@lemmy.world 21 hours ago
I don’t want to defend ai again, but it’s a technology: it can do some things and can’t do others. By now this should be obvious to everyone. Except to the people that believe everything commercials tell them.
kouichi@ani.social 20 hours ago
How many people do you think know that AIs are “trained on tokens”, and understand what that means? It’s clearly not obvious to those who don’t, which is roughly everyone.
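For anyone curious what “trained on tokens” means in practice: the model sees sub-word token IDs, not letters, which is part of why letter-counting questions trip it up. A rough sketch using OpenAI’s tiktoken tokenizer (assuming that package is installed; the exact split depends on the model):

```python
# Rough illustration of tokenization: the model receives integer token IDs,
# not individual characters, so it never "sees" the letters in a word.
import tiktoken  # assumes the tiktoken package is available

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")

print(tokens)                                               # a short list of token IDs
print([enc.decode_single_token_bytes(t) for t in tokens])   # the sub-word chunks they map to
```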
PixelatedSaturn@lemmy.world 20 hours ago
You don’t have to know about tokens to see what ai can and cannot do.
huppakee@feddit.nl 19 hours ago
Go to an art museum and somebody will say ‘my 6 year old can make this too’. In my view this is a similar fallacy.
sqgl@sh.itjust.works 19 hours ago
332 instances of lawyers in Australia using AI-generated evidence which was “hallucinated”.
And this week one was finally punished.
PixelatedSaturn@lemmy.world 18 hours ago
Ok? So, what you are saying is that some lawyers are idiots. I could have told you that before ai existed.
Aceticon@lemmy.dbzer0.com 15 hours ago
It’s not the AIs that are crap, it’s what they’ve been sold as capable of doing and the supposed reliability of their results that’s massively disconnected from reality.
The crap is what most of the Tech Investor class has pushed to the public about AI.
It’s thus not at all surprising that many people who work, or manage work, in areas where precision and correctness are essential have been deceived into thinking AI can do much of the work for them, and it turns out AI can’t really do it because of those precision and correctness requirements.
This will hit mostly people who are not Tech experts, such as lawyers, but even some supposed Tech experts (such as programmers) have been swindled in this way.
There are a ton of great uses for AI, especially stuff other than LLMs, in areas where false positives or false negatives are no big deal, but that’s not where the Make Money Fast push for them is.