What we have done is invent massive, automatic, no-holds-barred pattern-recognition machines. LLMs use patterns detected in text to respond to questions. Image recognition is pattern recognition, with some of those patterns given names (like “cat” or “book”). Image generation is a little different, but it basically flips image recognition on its head, editing images to look more like the patterns the model was taught to recognize.
This can all do some cool stuff. There are some very helpful outcomes. It’s also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes and behaviors from the billion-plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don’t even know to look for.
This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups, and less expensive for others? AI can find that pattern.
This is also true in law (I know there’s supposed to be no systemic bias in our court systems, but AI can find those patterns, too), engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc, etc.
The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find, and which ones it isn’t supposed to find. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness that’s left behind inside popular models, there are severe constraints on what it should be doing.
FunkPhenomenon@lemmy.zip 7 months ago
LLMs as AI is just a marketing term. there’s nothing “intelligent” about “AI”
CeeBee@lemmy.world 7 months ago
Yes there is. You just mean it doesn’t have “high” intelligence. Or maybe you mean to say that there’s nothing sentient or sapient about LLMs.
Some aspects of intelligence are:
LLMs definitely hit basically all of these points.
Most people have been told that LLMs “simply” provide a result by predicting the most likely next word, but this is a completely reductionist explanation and isn’t the whole picture.
SkybreakerEngineer@lemmy.world 7 months ago
Other than maybe pattern recognition, they literally have no mechanism to do any of those things. People say that it recursively spits out the next word, because that is literally how it works on a coding level. It’s called an LLM for a reason.
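The “spits out the next word” mechanism both comments describe can be sketched in a few lines. This is a toy illustration, not a real LLM: the hand-written bigram table below stands in for the neural network, which in a real model scores every token in a large vocabulary given the whole preceding context. The sampling loop itself, though, really is this simple.

```python
import random

# Hypothetical toy "model": probability of the next word given
# only the previous word (a bigram table). A real LLM replaces
# this table with a neural network conditioned on all prior tokens.
NEXT_WORD = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"<end>": 1.0},
}

def generate(seed=0, max_tokens=10):
    """Generate text one token at a time (autoregressively)."""
    rng = random.Random(seed)
    word, out = "<start>", []
    for _ in range(max_tokens):
        choices = NEXT_WORD[word]
        # Sample the next word in proportion to its probability,
        # then feed it back in as the new context.
        word = rng.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the cat sat"
```

Each step conditions only on what was already generated, which is why the process is often described as recursive or autoregressive; whether that loop amounts to “intelligence” is exactly what the rest of this thread argues about.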
FaceDeer@fedia.io 7 months ago
The term "artificial intelligence" was established in 1956 and applies to a broad range of algorithms. You may be thinking of Artificial General Intelligence, AGI, which is the more specific "thinks like we do" sort that you see in science fiction a lot. Nobody is marketing LLMs as AGI.
FunkPhenomenon@lemmy.zip 7 months ago
yeah, I guess that’s how I was interpreting it. dunno, I see a lot of articles about how it’s super easy to crack these LLMs using outside-of-the-box thinking (ASCII art text to get instructions on how to make a bomb, etc). that doesn’t really scream “intelligent” to me.
Even_Adder@lemmy.dbzer0.com 7 months ago
This is a popular sentiment, but you can still do impressive things with it even if it isn’t.
FaceDeer@fedia.io 7 months ago
It's some weird semantic nitpickery that suddenly became popular for reasons that baffle me. "AI" has been used in videogames for decades and nobody has come out of the woodwork to "um, actually" it until now. I get that people are frightened of AI and would like to minimize it but this is a strange way to do it.
At least "stochastic parrot" sounded kind of amusing.