Exactly. Nothing technical about it: they simply produce the token that is statistically most likely (according to their trained model) to follow a given list of tokens.
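Stripped to its essentials, that loop looks something like this (a minimal sketch with made-up toy probabilities; a real model computes the distribution from billions of parameters, but the loop is the same):

```python
import random

# Toy stand-in for the trained model: conditional probabilities of
# the next token given the previous one (hypothetical values).
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"ran": 0.6, "<end>": 0.4},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt_tokens, max_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        # Look up the distribution over next tokens and sample from it:
        # the "statistically most likely continuation" loop, nothing more.
        dist = NEXT_TOKEN_PROBS.get(tokens[-1], {"<end>": 1.0})
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate(["the"]))  # e.g. "the cat sat"
```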
Any information contained in their output (other than the fact that each token is probably the one most statistically likely to follow the previous ones in the training texts, which I imagine could be useful for philologists) is purely circumstantial, and was already contained in their training data.
There’s no reasoning involved in the process (other than possibly in the writing of the training texts themselves, if they predate LLMs and we’re feeling optimistic about human intelligence), nor any mechanism in an LLM for reasoning to take place.
They are as far from AI as Markov chains were, just slightly more accurate in their token-likelihood predictions and several orders of magnitude more costly.
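For comparison, here’s roughly what a bigram Markov chain does (a sketch over a made-up corpus): count which token follows which, then emit the most frequent follower. The generation loop above has the same shape; it just conditions on a much longer context.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count, for each token, how often every other token follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    # The statistically most likely next token, given only the previous one.
    return follows[token].most_common(1)[0][0]

tokens = ["the"]
for _ in range(5):
    tokens.append(predict(tokens[-1]))
print(" ".join(tokens))  # "the cat sat on the cat"
```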
And their being sold as AI doesn’t make them any closer; it just means the people and companies selling them are scammers.
survirtual@lemmy.world 1 day ago
“Technically”? Wrong word. By every technical measure, they are 100% AI.
What you might be trying to say is that they aren’t AGI (artificial general intelligence). I would argue they might just be AGI: for instance, they can reason about what they are better than you can, while also being able to draw a pelican riding a unicycle.
What they certainly aren’t is ASI (artificial super-intelligence). You can say they technically aren’t ASI and you would be correct: ASI would be capable of improving itself faster than a human could.