Comment on Everyone Is Judging AI by These Tests. But Experts Say They’re Close to Meaningless.
sunbeam60@lemmy.one 4 months ago

Well, brains are a network of neurons (we can empirically verify this) trained on … eyes, ears, sense of touch, taste, smell and balance. LLMs are a network of neurons trained on text and images.
It’s not a given that this results in the same way of dealing with language, since a human has a much wider set of input data, but it’s not a given that it doesn’t, either.
zbyte64@awful.systems 4 months ago
Humans predict things by assigning meaning to events and things, because in nature we’re constantly trying to guess what other creatures are planning. An LLM does not hypothesize about your plans when you communicate with it; it’s just trying to predict the next set of tokens with the greatest reward value. Even if you were to use literal human neurons to build your LLM, you would still have a stochastic parrot.
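To make the "just predicting the next token" point concrete, here's a toy sketch of greedy next-token decoding. The corpus and bigram model are invented for illustration; a real LLM uses a transformer over an enormous vocabulary, but the decoding loop has the same shape: score possible continuations, pick one, repeat.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, n):
    """Greedy decoding: always take the most frequent continuation."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

# Made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat sat on the rug the cat slept".split()
model = train_bigram(corpus)
print(generate(model, "the", 4))  # ['the', 'cat', 'sat', 'on', 'the']
```

At no point does the loop form a hypothesis about the speaker's intent; it only consults frequency statistics, which is the comment's point about the mechanism, scaled down.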