Comment on Mind-reading AI can translate brainwaves into written text: Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thou...

knightly@pawb.social 1 year ago

If LLMs were just lossy encodings of their database they wouldn’t be able to answer any questions outside of their training set.

Of course they could, in the same way that hitting the autocomplete key can finish a half-completed sentence you’ve never written before.

The fact that models can produce useful outputs from novel inputs is the whole reason why we build models. Your argument is functionally equivalent to the claim that wind tunnels are intelligent because they can characterise the aerodynamics of both old and new kinds of planes.

How do you explain the hallucinations if the LLM is just a complex lookup engine? You can’t look up something you’ve never seen.

For the same reason that a random number generator is capable of producing never-before-seen strings of digits. LLM inference engines have a property called “temperature” that governs how much randomness is injected into their responses:

[image: illustration of the temperature setting]

source
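As a rough illustration of what that temperature knob does (a minimal sketch, not any particular inference engine's code): the model's raw scores (logits) for each candidate token are divided by the temperature before being turned into probabilities, so low temperatures make the most likely token dominate while high temperatures flatten the distribution and let unlikely tokens through.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from raw logits after temperature scaling.

    temperature < 1.0 sharpens the distribution (more deterministic),
    temperature > 1.0 flattens it (more varied, more "hallucination"-prone).
    """
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [score / temperature for score in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy example: three candidate next tokens with made-up logits.
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))  # noticeably more random
```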