Comment on ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic

arc99@lemmy.world 1 day ago

An LLM is an ordered series of parameterized, weighted nodes that are fed a bunch of tokens; millions of calculations later, it generates the next token, appends it, and repeats the process. It's like turning a handle on some complex Babbage-esque machine. LLMs use a tiny bit of randomness when choosing the next token, so the responses are not identical each time.
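That last step (the "tiny bit of randomness") is usually temperature sampling over the model's output scores. A toy sketch, with made-up token scores standing in for what a real model computes over a vocabulary of ~100k tokens:

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick one token from a dict of {token: score} via temperature sampling."""
    # Scale scores by temperature, then softmax into probabilities.
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Roll a random number and walk the cumulative distribution.
    # This randomness is why the same prompt can yield different replies.
    r = random.random()
    cum = 0.0
    for token, p in zip(logits, probs):
        cum += p
        if r < cum:
            return token
    return list(logits)[-1]  # guard against floating-point rounding

# Hypothetical next-token scores after the prompt "Let's play":
logits = {"chess": 2.0, "checkers": 1.0, "go": 0.5}
print(sample_next_token(logits))
```

Lower the temperature toward zero and the highest-scoring token wins almost every time; raise it and the output gets more varied. Either way it's arithmetic turning the crank, not deliberation.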

But it is not thinking. Not even remotely so. It’s a simulacrum.
