Comment on AGI achieved 🤖

merc@sh.itjust.works 3 days ago

Can you explain the difference between understanding the question and generating the words that might logically follow?

I mean, it’s pretty obvious. Take someone like Rowan Atkinson, whose death has been misreported multiple times. If you ask a computer system “Is Rowan Atkinson dead?”, you want it to understand the question and give you a yes/no answer based on actual facts in its database. A well-designed program would know to prioritize recent reports as more authoritative than older ones, and it would know which sources to trust and which not to trust.
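
Something like this is what I mean by answering from facts rather than from word statistics. It’s only a minimal sketch, and the sources, dates, and field names are all made up, but it shows the shape of “check the database, prefer trusted and recent reports”:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical report records; the outlets and dates below are invented for illustration.
@dataclass
class Report:
    source: str
    published: date
    claims_dead: bool

TRUSTED_SOURCES = {"Reuters", "AP", "BBC"}  # assumed allow-list of reliable outlets

def is_person_dead(reports: list[Report]) -> bool:
    """Answer from facts: keep only trusted sources, then believe the most recent report."""
    trusted = [r for r in reports if r.source in TRUSTED_SOURCES]
    if not trusted:
        raise ValueError("no trustworthy reports to answer from")
    latest = max(trusted, key=lambda r: r.published)
    return latest.claims_dead

reports = [
    Report("random-blog", date(2012, 3, 1), claims_dead=True),  # old hoax, untrusted
    Report("BBC", date(2012, 3, 2), claims_dead=False),         # trusted debunking
]
print(is_person_dead(reports))  # False: the latest trusted report says he's alive
```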

An LLM will just generate text that is statistically likely to follow the question. Because there have been many hoaxes about his death, it might use those as a basis and generate a response saying he’s dead. But, because those hoaxes have also been debunked many times, it might instead use the debunkings as a basis and generate a response saying he’s alive.

So, if he really did just die and it was reported in reliable, fact-checked news sources, the LLM might still say “No, Rowan Atkinson is alive; his death was reported via a viral video, but that video was a hoax.”
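
Here’s a toy sketch of that failure mode. It’s not a real LLM, and the continuations and weights are invented, but it shows the point: the answer is sampled from what text usually looks like, with no check against what’s currently true:

```python
import random

# Toy illustration (not a real LLM): candidate answers weighted by how often similar
# text appeared in a hypothetical training corpus, with no notion of today's facts.
continuations = {
    "Yes, Rowan Atkinson has died.": 0.35,                                 # hoax reports
    "No, Rowan Atkinson is alive; the death reports were a hoax.": 0.65,   # debunkings
}

def generate_answer() -> str:
    """Sample a statistically likely continuation, ignoring whether it is true right now."""
    texts, weights = zip(*continuations.items())
    return random.choices(texts, weights=weights, k=1)[0]

print(generate_answer())
```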

But why should we assume that shows some lack of understanding?

Because we know what “understanding” is, and that it isn’t simply finding words that are likely to appear following the chain of words up to that point.
