Comment on Major shifts at OpenAI spark skepticism about impending AGI timelines

MentalEdge@sopuli.xyz 3 months ago

Hardly.

How did you interpret the issues inherent in the way LLMs work as a hardware problem?

An AGI should be able to learn the basics of physics from a single book, the way a human can. But LLMs need terabytes of data to even get started, and once trained, simply telling them things doesn’t integrate that information into the model itself in any way.

Even if you tried to make it work that way, it wouldn’t work, because a single sentence can’t significantly alter the model, the way a human can internalise a concept communicated to them in a single conversation.
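For anyone who wants to see the distinction concretely, here’s a minimal sketch in PyTorch. The tiny `nn.Linear` layer is just a stand-in for an LLM’s parameters (the sizes and data are made up for illustration): a forward pass over new input, which is all that “telling the model something” amounts to at inference time, leaves the weights untouched, while only an actual gradient update changes them.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # toy stand-in for an LLM's parameters
snapshot = {k: v.clone() for k, v in model.state_dict().items()}

# "Telling the model something": a forward pass over new input.
with torch.no_grad():
    _ = model(torch.randn(1, 8))

unchanged = all(torch.equal(snapshot[k], v)
                for k, v in model.state_dict().items())
print(unchanged)  # True: inference integrates nothing into the model

# Actually integrating information means training, i.e. gradient
# updates over (typically vast amounts of) data.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = model(torch.randn(1, 8)).pow(2).mean()
loss.backward()
opt.step()

changed = any(not torch.equal(snapshot[k], v)
              for k, v in model.state_dict().items())
print(changed)  # True: only a weight update alters the model itself
```

The new sentence lives in the context window and vanishes when the conversation ends; nothing about the model’s stored knowledge has changed.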
