Comment on An Analysis of DeepMind's 'Language Modeling Is Compression' Paper

AbouBenAdhem@lemmy.world 1 year ago

First, maybe what we consider an “association” is actually an indicator that our brains are reusing the same internal tokens to store/compress the memories.

But what I was specifically thinking of is narrative memories: our brains don’t store them frame-by-frame like video; they probably store only key elements and use their predictive ability to extrapolate the omitted details on demand.
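To make the analogy concrete, here’s a toy Python sketch of that idea (my own illustration, not anything from the paper): a `predict` function stands in for the brain’s predictive ability, and only the tokens it fails to guess get stored as “key elements.”

```python
# Toy "predictive compression": store only what the predictor gets wrong.

def compress(seq, predict):
    """Keep only the 'key elements' the predictor fails to supply."""
    stored, prev = [], None
    for i, tok in enumerate(seq):
        if predict(prev) != tok:
            stored.append((i, tok))  # surprising token: must be stored
        prev = tok
    return len(seq), stored

def decompress(length, stored, predict):
    """Replay the prediction, patching in the stored key elements."""
    corrections, out, prev = dict(stored), [], None
    for i in range(length):
        tok = corrections.get(i, predict(prev))
        out.append(tok)
        prev = tok
    return out

# A deliberately dumb predictor: "the next token equals the last one".
same_as_last = lambda prev: prev

length, stored = compress(list("aaabbbbcc"), same_as_last)
assert decompress(length, stored, same_as_last) == list("aaabbbbcc")
print(stored)  # [(0, 'a'), (3, 'b'), (7, 'c')] -- only the change points
```

With a smarter predictor (say, a language model, as in the paper), the stored residue shrinks to just the surprising parts; that’s the sense in which better prediction is better compression.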
