Comment on An Analysis of DeepMind's 'Language Modeling Is Compression' Paper
AbouBenAdhem@lemmy.world 1 year ago
Firstly, maybe what we consider an “association” is actually an indicator that our brains are using the same internal tokens to store/compress the memories.
But what I was thinking of specifically is narrative memories: our brains don’t store them frame-by-frame like video, but rather, they probably store only key elements and use their predictive ability to extrapolate the omitted elements on demand.
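The keyframe idea above can be sketched as a toy compressor: keep only every k-th frame of a 1-D "video" signal and reconstruct the dropped frames with a simple linear predictor. Everything here (the function names, the signal) is invented for illustration, not anything from the paper; the point is just that perfectly predictable content reconstructs losslessly from a few key elements.

```python
def compress(frames, k):
    """Keep only every k-th frame (the stored 'key elements')."""
    return frames[::k]

def reconstruct(keyframes, k, length):
    """Fill in the omitted frames by linear interpolation between keyframes."""
    out = []
    for i in range(length):
        lo, rem = divmod(i, k)
        hi = min(lo + 1, len(keyframes) - 1)
        t = rem / k
        out.append((1 - t) * keyframes[lo] + t * keyframes[hi])
    return out

frames = [x * 0.5 for x in range(17)]     # a perfectly predictable "scene"
keys = compress(frames, 4)                # store 5 values instead of 17
approx = reconstruct(keys, 4, len(frames))
max_err = max(abs(a - b) for a, b in zip(frames, approx))
# the linear scene reconstructs exactly; an unpredictable scene would not
```

The less predictable the scene, the worse the reconstruction, which is the compression-prediction tradeoff the paper is about.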
abhi9u@lemmy.world 1 year ago
InvertedParallax@lemm.ee 1 year ago
No, because our brains also use hierarchical activation for association, which is why if we’re talking about bugs and I say “I got a B” you assume it’s a stinging insect, not a passing grade.
If it were simple word2vec, we wouldn’t have that additional means of noise suppression.
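For contrast, here is roughly what the flat word2vec-style baseline looks like: average the context vectors and pick the nearest sense by cosine similarity. All vectors below are hand-made 3-d toys, not real embeddings; the comment's argument is that brains layer hierarchical activation on top of this kind of mechanism.

```python
import math

# Invented toy embeddings: axis 0 ~ "insect topic", axis 2 ~ "school topic".
emb = {
    "bug":    (0.9, 0.1, 0.0),
    "insect": (0.8, 0.2, 0.0),
    "exam":   (0.0, 0.1, 0.9),
    "grade":  (0.1, 0.0, 0.9),
}
senses = {"bee": (0.85, 0.15, 0.05), "letter_grade": (0.05, 0.05, 0.9)}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def disambiguate(context_words):
    # Average the context vectors, then pick the closest sense.
    ctx = [sum(emb[w][i] for w in context_words) / len(context_words)
           for i in range(3)]
    return max(senses, key=lambda s: cosine(ctx, senses[s]))

print(disambiguate(["bug", "insect"]))   # insect context pulls "B" toward the bee sense
print(disambiguate(["exam", "grade"]))   # school context pulls it toward the grade sense
```

A flat average like this only captures one level of context; it has no notion of the nested topic structure (conversation → subject → word) that the hierarchical-activation point is about.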
GenderNeutralBro@lemmy.sdf.org 1 year ago
This seems likely to me. The common saying is “you hear what you want to hear”, but I think more accurately it’s “you remember what has meaning to you”. Recently there was a study suggesting that even visual memory is tightly integrated with spoken language: www.science.org/doi/10.1126/sciadv.adh0064
However, there’s a lot of variation in memory among humans. See: The Mind of a Mnemonist by A. R. Luria.