Comment on Emergent introspective awareness in large language models

kromem@lemmy.world ⁨3⁩ ⁨days⁩ ago

A few months back it was found that, when writing rhyming couplets, the model had already selected the closing rhyme word while it was still predicting the first word of the second line. In other words, the model was planning the final rhyme tokens at least one full line ahead, rather than only choosing the rhyme once it arrived at that token.

It’s probably wise to consider this finding in concert with the streetlight effect.

source