Comment on AGI achieved
jsomae@lemmy.ml 2 days ago
The LLM isn't aware of its own limitations in this regard. The specific problem of getting an LLM to know which characters a token comprises has not been a focus of training. It's a totally different kind of error from other hallucinations, almost entirely orthogonal to them. But other hallucinations are much more important to solve, whereas counting the letters in a word or adding numbers together is not very important since, as you point out, there are already programs that can do that.
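To illustrate the point about tokens vs. characters: a minimal sketch below, using a made-up two-entry vocabulary (`toy_vocab` is hypothetical, not any real tokenizer). The model receives only the integer token IDs, so a question like "how many r's are in strawberry" asks about characters it never directly sees.

```python
# Toy illustration (NOT a real tokenizer): an LLM consumes token IDs,
# not characters, so letter counts inside a token are opaque to it.
toy_vocab = {"straw": 101, "berry": 102}  # hypothetical BPE-style merges

def toy_tokenize(word):
    """Greedy longest-match split against the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in toy_vocab:
                tokens.append(toy_vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

print(toy_tokenize("strawberry"))   # [101, 102] -- all the model sees
print("strawberry".count("r"))      # 3 -- needs character-level access
```

Nothing in `[101, 102]` encodes that the underlying string contains three r's; that mapping would have to be memorized during training, which is the gap the comment describes.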
outhouseperilous@lemmy.dbzer0.com 2 days ago
The most convincing arguments that LLMs are like humans aren't that LLMs are good, but that humans are just unrefrigerated meat and personhood is a delusion.
jsomae@lemmy.ml 2 days ago
This might well be true, yeah. But that's still good news for AI companies who want to replace humans: the bar's lower than they thought.
outhouseperilous@lemmy.dbzer0.com 2 days ago
And why we should fight them tooth and nail, yes.
They're not just replacing us, they're making us suck more so it's an easy sell.
jsomae@lemmy.ml 2 days ago
Well yeah. You're preaching to the choir lol.