My favourite comparison for LLMs is autocorrect: it just guesses, it gets stuff wrong, and it's constantly being retrained to recognise your preferences, like when it eventually stops correcting "fuck" to "duck".
And it's funny and sad how some people think these LLMs are their friends. No, it's a colossally sized autocorrect system you cannot fully comprehend. It has no consciousness, it lacks any thought, it just predicts the next token from a prompt.
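To illustrate the "it just predicts from a prompt" point: here's a toy sketch of next-word prediction using simple bigram counts. Real LLMs use neural networks trained on vast corpora, not lookup tables, and the tiny example corpus here is made up, but the basic idea of "guess the most likely continuation" is the same.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict(prev):
    """Return the word most often seen after `prev` in the corpus."""
    return next_word_counts[prev].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often here
```

No understanding, no thought, just statistics over what came before.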
Opinionhaver@feddit.uk 1 year ago
Why is AGI not in reach? What insight do you have on the matter that lets you so confidently make an absolute statement like that?
oakey66@lemmy.world 1 year ago
Experts in the field.
open.spotify.com/episode/4IoS9rBDq7GLwsgccKqCti?s…
I also work in the industry. In particular I work in data analytics consulting. It’s all hype to sell consulting hours and compute.
Opinionhaver@feddit.uk 1 year ago
Then please explain your reasoning. Statements alone are meaningless if you're unable to back them up with explanations.