We kinda were just temporal auto-complete, though
Comment on: How do AI-based search engines know legit sources from BS ones?
Apepollo11@lemmy.world 3 days ago
At the end of the day, isn’t that just how we work, though? We tokenise information, make connections between these tokens and regurgitate them in ways that we’ve been trained to do.
Even our “novel” ideas are always derivative of something we’ve encountered. They have to be, otherwise they wouldn’t make any sense to us.
Describing current AI models as “fancy auto-complete” feels like describing electric cars as “fancy Scalextric”. Neither is completely wrong, but both are massively over-reductive.
swordgeek@lemmy.ca 2 days ago
I’ve thought a lot about this over the last few years, and have decided there’s one critical distinction: Understanding.
When we combine knowledge to come to a conclusion, we understand (or even misunderstand) that knowledge we’re using. We understand the meaning of our conclusion.
LLMs don’t understand. They programmatically and statistically combine data - not knowledge - to come up with a likely outcome. They are non-deterministic auto-complete bots, and that is ALL they are. There is no intelligence, and the current LLM framework will never lead to actual intelligence.
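The “non-deterministic auto-complete” claim can be made concrete with a toy next-token sampler. This is a minimal sketch, assuming a tiny hand-picked corpus; the corpus and function names are illustrative, and a real LLM uses learned probabilities over a huge vocabulary rather than raw counts.

```python
# Minimal sketch of statistical auto-complete: pick the next token
# by sampling from observed continuations. Not a real LLM.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# record every token that was observed to follow each token
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def next_token(prev, rng=random):
    # non-deterministic: a random draw weighted by observed frequency
    return rng.choice(following[prev])

print(next_token("cat"))  # "sat" or "ate", depending on the draw
```

The draw is statistical, not semantic: nothing in the model “knows” what a cat is, it only knows which tokens tended to follow it, which is the distinction the comment is pointing at.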
They’re parlour tricks at this point, nothing more.