I’m pleased to inform you that you are wrong.
A large language model works by predicting the statistically likely next token in a sequence of tokens, and repeating until it's statistically likely that its response has finished.
You can think of a token as a word, but in reality tokens can be individual characters, parts of words, whole words, or multi-word sequences.
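For instance, here's roughly what that looks like with OpenAI's tiktoken BPE tokenizer (just one concrete example; other models use different vocabularies, so the exact splits vary):

```python
# Rough sketch, assuming the `tiktoken` package is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["cat", "uncharacteristically", "New York"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {pieces}")
# Short common words tend to be a single token; longer or rarer words
# get split into sub-word pieces.
```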
The only addition these “agentic” models have is special-purpose tokens: one that means “launch program”, for example.
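That loop looks roughly like this (a toy sketch with made-up token IDs and a stub standing in for the actual model; none of these names are a real library's API):

```python
import random

END_OF_SEQUENCE = 0   # "the response is finished" token
LAUNCH_PROGRAM = 1    # hypothetical special-purpose "agentic" token

def next_token_distribution(tokens):
    """Stand-in for a real model: returns (vocab, weights) for the next token."""
    vocab = [END_OF_SEQUENCE, LAUNCH_PROGRAM, 2, 3, 4]
    weights = [0.10, 0.05, 0.40, 0.25, 0.20]
    return vocab, weights

def run_tool(tokens):
    """Stand-in for whatever program the special token triggers."""
    return [2, 3]  # pretend this is the tool's output, re-tokenized

def generate(prompt_tokens, max_tokens=50):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        vocab, weights = next_token_distribution(tokens)
        tok = random.choices(vocab, weights=weights, k=1)[0]
        if tok == END_OF_SEQUENCE:     # model decides the response is finished
            break
        if tok == LAUNCH_PROGRAM:      # "agentic" token: run a tool, feed its output back in
            tokens.extend(run_tool(tokens))
            continue
        tokens.append(tok)
    return tokens

print(generate([2, 4]))
```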
That’s literally how it works.
AI. Cannot. Think.
_cnt0@sh.itjust.works 3 days ago
That’s a lot of bullshit.
IronBird@lemmy.world 3 days ago
This bubble can’t pop soon enough. Was dot-com this annoying too?
pinball_wizard@lemmy.zip 3 days ago
Surprisingly, it was not this annoying.
It was very annoying, but at least there was an end in sight, and some of it was useful.
We all knew that www.only-socks-and-only-for-cats.com was going away, but eBay was still pretty great.
In contrast, we’re all standing around today looking at many times the world’s GDP being bet on a pretty good autocomplete algorithm waking up and becoming fully sentient.
It feels like a different level of irrational.
_cnt0@sh.itjust.works 3 days ago
To me, this is more annoying. But I might have been too young and naïve back then.
realitista@lemmus.org 3 days ago
If you can’t see it you’re not paying attention.
_cnt0@sh.itjust.works 3 days ago
If you’re seeing it, you’re delusional.