_cnt0@sh.itjust.works 4 days ago
With AI, now it does the thinking for you […]
No, it doesn’t. It’s just mimicry. Autocomplete on steroids.
Have you met many people?
Most people’s entire lives are a form of autocomplete.
Obvious non-argument is obvious.
My father is convinced that humans and dinosaurs coexisted, and told me that AI proved that to him. So… people do let it think for them.
So he lets the “AI” do the hallucinating for him.
Yep lol.
realitista@lemmus.org 4 days ago
This was true last year. But they are making steady progress on the ARC-AGI benchmarks, which are designed specifically to test the kinds of things that cannot be done by just regurgitating training data.
On GPT-3 I was getting a lot of hallucinations and wrong answers. On the current version of Gemini, I really haven’t been able to detect any errors in things I’ve asked it. They are doing math correctly now, researching things well, and putting together thoughts correctly. Even photos that I couldn’t get old models to generate are now coming back pretty much exactly as I ask.
I was sort of holding out hope that LLMs would peak somewhere just below being really useful. But with RAG and agentic approaches, it seems that they will sidestep the vast majority of problems that LLMs have on their own, and be able to put together something that is better than even very good humans at most tasks.
I hope I’m wrong, but it’s getting pretty hard to cling to the old narrative that they are just fancy autocomplete that can’t think.
_cnt0@sh.itjust.works 4 days ago
That’s a lot of bullshit.
IronBird@lemmy.world 4 days ago
this bubble can’t pop soon enough. Was dot-com this annoying too?
pinball_wizard@lemmy.zip 4 days ago
Surprisingly, it was not this annoying.
It was very annoying, but at least there was an end in sight, and some of it was useful.
We all knew that www.only-socks-and-only-for-cats.com was going away, but eBay was still pretty great.
In contrast, we’re all standing around today looking at many times the world’s GDP being bet on a pretty good autocomplete algorithm waking up and becoming fully sentient.
It feels like a different level of irrational.
_cnt0@sh.itjust.works 4 days ago
To me, this is more annoying. But I might have been too young and naïve back then.
realitista@lemmus.org 4 days ago
If you can’t see it you’re not paying attention.
_cnt0@sh.itjust.works 4 days ago
If you’re seeing it, you’re delusional.
Cevilia@lemmy.blahaj.zone 3 days ago
I’m pleased to inform you that you are wrong.
A large language model works by predicting the statistically likely next token in a string of tokens, repeating until it’s statistically likely that its response has finished.
You can think of a token as a word, but in reality tokens can be individual characters, parts of words, whole words, or multiple words in sequence.
The only addition these “agentic” models have is special-purpose tokens: one that means “launch program”, for example.
That’s literally how it works.
AI. Cannot. Think.
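For what it’s worth, the next-token loop described above can be sketched in a few lines. This is a toy illustration only: the hard-coded probability table stands in for a trained network, and all the tokens and probabilities are made up.

```python
import random

# Toy "model": a hand-made table of next-token probabilities.
# A real LLM learns billions of parameters instead of this table.
NEXT = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the":     [("cat", 0.5), ("dog", 0.5)],
    "a":       [("cat", 0.5), ("dog", 0.5)],
    "cat":     [("sat", 0.7), ("<end>", 0.3)],
    "dog":     [("sat", 0.7), ("<end>", 0.3)],
    "sat":     [("<end>", 1.0)],
}

def generate(seed=0):
    rng = random.Random(seed)
    tokens = ["<start>"]
    # Keep sampling a statistically likely next token until the
    # model emits the token meaning "the response has finished".
    while tokens[-1] != "<end>":
        candidates = NEXT[tokens[-1]]
        words = [w for w, _ in candidates]
        weights = [p for _, p in candidates]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens[1:-1])

print(generate())
```

An “agentic” variant would just add entries like a hypothetical `"<tool:launch_program>"` token to the table and act on it when sampled; the loop itself doesn’t change.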
realitista@lemmus.org 3 days ago
…And what about non-LLM models like diffusion models, VL-JEPA, SSM, VLA, SNN? Just because you are ignorant of what’s happening in the industry and repeating a narrative that worked 2 years ago doesn’t make it true.
And even with LLMs: if they aren’t “thinking” but produce results as good as or better than real human “thinking” in major domains, does it even matter? The fact is that there will be many types of models, working in very different ways, cooperating to beat humans at tasks that were uniquely human.
sukhmel@programming.dev 2 days ago
Yeah, those can’t think either, and that won’t change soon.
The real problem, though, is not whether LLMs can think or not; it’s that people will interact with them as if they can, and will let them do the decision making even when it’s not far from throwing dice.