We have already thrown just about all of the Internet and then some at them. It shows that LLMs cannot think or reason — which isn’t surprising; they weren’t meant to.
TankovayaDiviziya@lemmy.world 2 days ago
We poked fun at this meme, but it goes to show that the LLM is still like a child that needs to be taught to make implicit assumptions and possess contextual knowledge. The current model of LLM needs a lot more input and instructions to do specifically what you want it to do, like a child.
kshade@lemmy.world 2 days ago
eronth@lemmy.world 2 days ago
Or at least they can’t reason the way we do about our physical world.
zalgotext@sh.itjust.works 2 days ago
No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.
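The “statistics-based autocomplete” idea can be illustrated with a deliberately crude sketch — a toy bigram model over a made-up ten-word corpus. Real LLMs are vastly larger transformer networks, but the core move of picking a statistically likely next token is the same:

```python
import random
from collections import Counter, defaultdict

# A made-up toy corpus, purely for illustration
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Pick the next word weighted by how often it followed `word` in the corpus.
    # No understanding anywhere -- just counts.
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_word("the"))  # "cat", "mat", or "fish", proportional to frequency
```

After “the”, the model says “cat” half the time simply because “cat” followed “the” in half of the observed cases — scaling this up to trillions of tokens and billions of parameters changes the fluency, not the nature of the operation.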
SuspciousCarrot78@lemmy.world 1 day ago
You seem pretty sure of that. Is your position firm or are you willing to consider contrary evidence?
Nalivai@lemmy.world 2 days ago
You’re falling into the same trap. When the letters on the screen tell you something, it’s not necessarily the truth. When “I’m reasoning” appears in a chatbot window, it doesn’t mean that there is something actually reasoning.
prole@lemmy.blahaj.zone 2 days ago
I’m sure it’ll be worth it at some point 🙄
sturmblast@lemmy.world 2 days ago
LLMs are a long long way from primetime
Nalivai@lemmy.world 2 days ago
By now it’s getting pretty clear that this is fundamentally the best version of the thing we’re going to get. This is primetime.
For some time, there was a legit question of “if we give it enough data, will there be a qualitative jump”, and as far as we can see right now, we’re way past that jump. A predictive algorithm can form grammatically correct sentences that are related to the context. That’s it, that’s the jump.
Now a bunch of salespeople are trying to convince us that if there was one jump, there necessarily will be others, while there is no real indication of that.
rob_t_firefly@lemmy.world 2 days ago
Except children can experience, learn, and grow. Spicy autocomplete will never do any of these things.
IphtashuFitz@lemmy.world 2 days ago
I like the idea of referring to LLMs as “spicy autocomplete”.
TankovayaDiviziya@lemmy.world 2 days ago
I’m sure AI will do those things at some point. Nobody expected the same of our microorganism ancestors.
rob_t_firefly@lemmy.world 2 days ago
Our microorganism ancestors also did all those things, and they were far beyond anything an LLM can do. Turning words into numbers, doing a string of math to those numbers, and turning the resulting numbers back into words is not consciousness or wisdom and never will be.
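The words-to-numbers-to-words pipeline that comment describes can be reduced to a skeleton like this (a hypothetical three-word vocabulary and random weights, purely for illustration; nothing here resembles a trained model):

```python
import numpy as np

# "Turning words into numbers" is just a lookup into a toy vocabulary
vocab = ["cats", "chase", "mice"]
word_to_id = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((3, 4))   # each word becomes a vector of numbers
weights = rng.standard_normal((4, 3))      # "a string of math": one matrix multiply

def predict_next(word):
    vec = embeddings[word_to_id[word]]     # word -> numbers
    scores = vec @ weights                 # math on the numbers
    return vocab[int(np.argmax(scores))]   # numbers -> word

print(predict_next("cats"))  # emits some vocabulary word; arithmetic, not meaning
```

A real LLM stacks many such multiplications with nonlinearities in between and learns the weights from data, but every step is still arithmetic of this kind.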
TankovayaDiviziya@lemmy.world 2 days ago
You think microorganisms can reason? Wow, AI haters are grasping at straws.
Honestly, I don’t understand Lemmy scoffing at AI and assuming the current iteration is all it will ever be. I’m sure people once dismissed automobile technology as going nowhere simply because the first models ran at 3 mph. These things always take time.
To be clear, I’m not endorsing AI, but I think there is huge potential in the years to come, for better or worse. And it is especially important never to underestimate AI — especially for its haters — given its destructive potential.
plyth@feddit.org 2 days ago
Neither is moving electrolytes around fat barriers.
herrvogel@lemmy.world 2 days ago
LLMs can’t learn. It’s one of their inherent properties that they are literally incapable of learning. You can train a new model, but you can’t teach new things to an already trained one; all you can do is adjust its behavior a little bit. That creates an extremely expensive cycle where you have to spend insane amounts of energy training better models over and over and over again. And we’ve already smashed into the wall of diminishing returns on that. That, plus the fact that they simply don’t have concepts like logic and reasoning, puts a rather hard limit on their potential. It’s gonna take several sizeable breakthroughs to make LLMs noticeably better than they are now.
There might be another kind of AI that solves those problems inherent to LLMs, but at present that is pure sci-fi.
enumerator4829@sh.itjust.works 2 days ago
I started experimenting with the spice this past week. Went ahead and tried to vibe code a small toy project in C++. It’s weird. I’ve got some experience teaching programming, and this is exactly like teaching beginners — except that the syntax is almost flawless and it writes fast. The reasoning and design capabilities, on the other hand — “like a child” is actually an apt description.
I don’t really know what to think yet. The ability to automate refactoring across a project in a more “free” way than an IDE is kinda nice. While I enjoy programming, data structures and algorithms, I kinda get bored at the “write code” part, so really, spicy autocomplete is getting me far more progress than usual on my hobby projects so far.
On the other hand, holy spaghetti monster, the code you get if you let it run free. All the people prompting based on what feature they want the thing to add will create absolutely horrible piles of garbage. But if I prompt with a decent specification of the code I want, I get code somewhat close to what I want, and given an iteration or two I’m usually fairly happy. I think I can get used to the spicy autocomplete.