Kjell@lemmy.world 1 week ago
I tried Le Chat with the car wash question.
That’s a fun question! Since the car wash is only 50 meters away, walking is definitely the more practical choice. Driving such a short distance would take more time to start the car, drive, and park than it would to simply walk over. Plus, you’ll save a bit of fuel and avoid any unnecessary wear on your car. If you’re already outside and the weather is nice, it’s a great opportunity for a quick stretch!
However, I also asked it with a slightly different phrasing and then the result was different. At least it replies that it might make sense to drive there.
I need to wash my car. The car wash is 50 meters away from my house, should I drive or walk there?
That’s a fun question! Since the car wash is only 50 meters away, walking would be the most practical choice—it’s just a short stroll, and you’d avoid the hassle of starting your car for such a tiny distance. Plus, you’d save a bit of fuel and get a little fresh air! But if you’re planning to wash the car yourself and need to bring supplies, driving might make sense. Are you using an automatic car wash, or are you doing it by hand?
SuspciousCarrot78@lemmy.world 1 week ago
That’s the thing. It’s not that the LLMs can’t solve the problem…it’s the way they’re optimized.
To give a crude analogy: if most LLMs are set up for the equivalent of typing BOOBS on a calculator (and the big players are happy to keep it that way; more engagement, smoother vibes, etc.), a constraints-first approach is what happens when you use a calculator to do actual maths.
2+2=4 (always, unless shrooms are in play).
I said this before, so pardon me for being gauche and quoting myself:
Every reasoning system needs premises - you, me, a 4-year-old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises (read: constraints) isn’t a test of reasoning, it’s a demand for magic. Premise-dependence isn’t a bug, it’s the definition.
People see things like Le Chat fall over and go “Ha ha. Auto-complete go brrr”. That’s lazy framing. A calculator is “just” voltage differentials on silicon. That description is true, and it also tells you nothing useful about whether the calculator is doing arithmetic.
My argument is this: the question of whether something is or isn’t reasoning IS NOT answered by describing what it runs on; it’s answered by looking at whether it exhibits the structural properties of reasoning. I think LLMs can do that…they’re just borked (…intentionally?). Case in point - see my top post.
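The premise-dependence point can be made concrete. Here’s a toy sketch (my own illustration, not anything from the thread): a minimal forward-chaining deducer where rules only fire when their premises are already in the fact set. Give it premises and conclusions follow; give it an empty set and nothing can ever fire - not because the machinery is broken, but because deduction from nothing isn’t a thing.

```python
def deduce(facts, rules):
    """Apply rules of the form (premises, conclusion) until no rule
    adds anything new (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires only if ALL its premises are already known.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]

# With a premise, the chain of conclusions follows:
print(deduce({"rain"}, rules))

# With no premises, the same machinery produces nothing - by design:
print(deduce(set(), rules))
```

Same reasoner, same rules; the only difference between the two calls is whether it was given premises to work from.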
I literally “Tony Stanked” my way to it. Now imagine if someone with resources and a budget did it.