SuspciousCarrot78@lemmy.world 1 week ago

That’s the thing. It’s not that the LLMs can’t solve the problem…it’s the way they’re optimized.

To give a crude analogy: if most LLMs are set up for the equivalent of typing BOOBS on a calculator (and the big players are happy to keep it that way; more engagement, smoother vibes, etc.), a constraints-first approach is what happens when you use the calculator to do actual maths.

2+2=4 (always, unless shrooms are in play).
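To be concrete about what I mean by constraints-first, here's a rough sketch in Python. Everything in it is made up for illustration: `ask_llm` is a stand-in for whatever client you use, and the premises are arbitrary examples, not a recipe.

```python
# Hypothetical sketch of the two prompting styles. `ask_llm` is a
# placeholder, not any particular vendor's API.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your actual model client here")

# BOOBS-on-a-calculator mode: no premises, the model free-associates.
vibes_prompt = "What's the best database for my app?"

# Constraints-first mode: premises are pinned down before any
# conclusion is asked for, and the model must tie claims back to them.
constrained_prompt = (
    "Premises (treat as fixed constraints):\n"
    "1. Single-node deployment, no ops team.\n"
    "2. Dataset fits in 10 GB.\n"
    "3. Reads outnumber writes 100:1.\n"
    "Task: recommend a database. Cite which premise supports each "
    "claim, and answer 'cannot conclude' if the premises don't "
    "determine an answer."
)
```

Same model, same weights; the second prompt just gives it something to reason *from*.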

I said this before, so pardon me for being gauche and quoting myself:

Every reasoning system needs premises - you, me, a 4-year-old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises (read: constraints) isn't a test of reasoning, it's a demand for magic. Premise-dependence isn't a bug, it's the definition.
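This isn't just rhetoric; it's how formal logic works too. A one-line Lean example (purely illustrative): modus ponens yields a conclusion only once the hypotheses h₁ and h₂ are supplied; remove either and there is nothing to derive.

```lean
-- Q follows only because the premises h₁ and h₂ are given.
-- Delete either hypothesis and the proof cannot be built.
example (P Q : Prop) (h₁ : P → Q) (h₂ : P) : Q := h₁ h₂
```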

People see things like Le-Chat fall over and go “Ha ha. Auto-complete go brrr”. That’s lazy framing. A calculator is “just” voltage differentials on silicon. That description is true and also tells you nothing useful about whether it’s doing arithmetic.

My argument is this: the question of whether something is or isn’t reasoning IS NOT answered by describing what it runs on; it’s answered by looking at whether it exhibits the structural properties of reasoning. I think LLMs can do that…they’re just borked (…intentionally?). Case in point - see my top post.

I literally “Tony Stanked” my way to it. Now imagine if someone with resources and a budget did it.
