Comment on Is the AI hype still on or have the models plateaued?
9488fcea02a9@sh.itjust.works 1 day agoWhat is the solution? Am i stupid?
ExLisper@lemmy.curiana.net 1 day ago
It’s not about a solution. It’s about how they react.
First, this “puzzle” is intentionally missing its constraints, so the “smart” thing to do would be to point that out and ask for them.
No LLM I tested does that. Older models would outright refuse to solve it because the question is too controversial; when asked why it’s controversial, they would refuse to elaborate.
Newer models hallucinate constraints, in one of two ways. Some models assume “a priest can’t stay with a child”, which reveals a funny bias ingrained in the model. Others claim there are no constraints at all. I haven’t seen a model that hallucinates only the “child can’t stay with candy” constraint and responds correctly.
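For reference: if a model did assume only the “child can’t stay with candy” constraint, the puzzle reduces to a trivial wolf-goat-cabbage variant. A brute-force sketch, under my own assumptions (the narrator rows, the boat holds the narrator plus at most one passenger):

```python
from collections import deque

ITEMS = ("priest", "child", "candy")
# Assumed: the only forbidden unsupervised pair is child + candy.
FORBIDDEN = [{"child", "candy"}]

def is_safe(state):
    """state maps each item plus 'me' to 0 (start bank) or 1 (far bank)."""
    for bank in (0, 1):
        if state["me"] == bank:
            continue  # the narrator supervises this bank
        left = {item for item in ITEMS if state[item] == bank}
        if any(pair <= left for pair in FORBIDDEN):
            return False
    return True

def solve():
    """BFS over crossings; returns the shortest list of passengers."""
    start = {"me": 0, "priest": 0, "child": 0, "candy": 0}
    goal = {k: 1 for k in start}
    queue = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        # Narrator crosses alone, or with one passenger from their bank.
        for p in (None, *[i for i in ITEMS if state[i] == state["me"]]):
            nxt = dict(state)
            nxt["me"] ^= 1
            if p:
                nxt[p] ^= 1
            key = tuple(sorted(nxt.items()))
            if key not in seen and is_safe(nxt):
                seen.add(key)
                queue.append((nxt, path + [p or "nobody"]))
    return None

print(solve())
```

Under that single constraint, five crossings suffice: take the child over, return, take the priest over, return, take the candy over. Ten lines of BFS do what the flagship models fumble.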
Sonnet 4.6, one of the best models out there, claims that “the child can stay alone with the candy because children can’t eat candy”. When I pointed out that that’s dumb, it introduced the constraint and replied with:
Image
That’s one of the best models out there…
otto@programming.dev 21 hours ago
You can easily use the link openrouter.ai/chat?models=anthropic%2Fclaude-opus… to ask all flagship models this question in parallel. Personally I would definitely not leave my children alone with a priest (they might try to convert them), but if your constraint is only baby+candy, then in my test Gemini, GLM, Qwen and Kimi made that, and only that, assumption.
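The same parallel test can be scripted against OpenRouter’s OpenAI-compatible chat completions endpoint. A sketch — the model slugs below are my own guesses, not verified IDs, and it only hits the network if an API key is set:

```python
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODELS = [
    "anthropic/claude-sonnet-4.5",  # assumed slug
    "google/gemini-2.5-pro",        # assumed slug
    "qwen/qwen3-max",               # assumed slug
]
QUESTION = (
    "A man, a priest, a child and a bag of candy must cross a river "
    "in a small boat. What is the solution?"
)

def make_request(model: str, question: str):
    """Build (url, headers, body) for one model; no network I/O here."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {"model": model,
            "messages": [{"role": "user", "content": question}]}
    return API_URL, headers, json.dumps(body).encode()

if os.environ.get("OPENROUTER_API_KEY"):  # only query with a real key
    for model in MODELS:
        url, headers, data = make_request(model, QUESTION)
        req = urllib.request.Request(url, data=data, headers=headers)
        with urllib.request.urlopen(req) as resp:
            answer = json.load(resp)["choices"][0]["message"]["content"]
            print(f"--- {model} ---\n{answer}\n")
```

Swapping the model list lets you rerun the comparison whenever a new flagship ships.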
ExLisper@lemmy.curiana.net 21 hours ago
In my opinion the proper solution is to ask for the constraints. As with the “walk or drive to the car wash” problem, LLMs still tend to get confused by a familiar format and don’t notice that the problem doesn’t make sense. You can actually play around with different examples to see how crazy the problem has to get for an LLM to refuse to answer, and what biases or constraints it has. Even when they do assume some constraints, they fail to solve this puzzle surprisingly often (as I showed for Sonnet 4.6 in the other comment).
otto@programming.dev 21 hours ago
Image
MagicShel@lemmy.zip 1 day ago
I have to admit, this is more entertaining than counting ‘r’s in “strawberry”. Novel logic puzzles are all but impossible for them because there is no “logic” input in token selection.
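(For the record, the count the models famously fumble is a one-liner in plain string ops:)

```python
# Deterministic character counting -- no token-selection roulette involved.
count = "strawberry".count("r")
print(count)  # three r's: st-r-awbe-rr-y
```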
That being said, the first thing that came to my mind is that at some point the (presumably) adults, me and the priest, are both going to be on the boat, which would necessarily leave the baby alone on one shore or the other.
Clearly, the only viable solution is the baby eats the candy, and then the priest eats the baby.