If LLMs can’t actually do whatever you tell them based purely on natural-language instructions, then vendors need to stop advertising them that way.
It’s not just the advertising that’s the problem: do any of them even have a user manual? How is someone with no experience prompting LLMs (which was everyone three years ago) supposed to learn how to formulate a “correct” prompt without any instructions? It’s a smokescreen for blaming any bad output on the user.
Oh, it told you to put glue on your pizza? You didn’t prompt it right. It gave you explicit instructions on how to kill yourself because you talked about being suicidal? You prompted it wrong. It completely made up new anatomical terminology? You have, once again, prompted it wrong! (Don’t make me dig up links to all those news stories.)
It’s funny how the fediverse comes down so hard on the side of ‘RTFM’ for anything Linux-related, but with LLMs it’s somehow the user’s fault for trusting a fraudulently marketed product that doesn’t even ship with a manual to read.
wischi@programming.dev 4 months ago
Nobody claimed that any sewing machine has PhD-level intelligence in almost all subjects.
LLMs, meanwhile, are marketed with claims like “replaces jobs”, “PhD-level intelligence”, “reasoning models”, and “Deep think”.
And yet all that “PhD-level intelligence” consistently gets the simplest things wrong.
But prove me wrong: pick a game, prompt any LLM you like, and share the result here (the whole conversation, not just a code snippet).