Comment on: Might not be efficient, but at least it... Uhhh, wait, what good does it provide again?
wischi@programming.dev 21 hours ago
Try to play tic tac toe against ChatGPT for example 🤣 (just ask for “let’s play ASCII tic tac toe”)
It loses practically every game against my 4-year-old child - if it even manages to play according to the rules.
AI: Trained on the entire internet using billions of dollars. 4yo: Just told her the rules of the game twice.
Currently the best LLMs are certainly very “knowledgeable” (as in, they “know” far more than I - or practically any person - do about most topics), but they are still far from intelligent.
You should only use them if you are able to verify the correctness of the output yourself.
fonix232@fedia.io 18 hours ago
"See, no matter how much I'm trying to force this sewing machine to be a racecar, it just can't do it, it's a piece of shit"
Just because there are some similarities doesn't mean an LLM will perform well when you misuse it. You have to treat it as a tool with a specific purpose. In the case of LLMs, that purpose is to take a bunch of input tokens, analyse them, and output the tokens that are statistically the most likely "best response". The intelligence is in putting that together, not in "understanding tic tac toe". Mind you, you can tie in other ML frameworks for specific tasks they are better suited for - e.g. you can hook up a chess engine (or a tic-tac-toe engine), and that will beat you every single time.
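To make the "statistically most likely" part concrete, here's a toy sketch in Python (the candidate tokens and their scores are entirely made up for illustration; a real model derives them from the whole context):

```python
import math

def softmax(logits):
    # Turn raw scores into probabilities (shift by max for stability).
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical model scores for the token after "X plays the top-":
logits = {"left": 2.1, "right": 1.8, "middle": 0.3, "banana": -4.0}
probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding picks "left"
print(next_token, f"{probs[next_token]:.2f}")
```

That's all "generation" is at each step: score the candidates, pick one, repeat.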
Or an even better example... Instead of asking the LLM to play tic-tac-toe with you, ask it to write a Bash/Python/JavaScript tic-tac-toe game, and try playing against that. You'll be surprised.
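For what it's worth, here's a minimal sketch of the kind of script I mean (hypothetical, not actual LLM output): a terminal tic-tac-toe game where you play X and the computer plays a brute-force minimax O. Because minimax searches every reachable position on a 3x3 board, it plays perfectly - it will draw or beat you every single time:

```python
# Hypothetical example: human is X, the computer plays a perfect minimax O.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return "X" or "O" if someone has three in a row, else None.
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score the position from O's point of view; return (score, move).
    w = winner(board)
    if w == "O":
        return 1, None
    if w == "X":
        return -1, None
    if " " not in board:
        return 0, None
    best = None
    for i in range(9):
        if board[i] == " ":
            board[i] = player
            score, _ = minimax(board, "O" if player == "X" else "X")
            board[i] = " "
            if (best is None
                    or (player == "O" and score > best[0])
                    or (player == "X" and score < best[0])):
                best = (score, i)
    return best

def show(board):
    print("\n".join("|".join(board[r * 3:r * 3 + 3]) for r in range(3)), "\n")

board = [" "] * 9
while winner(board) is None and " " in board:
    show(board)
    try:
        move = int(input("Your move (0-8): "))
        if not 0 <= move <= 8 or board[move] != " ":
            raise ValueError
    except ValueError:
        print("Invalid move, try again.")
        continue
    board[move] = "X"
    if winner(board) is None and " " in board:
        _, ai_move = minimax(board, "O")
        board[ai_move] = "O"

show(board)
result = winner(board)
print({"O": "Computer wins", "X": "You win"}.get(result, "Draw"))
```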
wischi@programming.dev 1 hour ago
Nobody claimed that any sewing machine has PhD-level intelligence across almost all topics.
LLMs are marketed as “replaces jobs”, “PhD level intelligence”, “Reasoning models”, “Deep think”.
And yet all that “PhD level intelligence” consistently gets the simplest things wrong.
But prove me wrong: pick a game, prompt any LLM you like, and share it here (the whole conversation, not just a code snippet).
Catoblepas@piefed.blahaj.zone 18 hours ago
If LLMs can’t do whatever you tell them based purely on natural-language instructions, then vendors need to stop advertising them that way.
It’s not just the advertising that’s the problem: do any of them even have a user manual? How is a user with no experience prompting LLMs (which was everyone three years ago) supposed to learn how to formulate a “correct” prompt without any instructions? It’s a smokescreen for blaming any bad output on the user.
Oh, it told you to put glue on your pizza? You didn’t prompt it right. It gives you explicit instructions on how to kill yourself because you talked about being suicidal? You prompted it wrong. It completely makes up new medical anatomical terminology? You have once again prompted it wrong! (Don’t make me dig up links to all those news stories.)
It’s funny how the fediverse tends to come down hard on the side of “RTFM” for anything Linux-related, but with LLMs it’s somehow the user’s fault for trusting a fraudulent product that was sold to them without any manual at all.
fonix232@fedia.io 15 hours ago
Sounds like you're the kind of person who needs the "don't put your fucking pets in the microwave" warnings.