Acid freaks are probably more reliable than ChatGPT
Comment on [deleted]
sxan@midwest.social 1 month ago
Seriously, do not use LLMs as a source of authority. They are stochastic machines predicting the next token in a sequence; if what they say is true, it’s pure chance.
Use them to draft outlines. Use them to summarize meeting notes (and review the summaries). But do not trust them to give you reliable information. You may as well go to a party, find the person who’s taken the most acid, and ask them for an answer.
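A minimal sketch of what “stochastic next-token prediction” means, in Python. The token table and probabilities below are invented for illustration and don’t come from any real model:

```python
import random

# Toy illustration of stochastic next-token prediction. A real LLM
# scores tens of thousands of tokens with a neural network; this
# hand-written probability table is made up for illustration only.
next_token_probs = {
    "the drive": [("is", 0.5), ("failed", 0.3), ("sings", 0.2)],
}

def sample_next(context: str) -> str:
    """Pick the next token at random, weighted by the model's scores."""
    tokens, weights = zip(*next_token_probs[context])
    return random.choices(tokens, weights=weights, k=1)[0]

# Each call may return a different continuation: truth never enters
# into it, only how probable each token looked during training.
print(sample_next("the drive"))
```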
bizzle@lemmy.world 1 month ago
sxan@midwest.social 1 month ago
You’ll certainly gain some valuable insight, even if it has nothing to do with your question. Which is more than I can say for LLMs.
non_burglar@lemmy.world 1 month ago
I don’t understand the willingness to forgive error … Would you keep going to a person for answers if you knew for a fact that 1 in 5 things they said was wrong?
Chewy7324@discuss.tchncs.de 1 month ago
If the person would answer almost instantly, 24/7, without being annoyed: yes. Checking important information is easier once you know what exactly to type.
SnotFlickerman@lemmy.blahaj.zone 1 month ago
I’d say those SMART attributes look pretty bad… Don’t need an LLM for that…
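For reference, checking SMART attributes yourself is straightforward; a sketch in Python wrapping smartmontools’ `smartctl -A`. The device path `/dev/sda` and the attribute names filtered for are assumptions about a typical setup, and it usually needs root:

```python
import subprocess

# Query the drive's SMART attribute table directly with smartmontools
# (no LLM required). /dev/sda is an assumption; substitute your device.
result = subprocess.run(
    ["smartctl", "-A", "/dev/sda"],
    capture_output=True, text=True, check=False,
)

# Show the attributes most associated with a dying drive.
WORRYING = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
            "Offline_Uncorrectable")
for line in result.stdout.splitlines():
    if any(attr in line for attr in WORRYING):
        print(line)
```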
slazer2au@lemmy.world 1 month ago
I call them regurgitation machines prone to hallucinations.
sxan@midwest.social 1 month ago
That is a perfect description.
roofuskit@lemmy.world 1 month ago
Just a reminder that LLMs can only truncate text; they are incapable of summarization.
dream_weasel@sh.itjust.works 1 month ago
First sentence of each paragraph: correct.
Basically all the rest is bunk, aside from the fact that you can’t count on always getting reliable information. Right answers (especially for anything technical but hard to verify), wrong reasons.
There are “stochastic language models,” I suppose (e.g., repeatedly tapping the middle suggestion on your phone’s keyboard after typing the first word to create a message), but something like ChatGPT, Perplexity, or DeepSeek is not that, beyond using tokenization and word2vec-like embeddings to produce human-readable text. These are a lot more like “don’t trust everything you read on Wikipedia” than a randomized acid-drop response.
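That phone-keyboard game is essentially a Markov chain over word bigrams; a minimal sketch, using a made-up corpus:

```python
import random
from collections import defaultdict

# A phone keyboard's suggestion bar, reduced to its essence: a Markov
# chain over word bigrams. The corpus is made up for illustration.
corpus = "the disk is fine the disk is failing the backup is fine".split()

# Record which words were observed following each word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Repeatedly pick a random observed successor, like tapping the
    keyboard's middle suggestion over and over."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the disk is failing the backup is"
```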
flightyhobler@lemmy.world 1 month ago
yeah, that’s why I’m here, dude.
BaroqueInMind@lemmy.one 1 month ago
So then, if you knew this, why did you bother to ask it first?
flightyhobler@lemmy.world 1 month ago
I doubted ChatGPT’s output and came here looking for help. What are you on about?
WarlockoftheWoods@lemy.lol 1 month ago
Dude, people here are such fucking cunts. You didn’t do anything wrong; ignore these two troglodytes who think they’re semi-intelligent. I’ve worked in IT nearly my whole life. I’d return it if you can.
Empricorn@feddit.nl 1 month ago
Defensive… If someone asks you for advice, and says they have doubts about the answer they received from a Magic 8-Ball, how would you feel?
traches@sh.itjust.works 1 month ago
Because it’s like a search box you can explain a problem to and get a bunch of words related to it, without having to wade through blogspam, 10-year-old Reddit posts, and snippy Stack Overflow replies. You don’t have to post on Discord and wait a day or two hoping someone will maybe come and help. Sure, it’s frequently wrong, but it’s often a good first step.
And no, I’m not an AI bro at all; I frequently have coworkers dump AI slop in my inbox and ask me to take it seriously, and I fucking hate it.
Mondez@lemdro.id 1 month ago
But once you have its output, unless you already know enough to judge whether it’s correct, you have to fall back on doing all those things you used the AI to avoid in order to verify what it told you.
non_burglar@lemmy.world 1 month ago
It is not a search box. It generates text that we know is quite often confidently wrong.
“Asking” GPT is like asking a Magic 8-Ball: it’s fun, but it has zero meaning.