Acid freaks are probably more reliable than ChatGPT
Comment on [deleted]
sxan@midwest.social 3 weeks ago
Seriously, do not use LLMs as a source of authority. They are stochastic machines predicting the next character they type; if what they say is true, it’s pure chance.
Use them to draft outlines. Use them to summarize meeting notes (and review the summaries). But do not trust them to give you reliable information. You may as well go to a party, find the person who’s taken the most acid, and ask them for an answer.
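(The “predicting the next character” point above can be illustrated with a toy model. This is a minimal sketch of stochastic next-token sampling using a bigram table; real LLMs use neural networks over learned token embeddings, but the sampling principle, picking the next word by chance from a learned distribution, is the same. The corpus and function names here are invented for illustration.)

```python
import random

# Tiny corpus; real models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word -> possible-next-word transitions (a bigram model).
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Sample a continuation one word at a time, by chance."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        choices = transitions.get(word)
        if not choices:
            break
        # The next word is drawn at random from observed continuations;
        # nothing checks whether the resulting sentence is *true*.
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

Every word it emits is plausible given the previous one, yet the output carries no notion of truth — which is the commenter’s point scaled down.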
bizzle@lemmy.world 3 weeks ago
sxan@midwest.social 3 weeks ago
You’ll certainly gain some valuable insight, even if it has nothing to do with your question. Which is more than I can say for LLMs.
non_burglar@lemmy.world 3 weeks ago
I don’t understand the willingness to forgive error … Would you go to a person if you knew for a fact that 1 of 5 things they say is wrong?
Chewy7324@discuss.tchncs.de 3 weeks ago
If the person would answer almost instantly, 24/7, without being annoyed: yes. Checking important information is easier once you know what exactly to type.
SnotFlickerman@lemmy.blahaj.zone 3 weeks ago
I’d say those SMART attributes look pretty bad… Don’t need an LLM for that…
slazer2au@lemmy.world 3 weeks ago
I call them regurgitation machines prone to hallucinations.
sxan@midwest.social 3 weeks ago
That is a perfect description.
roofuskit@lemmy.world 3 weeks ago
Just a reminder that LLMs can only truncate text; they are incapable of summarization.
dream_weasel@sh.itjust.works 3 weeks ago
First sentence of each paragraph: correct.
Basically all the rest is bunk besides the fact that you can’t count on always getting reliable information. Right answers (especially for something that is technical but non-verifiable), wrong reasons.
There are “stochastic language models,” I suppose (e.g., tap the middle suggestion on your phone after typing the first word to create a message), but something like ChatGPT or Perplexity or DeepSeek is not that, beyond using tokenization / word2vec-like setups to make human-readable text. These are a lot more like “don’t trust everything you read on Wikipedia” than a randomized acid-trip response.
flightyhobler@lemmy.world 3 weeks ago
yeah, that’s why I’m here, dude.
BaroqueInMind@lemmy.one 3 weeks ago
So then, if you knew this, why did you bother to ask it first?
flightyhobler@lemmy.world 3 weeks ago
I doubted ChatGPT’s input and I came here looking for help. What are you on about?
WarlockoftheWoods@lemy.lol 3 weeks ago
Dude, people here are such fucking cunts, you didn’t do anything wrong, ignore these 2 troglodytes who think they are semi-intelligent. I’ve worked in IT nearly my whole life. I’d return it if you can.
Empricorn@feddit.nl 3 weeks ago
Defensive… If someone asks you for advice, and says they have doubts about the answer they received from a Magic 8-Ball, how would you feel?
traches@sh.itjust.works 3 weeks ago
Because it’s like a search box you can explain a problem to and get a bunch of words related to it without having to wade through blogspam, 10-year-old Reddit posts, and snippy Stack Overflow replies. You don’t have to post on Discord and wait a day or two hoping someone will maybe come and help. Sure, it’s frequently wrong, but it’s often a good first step.
And no I’m not an AI bro at all, I frequently have coworkers dump AI slop in my inbox and ask me to take it seriously and I fucking hate it.
Mondez@lemdro.id 3 weeks ago
But once you have its output, unless you already know enough to judge whether it’s correct, you have to fall back to doing all those things you used the AI to avoid in order to verify what it told you.
non_burglar@lemmy.world 3 weeks ago
It is not a search box. It quite often generates text we know is confidently wrong.
“Asking” gpt is like asking a magic 8 ball; it’s fun, but it has zero meaning.