Comment on ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw
masterspace@lemmy.ca 5 days ago
Try this on your friends, make up an idiom, then walk up to them, say it without context, and then say “meaning?” and see how they respond.
Pretty sure most of mine will just make up a bullshit response and go along with what I’m saying unless I give them more context.
TheBat@lemmy.world 4 days ago
My friends aren’t burning up the planet just to come up with that useless response though.
masterspace@lemmy.ca 4 days ago
Yes, they literally are. Or maybe you haven’t heard of human caused climate change?
TheBat@lemmy.world 4 days ago
You dumb
TimewornTraveler@lemm.ee 4 days ago
it highlights the fact that these LLMs refuse to say “I don’t know”, which essentially means we cannot rely on them for any factual reporting.
masterspace@lemmy.ca 4 days ago
But a) they don’t refuse, most will tell you if you prompt them well, and b) you cannot rely on them as the sole source of truth, but an information machine can still be useful if it’s right most of the time.
zarkanian@sh.itjust.works 4 days ago
So, you have friends who are as stupid as an AI. Got it. What’s your point?
sugar_in_your_tea@sh.itjust.works 4 days ago
Deebster@infosec.pub 5 days ago
My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”
The problem is, these LLMs don’t give any indication when they’re making stuff up versus when repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary* so they will trust these false answers. Harmless here, but often not.
* I’m not counting the little disclaimer, because we’ve been taught to ignore small print after being faced with so much of it
masterspace@lemmy.ca 5 days ago
Lots of people would just say something and then figure out if it’s right.
Quite frankly, you sound like middle school teachers being hysterical about Wikipedia being wrong sometimes.
Deebster@infosec.pub 5 days ago
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and “hallucinations” are a real problem when they lead to real decisions and real consequences.
If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.
futatorius@lemm.ee 1 day ago
The use of LLMs for policy making is probably an obfuscation technique to complicate later court challenges. If we still have courts by then.
masterspace@lemmy.ca 4 days ago
Lmfao. Yeah, ok, there bud. Let’s get my predictions from the depressing show dedicated to being relentlessly pessimistic in every situation.
And yeah, like I said, you sound like my hysterical middle school teacher claiming that Wikipedia will be society’s downfall.
Guess what? It wasn’t. People learned that tools are error prone and came up with strategies to use them while correcting for potential errors.
desktop_user@lemmy.blahaj.zone 4 days ago
And this is why humans are bad: a tool is neither good nor bad. Sure, a tool can use a large amount of resources to develop only to be completely obsolete in a year, but only humans (so far) have the ability (and stupidity) to be both in charge of millions of lives and trust a bunch of lithographed rocks to create tariff rates for uninhabited islands (and the rest of the world).