Comment on ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw

Deebster@infosec.pub 1 month ago
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and “hallucinations” are a real problem when they lead to real decisions and real consequences.
If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.

futatorius@lemm.ee 5 weeks ago
The use of LLMs for policy making is probably an obfuscation technique to complicate later court challenges. If we still have courts by then.
desktop_user@lemmy.blahaj.zone 1 month ago
And this is why humans are bad: a tool is neither good nor bad. Sure, a tool can consume a huge amount of resources to develop only to be completely obsolete within a year, but only humans (so far) have the ability (and stupidity) to be in charge of millions of lives while trusting a bunch of lithographed rocks to set tariff rates for uninhabited islands (and the rest of the world).
masterspace@lemmy.ca 1 month ago
Lmfao. Yeah, ok, there bud. Let’s get my predictions from the depressing show dedicated to being relentlessly pessimistic in every situation.
And yeah, like I said, you sound like my hysterical middle school teacher claiming that Wikipedia will be society’s downfall.
Guess what? It wasn’t. People learned that tools are error-prone and came up with strategies to use them while correcting for potential errors.