Comment on "Public trust in AI is sinking across the board"
TrickDacy@lemmy.world 10 months ago
Maybe I'm not stating my point explicitly enough, but it actually is that names and goalposts aren't very important. Cultural impact is. I think the current AI has already had a lot more impact than any chatbot from the 60s, and we can only expect that to increase. This tech has rendered the Turing test obsolete, which kind of speaks volumes.
nyan@lemmy.cafe 9 months ago
Calling a cat a dog won’t make her start jumping into ponds to fetch sticks for you. And calling a glorified autocomplete “intelligence” (artificial or otherwise) doesn’t make it smart.
Problem is, words have meanings. Well, they do to actual humans, anyway. And associating the word "intelligence" with these stochastic parrots will encourage nontechnical people to believe LLMs actually are intelligent. That's dangerous, potentially life-threatening. Downplaying the technology is an attempt to prevent this mindset from taking hold. It's about as effective as bailing the ocean with a teaspoon, yes, but some of us think even that is better than doing nothing.
TrickDacy@lemmy.world 9 months ago
nyan@lemmy.cafe 9 months ago
How about taking advice on a medical matter from an LLM? Or asking it what to do in a survival situation? Or even seemingly mundane questions like "is it safe to use this [brand name of new model of generator that isn't in the LLM's training data] indoors?" Wrong answers to those questions can kill. If a person thinks the LLM is intelligent, they're more likely to take the bad advice at face value.
If you ask a human about something important that’s outside their area of competence, they’ll probably refer you to someone they think is knowledgeable. An LLM will happily make something up instead, because it doesn’t understand the stakes.
The chance of any given query to an LLM killing someone is, admittedly, extremely low, but given a sufficiently large number of queries, it will happen sooner or later.
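To put that "sooner or later" intuition in numbers: if each query independently carried some tiny probability p of producing fatally bad advice, the chance of at least one bad outcome over n queries is 1 - (1 - p)^n, which climbs toward 1 as n grows. A minimal sketch in Python, where p and n are purely illustrative assumptions, not measured figures for any real system:

```python
# Back-of-the-envelope only: p and n are illustrative assumptions,
# not measured figures for any real system.
p = 1e-9           # assumed chance that one query yields fatally bad advice
n = 1_000_000_000  # assumed total number of queries (one billion)

# Probability that at least one of n independent queries goes wrong:
at_least_one = 1 - (1 - p) ** n
print(f"P(at least one harmful outcome) = {at_least_one:.2f}")  # ~0.63
```

Even with a one-in-a-billion per-query risk, a billion queries gives roughly a 63% chance of at least one harmful outcome.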
TrickDacy@lemmy.world 9 months ago
Krauerking@lemy.lol 9 months ago
Because one trained in a particular way could lead people to think it's intelligent while also giving incredibly biased output that confirms the biases of those listening.
It's creating a digital prophet that only rehashes the biases of its creator, which makes it dangerous if it's regarded as being above the flaws of us humans. People want something smarter than them to tell them what to do, and giving that designation, via the word "intelligent", to a flawed chatbot that simply predicts the most coherent next word is neither safe nor an honest representation of what it actually is.
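For anyone curious what "predicting the most coherent next word" looks like mechanically, here is a deliberately tiny sketch: a hand-written bigram table stands in for a real model's learned probabilities. Everything in it is made up for illustration; a real LLM predicts over tens of thousands of tokens with a neural network, but the loop has the same shape: pick a likely continuation, append it, repeat. There is no understanding of stakes anywhere in it.

```python
# Toy stand-in for a language model: a hand-written bigram table.
# The words and probabilities below are invented for illustration.
next_word = {
    "the": {"sky": 0.4, "cat": 0.6},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def continue_greedily(word: str, steps: int) -> list[str]:
    """Repeatedly pick the most probable next word; no understanding involved."""
    out = [word]
    for _ in range(steps):
        candidates = next_word.get(out[-1])
        if not candidates:
            break  # no known continuation; a real model never stops this way
        out.append(max(candidates, key=candidates.get))
    return out

print(" ".join(continue_greedily("the", 3)))  # -> "the cat sat down"
```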