Comment on Public trust in AI is sinking across the board
TrickDacy@lemmy.world 10 months ago
"basically just a spell-checker on steroids."
I cannot process this idea of downplaying the technology like this. It does not matter that it's not true intelligence. Why would it?
If it convinces most people that it has learned information and can repeat it, that makes it seem smarter than like half of all currently living humans. And it is convincing.
nyan@lemmy.cafe 10 months ago
Some people found ELIZA convincing, but I don’t think anyone would claim it was true AI. Turing Test notwithstanding, I don’t think “convincing people who want to be convinced” should be the minimum test for artificial intelligence. It’s just a categorization glitch.
TrickDacy@lemmy.world 10 months ago
Maybe I'm not stating my point explicitly enough, but it's actually that names and goalposts aren't very important. Cultural impact is. The current AI has already had far more impact than any chatbot from the 60s, and we can only expect that to increase. This tech has rendered the Turing test obsolete, which kind of speaks volumes.
nyan@lemmy.cafe 10 months ago
Calling a cat a dog won’t make her start jumping into ponds to fetch sticks for you. And calling a glorified autocomplete “intelligence” (artificial or otherwise) doesn’t make it smart.
Problem is, words have meanings. Well, they do to actual humans, anyway. And associating the word "intelligence" with these stochastic parrots will encourage nontechnical people to believe LLMs actually are intelligent. That's dangerous—potentially life-threatening. Downplaying the technology is an attempt to prevent this mindset from taking hold. It's about as effective as bailing the ocean with a teaspoon, yes, but some of us see even that as better than doing nothing.
TrickDacy@lemmy.world 10 months ago