Comment on Public trust in AI is sinking across the board
Sterile_Technique@lemmy.world 8 months ago
I mean, the thing we call “AI” nowadays is basically just a spell-checker on steroids. There’s nothing really to trust or distrust about the tool specifically. It can be used in stupid or nefarious ways, but so can anything else.
SkyNTP@lemmy.ml 8 months ago
“Trust in AI” is layperson for “believe the promise that the technology is as capable as it’s claimed to be”. This has nothing to do with stupidity or nefariousness.
FaceDeer@fedia.io 8 months ago
It's "believe the technology is as capable as we imagined it was promised to be."
The experts never promised Star Trek AI.
FarceOfWill@infosec.pub 8 months ago
They did promise Skynet AI, though. They’ve misrepresented it a great deal.
TrickDacy@lemmy.world 8 months ago
basically just a spell-checker on steroids.
I cannot process this idea of downplaying this technology like this. It does not matter that it’s not true intelligence. And why would it?
If it convinces most people that information was learned and repeated, that’s smarter than, like, half of all currently living humans. And it is convincing.
nyan@lemmy.cafe 8 months ago
Some people found ELIZA convincing, but I don’t think anyone would claim it was true AI. Turing Test notwithstanding, I don’t think “convincing people who want to be convinced” should be the minimum test for artificial intelligence. It’s just a categorization glitch.
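For readers who haven’t met it: ELIZA worked by shallow keyword matching and pronoun reflection, nothing more. A minimal sketch of the idea (the rules below are invented for illustration, not taken from the original DOCTOR script) looks something like this:

```python
import re

# Minimal ELIZA-style responder: keyword patterns plus pronoun reflection.
# These rules are hypothetical stand-ins, not the original 1966 script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(line.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock deflection when no keyword matches

print(respond("I am worried about AI"))  # -> How long have you been worried about ai?
print(respond("My boss ignores me"))     # -> Tell me more about your boss ignores you.
```

The second reply is grammatically clumsy precisely because nothing is being understood; exchanges like these were still enough for some 1960s users to feel “heard”, which is the low bar for convincingness being pointed at here.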
TrickDacy@lemmy.world 8 months ago
Maybe I’m not stating my point explicitly enough, but it actually is that names or goalposts aren’t very important. Cultural impact is. I think the current AI has already had a lot more impact than any chatbot from the 60s, and we can only expect that to increase. This tech has rendered the Turing Test obsolete, which kind of speaks volumes.
nyan@lemmy.cafe 8 months ago
Calling a cat a dog won’t make her start jumping into ponds to fetch sticks for you. And calling a glorified autocomplete “intelligence” (artificial or otherwise) doesn’t make it smart.
Problem is, words have meanings. Well, they do to actual humans, anyway. And associating the word “intelligence” with these stochastic parrots will encourage nontechnical people to believe LLMs actually are intelligent. That’s dangerous—potentially life-threatening. Downplaying the technology is an attempt to prevent this mindset from taking hold. It’s about as effective as bailing the ocean with a teaspoon, yes, but some of us see even that as better than doing nothing.
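As a concrete illustration of the “glorified autocomplete” / “stochastic parrot” framing: at its core the model repeatedly picks a plausible next token given the text so far. A toy bigram version of that loop (real LLMs use neural networks over enormous contexts, but the sampling step has the same shape) might look like this:

```python
# Toy "stochastic parrot": learn next-word frequencies from a corpus, then
# generate text by repeatedly sampling a likely continuation. No meaning,
# no model of the world, just statistics over what tends to follow what.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        choices = following.get(word)
        if not choices:
            break
        words, counts = zip(*choices.items())
        word = random.choices(words, weights=counts)[0]  # sample the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

The output can look fluent, but nothing in the table knows what a cat or a rug is; that gap between sounding intelligent and being intelligent is exactly what the thread is arguing over.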
EldritchFeminity@lemmy.blahaj.zone 8 months ago
I would argue that there’s plenty to distrust about it, because its accuracy leaves much to be desired (to the point where it completely makes things up fairly regularly) and because it is inherently vulnerable to biases due to the data fed to it.
Early facial recognition tech had trouble distinguishing between the faces of black people, people below a certain age, and women, and nobody could figure out why. Until they stepped back and looked at the demographics of the employees at these companies: mostly middle-aged and older white men, whose own faces had been used as the data sets for in-house development and testing of the tech. We’ve already seen similar biases in image generators, which tend to depict “an attractive woman” as a thin white woman.
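The mechanism behind that kind of skew is easy to demonstrate with made-up numbers. In this sketch (scores, thresholds, and group labels are all hypothetical, not from any real system), an acceptance threshold tuned only on the group the developers had plenty of data for rejects far more genuine matches from an under-represented group:

```python
# Toy illustration of training-data bias in a face matcher. All scores and
# thresholds below are invented; the point is the mechanism, not the numbers.
import random
import statistics

random.seed(1)

# Hypothetical "genuine match" scores (same person, two photos).
group_a = [random.gauss(0.80, 0.05) for _ in range(1000)]  # well represented in dev data
group_b = [random.gauss(0.70, 0.08) for _ in range(1000)]  # barely represented

# Threshold tuned so ~99% of group A's genuine matches pass,
# because group A is who the developers happened to test on.
threshold = statistics.quantiles(group_a, n=100)[0]  # group A's 1st percentile

def false_reject_rate(scores):
    # Fraction of genuine matches the system wrongly rejects.
    return sum(score < threshold for score in scores) / len(scores)

print(f"false rejects, group A: {false_reject_rate(group_a):.1%}")  # about 1%
print(f"false rejects, group B: {false_reject_rate(group_b):.1%}")  # dramatically higher
```

Nothing in the code is malicious; the disparity falls straight out of who was and wasn’t in the tuning data.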
Plus, there’s the data degradation issue. Supposedly, ChatGPT isn’t fed any data from the internet at large past 2021, because the amount of AI-generated content after that point causes a self-perpetuating decline in quality.
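The feedback loop behind that degradation, sometimes called “model collapse” in the research literature, can be shown with a toy numerical version: fit a simple model to data, sample new “synthetic” data from it, refit on the samples, and repeat. Each generation tends to lose a little of the original variety. This is a statistical cartoon of the loop, not a claim about how any specific chatbot is trained:

```python
# Toy "model collapse" loop: each generation is fit only to samples drawn
# from the previous generation's model, so rare/extreme values slowly vanish.
import random
import statistics

random.seed(0)
mean, spread = 0.0, 1.0            # the "real" distribution we start from
samples_per_generation = 50

for generation in range(1, 101):
    synthetic = [random.gauss(mean, spread) for _ in range(samples_per_generation)]
    mean, spread = statistics.fmean(synthetic), statistics.pstdev(synthetic)
    if generation % 25 == 0:
        print(f"generation {generation:3d}: spread = {spread:.2f}")

# In most runs the spread drifts well below the original 1.0; run it longer
# and it heads toward zero, i.e. the model ends up parroting an ever narrower
# slice of what it started with.
```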
PoliticallyIncorrect@lemmy.world 8 months ago
ThE aI wIlL AttAcK ThE HimaNs!! sKynEt!!
SlopppyEngineer@lemmy.world 8 months ago
AI is just a very generic term and always has been. It’s like saying “transportation equipment”, which can be anything from roller skates to the Space Shuttle. Even the old checkers programs were described as AI in the fifties.
Of course a vague term is a marketeer’s dream to exploit.
At least with self-driving cars you have levels of autonomy.
Feathercrown@lemmy.world 8 months ago
Before ChatGPT was revealed, this was under the umbrella of what AI meant. Don’t change the terms just because you want them to mean something else.
FarceOfWill@infosec.pub 8 months ago
There’s a long glorious history of things being AI until computers can do them, and then the research area is renamed to something specific to describe the limits of it.
reflectedodds@lemmy.world 8 months ago
Took a look and the article title is misleading. It says nothing about trust in the technology and only talks about not trusting companies collecting our data. So really nothing new.
Personally I want to use the tech more, but I get nervous that it’s going to bullshit me/tell me the wrong thing and I’ll believe it.