Comment on Sam Altman Says If Jobs Gets Wiped Out, Maybe They Weren’t Even “Real Work” to Start With
sugar_in_your_tea@sh.itjust.works 1 week ago
The solution to the public misusing technical terms isn’t to change the technical terms, but to educate the public. All of the following fall under AI:
- pathing algorithms of computer opponents, but probably not the decisions that computer opponents make (i.e. who to attack; that’s usually based on manually specified logic)
- the speech-to-text your phone used before Gemini (or whatever it’s called now) on Android (Gemini is also AI, just a different type of AI)
- home camera systems that can detect people vs animals, and sometimes classify those animals by species
- DDoS protection systems and load balancers for websites, which probably use some type of AI
AI is a broad field, and you probably interact with non-LLM variants every day, whether you notice or not. Here’s a Wikipedia article that goes through a lot of it. LLMs/GPT are merely one small subfield in the larger field of AI.
MangoCats@feddit.it 1 week ago
The problem with AI in a “popular context” is that it has been a forever-moving target. Old mechanical adding machines were better at correctly summing columns of numbers than humans, and at the time they were considered a limited sort of artificial intelligence. It continues all along the spectrum. Five years ago, image classifiers that could watch video feeds 24/7 and accurately identify things happening in the feed with better-than-human accuracy (accounting for human lapses in attention, coffee breaks, distracting phone calls, etc.) were amazing feats of AI. Now they’re “just image classifiers,” much as AlphaZero “just plays games.”
sugar_in_your_tea@sh.itjust.works 1 week ago
The first was never “AI” in a CS context, and the second has always and will always be “AI” in a CS context. The definition has been pretty consistent since at least Alan Turing, if not earlier.
I don’t know how to square that circle. To me it’s pretty simple: a solution or approach is AI if it simulates (or creates) intelligence, and an intelligent system is one that uses data from (i.e. learns from) its environment to achieve its goals. Anything from an A* pathing algorithm to actual general AI is “AI,” yet people assume the most sophisticated end of the spectrum.
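To make the “classical AI” end of that spectrum concrete, here’s a minimal sketch of A* pathfinding on a hypothetical 2D grid (4-directional movement, Manhattan-distance heuristic); the grid layout and function name are just illustrative:

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 2D grid of 0 (free) / 1 (wall) cells.

    Uses Manhattan distance as the heuristic, which is admissible
    for 4-directional movement, so the returned path is optimal.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Reconstruct the path by walking predecessors back to the start.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    came_from[nbr] = cell
                    heapq.heappush(open_heap, (ng + h(nbr), ng, nbr))
    return None  # goal unreachable
```

No learning, no neural nets — just a priority queue and a heuristic — yet this is textbook AI, exactly the kind of thing that powers the computer-opponent pathing mentioned above.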
MangoCats@feddit.it 1 week ago
Mostly because CS didn’t start talking about AI until after popular perception had pushed calculators into the “dumb automatons” category.
Image classifiers came after CS drew the “magic” line for what qualifies as AI, so CS has piles of academic literature talking about artificially intelligent image classification, but public perception moves on.
I think Turing already had adding machines before he developed his “test.”
The current round of LLMs seems more than capable of passing the Turing test when configured to try. In the 1980s, the ELIZA chat program could pass the Turing test for three or four exchanges with most people. These past weeks, I have had extended technical conversations with LLMs, and they exhibit sustained “average” knowledge of our topics of discussion. Not the brightest bulb on the tree, but they’re widely read and can pretty much keep up with the average bear on the internet in terms of repeating what others have written.
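For contrast with an LLM, an ELIZA-style responder is just keyword matching with canned templates. This is only a toy sketch of the idea (the real 1966 ELIZA used a much richer decomposition/reassembly script); the rules here are invented for illustration:

```python
import re

# Each rule pairs a regex with a response template; captured text is
# reflected back into the reply, which is what made ELIZA feel "attentive".
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(utterance):
    """Return the first matching rule's reply, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

A handful of such rules can carry three or four exchanges before the illusion breaks — which is roughly where ELIZA-era “passing” the test ended.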
Meanwhile, there’s a virulent public perception backlash calling LLMs “dumb automatons.” Personally, I don’t care what the classification is. “AI” has been “5 years away from realization” my whole life, and I’ve worked with “near AI” tech all that time. The current round of tools have made an impressive leap in usefulness. Bob Cratchit would have said the same about an adding machine if Scrooge had given him one.