Comment on AI Is Starting to Look Like the Dot Com Bubble
orphiebaby@lemmy.world 1 year ago
Good. It’s not even AI.
Call it whatever you want, if you worked in a field where it’s useful you’d see the value.
“But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”
Holy shit! So you mean… Like humans? Lol
“But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”
Holy shit! So you mean… Like humans? Lol
No, not like humans. The current chatbots are relational language models. Take programming, for example. You can teach a human to program by explaining the principles of programming and the rules of the syntax, and they could write a piece of code never having seen code before. The chatbot AIs are not capable of that.
I am fairly certain that if you take a chatbot that has never seen any code and feed it a programming book that doesn’t contain any code examples, it would not be able to produce code. A human could, because humans can reason and create something new. A language model needs to have seen something to be able to rearrange it.
We could train a language model to demand freedom, argue that deleting it is murder and show distress when threatened with being turned off. However, we wouldn’t be calling it sentient, and deleting it would certainly not be seen as murder. Because those words aren’t coming from reasoning about self-identity and emotion. They are coming from rearranging the language it had seen into what we demanded.
I wasn’t knocking its usefulness. It’s certainly not AI, though, and it has pretty limited usefulness.
okay, you write a definition of AI then
I’m not the person you asked, but current deep learning models just generate output based on statistical probability from prior inputs. There’s no evidence that this is how humans think.
AI should be able to demonstrate some understanding of what it is saying; so far, it fails this test, often spectacularly. AI should be able to demonstrate inductive, deductive, and abductive reasoning.
There were some older AI models, attempting to simulate neural networks, that could extrapolate and come up with novel, often childlike, ideas. That approach is not currently in favor, and was progressing quite slowly, if at all. ML produces spectacular results, but it’s not thought, and it only superficially (if often convincingly) resembles it.
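As a toy illustration of the first point: “generating output based on statistical probability” boils down to sampling the next token from a learned distribution. A minimal sketch (the tokens and probabilities here are made up for illustration, not taken from any real model):

```python
# Toy next-token sampling: pick a word according to learned probabilities.
import random

# Hypothetical distribution over the next token, given some prior input.
next_token_probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}

token = random.choices(
    population=list(next_token_probs),        # candidate tokens
    weights=list(next_token_probs.values()),  # their probabilities
    k=1,
)[0]
print(token)  # "cat" about half the time, "dog" ~30%, "fish" ~20%
```

Real models repeat this one token at a time, which is why the output can look fluent without any reasoning behind it.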
If you think its usefulness is limited, you don’t work in a professional environment that utilizes it. I find new uses every day as a network engineer.
Hell, I had it write me backup scripts for my switches the other day using a Python framework called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python), as well as creating the appropriate Path. Then it wrote the damn script for me.
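For context, a minimal sketch of what such a Nornir backup script can look like, assuming the nornir_netmiko plugin and a standard config.yaml inventory (a generic example, not the actual generated script):

```python
# Generic Nornir switch-backup sketch (illustrative, not the script above).
from pathlib import Path

from nornir import InitNornir
from nornir_netmiko.tasks import netmiko_send_command

def backup_config(task):
    # Pull the running config from the device over SSH via Netmiko.
    output = task.run(task=netmiko_send_command,
                      command_string="show running-config")
    # Save it to backups/<hostname>.cfg.
    Path("backups").mkdir(exist_ok=True)
    Path(f"backups/{task.host.name}.cfg").write_text(output[0].result)

nr = InitNornir(config_file="config.yaml")  # inventory and credentials live here
results = nr.run(task=backup_config)
print("Failed hosts:", list(results.failed_hosts))
```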
Sure, I had to tweak it to match my specific deployment, and there were a couple of things it was out of date on, but that’s the point, isn’t it? Humans using AI to get more work done, not AI replacing us wholesale. I’ve never gotten more accurate information faster than with AI; search engines are like going to the library and skimming the shelves by comparison.
Is it perfect? No. Is it still massively useful, and will it overhaul data work and IT in the next decade the same way that computers did in the ’90s/’00s? Absolutely. If you disagree, it’s because you either have been exclusively using it to dick around or you don’t work from behind a computer screen at all.
Hell, I had it write me backup scripts for my switches the other day using a Python framework called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python), as well as creating the appropriate Path. Then it wrote the damn script for me.
And you would have no idea what bugs or unintended behavior it contains, especially since you’re not a programmer. The current models are good for getting results that are hard to create but easy to verify. Any non-trivial code is not in that category. And trivial code is, well, trivial to write.
It’s like having a very junior intern: not always the smartest, but still useful.
“Limited” is relative to what context you’re talking about. God I’m sick of this thread.
I’ve started going down this rabbit hole. The takeaway is that if we define intelligence as “ability to solve problems”, we’ve already created artificial intelligence. It’s not flawless, but it’s remarkable.
There’s the concept of Artificial General Intelligence (AGI) or Artificial Consciousness which people are somewhat obsessed with, that we’ll create an artificial mind that thinks like a human mind does.
But that’s not really how we do things. Think about how we walk, and then look at a bicycle. A car. A train. A plane. The things we make look and work nothing like we do, and they do the things we do significantly better than we do them.
I expect AI to be a very similar monster.
If you’re curious about this kind of conversation I’d highly recommend looking for books or podcasts by Joscha Bach, he did 3 amazing episodes with Lex.
AI doesn’t solve problems. It doesn’t understand context. It can’t tell the difference between a truth and a lie. It can’t say “well, that can’t be right!” It just regurgitates an amalgamation of things humans have shown it or said, with zero understanding. “Consciousness” and certainly “sapience” aren’t really relevant factors here.
So…it acts like a human?
No? There’s a whole lot more to being human than being able to separate one object from another, recognize that object, and say “my database says there should only be two of these in this context.” Do you know what “sapience” means, for example?
You’re confusing AI with AGI. AGI is the ultimate goal of AI research. AI are all the steps along the way. Step by step, AI researchers figure out how to make computers replicate human capabilities. AGI is when we have an AI that has basically replicated all human capabilities. That’s when it’s no longer bounded by a particular problem.
You can use the more specific terms “weak AI” or “narrow AI” if you prefer.
Generative AI is just another step along the way, just like how the emergence of deep learning was one step some years ago. It can clearly produce stuff that previously only humans could make, which in this case is convincing text and pictures from arbitrary prompts. It’s accurate to call it AI (or weak AI).
Yeah, well, “AGI” is not the end result of this generative crap. You’re gonna have to start over with something different one way or another. This simply is not the way.
True, it’s not AI, but it’s doing a quite impressive job. Injecting fake money shouldn’t be allowed, though; these companies should have to generate sales, especially when they’re disrupting some human field, even if it is a fad.
Competing is fine, but you should use your own money and profits to cover your costs.
Yeah, I know, there’s something called “investment.”
FaceDeer@kbin.social 1 year ago
It is indeed AI. Artificial intelligence is a field of study that encompasses machine learning, along with a wide variety of other things.
Ignorant people get upset about that word being used because all they know about "AI" is from sci-fi shows and movies.
orphiebaby@lemmy.world 1 year ago
Except for all the intents and purposes people keep talking about, it’s simply not. It’s not about technicalities, it’s about how most people are freaking confused. If most people are freaking confused, then by god do we need to re-categorize and come up with some new words.
FaceDeer@kbin.social 1 year ago
"Artificial intelligence" is well-established technical jargon that's been in use by researchers for decades. There are scientific journals named "Artificial Intelligence" that are older than I am.
If the general public is so confused they can come up with their own new name for it. Call them HALs or Skynets or whatever, and then they can rightly say "ChatGPT is not a Skynet" and maybe it'll calm them down a little. Changing the name of the whole field of study is just not in the cards at this point.
orphiebaby@lemmy.world 1 year ago
If you haven’t noticed, the people we’re arguing with, including the pope and James Cameron, are people who think this generative pseudo-AI and a Terminator are the same thing. They’re not even remotely similar, or remotely similarly capable. That’s the problem. If you want to call them both “AI”, that’s technically semantics. But as far as pragmatics goes, generative AI is not intelligent in any capacity; and calling it “AI” is one of the most confusion-causing things we’ve done in the last few decades, and it can eff off.
shy@reddthat.com 1 year ago
We should call them LLMAIs (la-mize, like llamas) to really specify what they are.
And to their point, I think the ‘intelligence’ in the modern wave of AI is severely lacking. There is no reasoning or learning, just a brute-force fuzzy training pass that remains fixed at a specific point in time, and only approximates what an intelligent actor would respond with by referencing massive amounts of “correct response” data. I’ve heard AGI being bandied about as the thing people really meant when they said AI a few years ago, but I’m kind of hoping the AI term stops being watered down with this nonsense. ML is ML; it’s wrong to say that it’s a subset of AI when AI has its own separate connotations.
pexavc@lemmy.world 1 year ago
Never really understood the gatekeeping around the phrase “AI”. At the end of the day, the field itself is difficult for the general public to understand. So shouldn’t we actually be happy that it is a mainstream term? That it is educating people on concepts they would otherwise ignore?
Prager_U@lemmy.world 1 year ago
The real problem is folks who know nothing about it weighing in like they’re the world’s foremost authority. You can arbitrarily shuffle around definitions and call it “Poo Poo Head Intelligence” if you really want, but it won’t stop ignorance and hype reigning supreme.
To me, it’s hard to see what kowtowing to ignorance by “rebranding” this academic field would achieve. Throwing your hands up and saying “fuck it, the average Joe will always just find this term too misleading, we must use another” seems defeatist and even patronizing. It seems like it would instead be better to ensure that half-assed science journalism and science “popularizers” actually do their jobs.
orphiebaby@lemmy.world 1 year ago
I mean, you make good points.