James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”
I dunno, James. Pretty sure Isaac Asimov had more clear warnings years prior to Terminator.
Submitted 1 year ago by L4s@lemmy.world [bot] to technology@lemmy.world
Maybe Harlan Ellison too.
And we were warned about Perceptron in the 1950s. Fact of the matter is, this shit is still just a parlor trick and doesn’t count as “intelligence” in any classical sense whatsoever. Guessing the next word in a sentence because hundreds of millions of examples tell it to isn’t really that amazing. Call me when any of these systems actually comprehend the prompts they’re given.
EXACTLY THIS. it’s a really good parrot and anybody who thinks they can fire all their human staff and replace with ChatGPT is in for a world of hurt.
Not if most of their staff were pretty shitty parrots and the job is essentially just parroting…
Guessing the next word in a sentence because hundreds of millions of examples tell it to isn’t really that amazing.
The best and most concise explanation (and critique) of LLMs in the known universe.
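For the curious, that “guess the next word from examples” idea can be sketched in a few lines of toy Python. This is just a crude word-frequency counter over a made-up corpus (the names and text here are invented for illustration); real LLMs train a neural network over subword tokens, but the training signal is the same next-token guess.

```python
# Toy "next word" guesser: count which word follows which in some example
# text, then always pick the most frequently seen follower.
from collections import Counter, defaultdict

examples = "the cat sat on the mat . the cat ate the fish . the cat ran .".split()

followers = defaultdict(Counter)
for current, nxt in zip(examples, examples[1:]):
    followers[current][nxt] += 1

def guess_next(word):
    # Return the continuation seen most often after `word` in the examples.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # -> 'cat' (follows 'the' three times in the examples)
print(guess_next("sat"))  # -> 'on'
```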
What a pompous statement. Stories of AI causing trouble like this predate him by decades. He’s never told an original story, they’re all heavily based on old sci-fi stories.
First he warned us about AI and nobody listened, then he warned the submarine guy and he didn’t listen. We have to listen to him about the giant blue hippy aliens or we’ll all pay.
No one has told an “original” story.
It’s a self-indulgent and totally asinine remark, but at least he’s saying something.
One minute the Internet simps for this guy; the next, he’s a hack.
if one person thinks one thing and another person thinks a different thing, that doesn’t make them both hypocrites even if they are both on the internet
Plenty of people have told “original” stories.
My remark was self indulgent and totally as[s]inine, but I’m just saying something too, where’s my pass?
The internet doesn’t act as a single cohesive entity.
In fact, what is happening now sounds a lot more like Colossus: The Forbin Project (came out in 1970) than The Terminator.
So is the new trend going to be people who mentioned AI in the past acting like they were Nostradamus, when warnings of evil AIs gone rogue have been a trope for a long, long time?
I’m sick of hearing from James Cameron. This dude needs to go away. He doesn’t know a damn thing about LLMs. It’s ridiculous how many articles have been written about random celebs’ opinions on AI when none of them know shit about it.
He should stick to making shitty Avatar movies and oversharing submarine implosion details with the news media
IIRC the original idea for the Terminator was for it to have the appearance of a regular guy on the street, the horror arising from the fact that anyone around you could actually be an emotionless killer.
They ended up getting a 6-foot Austrian behemoth who could barely speak English.
An evil 1984 Arnold Schwarzenegger with guns would be terrifying AF even if it wasn’t an AI robot from the future.
Lance Henriksen, who ended up playing a cop in The Terminator, was originally cast as the Terminator. But then Arnold was brought in, and the rest is history.
Maybe as consolation, Cameron went on to cast Lance as the rather helpful android in Aliens.
This is really turning out like the ‘satanic panic’ of the 80’s all over again.
The difference being that there was never much proof for the Satanic panic and that now we have actual robot cop dogs patrolling streets.
I warned everyone about James Cameron in 1983 and no one listened
Here's the thing. The Terminator movies were a warning against government/army AI. Actually, slightly before that, I guess WarGames was too. But honestly, I'm not worried about military AI taking over.
I think if the military set up an AI, they would have multiple ways to kill it off in seconds. I mean, they would be in a more dangerous position if they had an AI "gone wild", but not because of the movies; because of how they work, they would have a lot of systems in place to mitigate disaster. Is it possible for it to go wrong? Yes. Likely? No.
I'm far more worried about the geeky kid who now has access to open source AI that can be retasked. Someone who doesn't fully understand the consequences of their actions, or at least can't properly quantify the risks they're taking, but is smart enough to make use of these tools to their own ends.
Some of you might still be teenagers, but those who aren't, remember back. Wouldn't you think it'd be cool to create an AutoGPT or some form of adversarial AI with open-ended success criteria that are either implicitly dangerous and/or illegal, or broad enough that the AI sees the easiest path to success as doing dangerous and/or illegal things to reach its goal? You know, for fun. Just to see if it would work.
I'm not convinced the AI is quite there yet to be dangerous, or maybe it is; I've honestly not kept close tabs on this. But when it does reach that level of maturity, a lot of the tools will still be open source; they can be modified, any protections removed "for the lols" or "just to see what it can do", and someone without the level of control a government/military entity has could easily lose control of their own AI. That's what scares me, not a Joshua or Skynet.
The biggest risk of AI at the moment is the same posed by the Industrial Revolution: Many professions will become obsolete, and it might be used as leverage to impose worse living conditions over those who still have jobs.
The army loves a chain of command. I don’t see this changing with AI. The army just putting AI in the commander’s seat and letting it roll just doesn’t sound credible to me.
James Cameron WOULD make this about James Cameron.
Well duh, it’s James Cameron
ITT: People describing the core component of human consciousness, pattern recognition, as not a big deal because it’s code and not a brain. That’s fine, stick your heads in the sand. Rather than trying to shape the inevitability in a positive fashion for humanity, just leave it to the corpos to take care of it for you 👍
So all you do is create phrases based on things you’ve read in the past, recognize similar interactions between other people, and recreate them? 🤔
I feel like I do sometimes. Like, to the point where I legit thought I might be a sociopath.
Turns out it’s just ADHD and autism.
No, we also transfer genetic material to similar-looking (but not too similar-looking) people and then teach those new people the pattern matching.
My point: Reductionism just isn’t useful when discussing intelligence.
As opposed to what, exactly?
My thoughts exactly
The technology is definitely impressive, but some people are jumping the gun by assuming more human-like characteristics in AI than it actually has. It’s not actually able to understand the concepts behind the patterns that it matches.
AI personhood is only selectively used as an argument to justify how their creators feed copyrighted work into it, but even they treat it as a tool, not like something that could potentially achieve consciousness.
I’m not afraid of AI, I’m afraid of the greedy capitalist mfs who own the AI.
The real question is: how much time do we have before a Roomba goes back in time to kill the mother of someone who was littering too much?
scene: a scrap yard, full of torn-up cars
in a flash, a square-looking and muscular man appears
he walks into a bar, and when confronted by an angry biker he punches him in the face and steals his clothes & aviator sunglasses
scene: the square-looking man walks into an office and confronts a scared-looking secretary
Square man: I am heyah to write fuhst drafts of movie scripts and make concept aht!
Did anyone watch “Unknown: Killer Robots” on Netflix? The AI jet pilot and the drone swarms scared the hell out of me. theguardian.com/…/unknown-killer-robots-review-th…
orphiebaby@lemmy.world 1 year ago
It’s getting old telling people this, but… the AI that we have? Isn’t even really AI. It’s certainly not anything like in the movies. It’s just pattern-recognition algorithms. It doesn’t know or understand anything and it has no context. It can’t tell the difference between truth or a lie, and it doesn’t know what a finger is— it just paints amalgamations of things it’s already seen.
I’m not saying there’s nothing to be afraid of concerning today’s “AI”, but it’s not comparable to movie/book AI.
adeoxymus@lemmy.world 1 year ago
That type of reductionism isn’t really helpful. You can describe the human brain as also being just pattern-recognition algorithms, but doing that many times, at different levels, apparently gets you functional brains.
wizardbeard@lemmy.dbzer0.com 1 year ago
But his statement isn’t reductionism.
raltoid@lemmy.world 1 year ago
Not at all.
They just don’t like being told they’re wrong and will attack you instead of learning something.
Immersive_Matthew@sh.itjust.works 1 year ago
I really think the only thing to be concerned about is human bad actors with AI, not AI itself. AI alignment will be significantly easier than human alignment; we are certainly not aligned among ourselves, and it isn’t even in our nature to be.
PopShark@lemmy.world 1 year ago
I’ve had this same thought for decades now, ever since I first heard of AI-takeover sci-fi stuff as a kid. Bots just perform set functions. People in control of bots can create mayhem.
jeffw@lemmy.world 1 year ago
Strong AI vs weak AI.
We’re a far cry from real AI
Homo_Stupidus@lemmy.world 1 year ago
Isn’t that also referred to as Virtual Intelligence vs Artificial Intelligence? What we have now is just very well-trained VI. It’s not AI because it only outputs variations of what it’s been trained on using algorithms, right? Actual AI would be capable of generating information entirely distinct from any inputs.
pelespirit@sh.itjust.works 1 year ago
I just listened to 2 different takes on AI by true experts and it’s way more than what you’re saying. If the AI doesn’t have good goals programmed in, we’re fucked. It’s also being controlled by huge corporations that decide what those goals are. Judging from the past, this is not good.
orphiebaby@lemmy.world 1 year ago
You seem to have completely missed the point of my post.
MrMonkey@lemm.ee 1 year ago
When they built a new building at my college they decided to use “AI” (back when SunOS ruled the world) to determine the most efficient route for the elevator to take.
The parameter they gave it to measure was “how long does each person wait to get to their floor”. So it optimized for that and found it could get the number down to 0 by never letting anyone get on: they never got to their floor, so their wait time was never recorded (which counted as 0).
They tweaked the parameters to ensure everyone got to their floor and as far as I can tell it worked well. I never had to wait much for an elevator.
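A hypothetical reconstruction of that metric bug in a few lines of Python (the numbers and the function name are invented; the point is just that “wait never recorded” scoring as 0 makes stranding everyone look optimal):

```python
# Riders who never reach their floor have no recorded wait, and "no wait
# recorded" is scored as 0, so an optimizer minimizing the average wait
# prefers to never pick anyone up.
def average_wait(waits):
    # waits[i] is seconds waited, or None if the rider never got to their floor
    return sum(w if w is not None else 0 for w in waits) / len(waits)

served = [30, 45, 20]          # elevator actually delivers people
stranded = [None, None, None]  # elevator never stops for anyone

print(average_wait(served))    # 31.67 -- looks "worse" to the optimizer
print(average_wait(stranded))  # 0.0   -- looks "perfect"
```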
Spaniard@lemmy.world 1 year ago
An AI can’t be controlled by corporations, an AI will control corporations.
PotjiePig@lemmy.world 1 year ago
Mate, a bad actor could put today’s LLMs, facial-recognition software and the supporting functionality into an armed drone, show it a picture of Sarah Connor, tell it to go hunting, and it would be able to handle the rest. We are just about there. Call it what you want.
orphiebaby@lemmy.world 1 year ago
That sure sounds nice in your head.
ours@lemmy.film 1 year ago
LLM stands for Large Language Model. I don’t see how a model for processing text is going to match faces out in the field. And unless that drone is flying at chest height, it had better recognize people’s hair patterns (balding Sarah Connors beware, or wear hats!).
Synchrome@lemmy.world 1 year ago
orphiebaby@lemmy.world 1 year ago
Not much, because it turns out there’s more to AI than a hypothetical sum of what we already created.
stooovie@lemmy.world 1 year ago
True but that doesn’t keep it from screwing a lot of things up.
orphiebaby@lemmy.world 1 year ago
Einstein@lemmy.world 1 year ago
Sounds like you described a baby.
orphiebaby@lemmy.world 1 year ago
Yeah, I think there’s a little bit more to consciousness and learning than that. Today’s AI doesn’t even recognize objects, it just paints patterns.
terminhell@lemmy.world 1 year ago
GAI - General Artificial Intelligence is what most people jump to. And, for those wondering, that’s the beginning of the end-game type. That’s the kind that will understand context, the kind with the ability to ‘think’ on its own with little to no input from humans. What we have now is basically autocorrect on super steroids.
ButtholeAnnihilator@lemmy.world 1 year ago
Regardless of whether it’s true AI or not (I understand it’s just machine learning), Cameron’s sentiment is still mostly true. The Terminator in the original film wasn’t some digital being with true intelligence, it was just a machine designed with a single goal. There was no reasoning or planning really, just an algorithm that said “get weapons, kill Sarah Connor”. It wasn’t far off from a Boston Dynamics robot using machine learning to complete a task.
orphiebaby@lemmy.world 1 year ago
You don’t understand. Our current AI? Doesn’t know the difference between an object and a painting. Furthermore, everything it perceives is “normal and true”. You give it bad data and suddenly it’s broken. And “giving it bad data” is way easier than it sounds. A “functioning” AI (like a Terminator) requires the ability to “understand” and scrutinize, not just copy what others tell it and combine results.