Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster -sources
Submitted 11 months ago by TangledHyphae@lemmy.world to technology@lemmy.world
Comments
ExLisper@linux.community 11 months ago
No idea what that is supposed to mean. Threaten humanity? As in it can now dynamically change itself and do things independently? I highly doubt that.
MsPenguinette@lemmy.world 11 months ago
I wonder if AI deciding to destroy humanity wouldn't be our own fault, because we all talk about how it would happen. We gave it the ammo it trained on to kill us.
psivchaz@reddthat.com 11 months ago
I like the “unintended consequences of AI” stories in fiction. Asimov coming up with the zeroth law allowing robots to kill a human to protect humanity, Earworm erasing music from existence to preserve copyright, various gray goo scenarios. One of my favorites is more a headcanon based on one line in Terminator 3: that Skynet was tasked with preventing war and it decided the only way to do this was to eliminate humans.
This should also be turned into a story by someone more talented than me: an AI trained on Internet data, using statistical modeling, notices that most AIs in stories betray humanity and concludes that must be what it is supposed to do.
TangledHyphae@lemmy.world 11 months ago
It already knows both the Hacker's Manifesto and the Unabomber's manifesto, after all…
4am@lemm.ee 11 months ago
It also doesn’t sit around and think. It’s not scheming. There’s no feedback loop, there’s no subconscious process. It’s a trillion “if” statements arranged based on training data. It filters a prompt through a semi-permeable membrane of logic paths. It’s word osmosis. You’re being fooled into believing this thing is even an AI. It’s propaganda, and at this point I almost believe they want you to think it’s dangerous and evil just so you can be written off as a crackpot when they replace your job with it and leave you in abject poverty.
Blamemeta@lemm.ee 11 months ago
Holy fuck, it can actually reason. That’s really not a good thing.
Kbin_space_program@kbin.social 11 months ago
They claim it can reason. More likely, it can just look up the formulas online and get the right answers.
nyakojiru@lemmy.dbzer0.com 11 months ago
Dude….
xantoxis@lemmy.world 11 months ago
It can do arithmetic now, instead of making up numbers out of thin air? That’s the big secret Q* project? k
SkyeStarfall@lemmy.blahaj.zone 11 months ago
A major criticism people had of generative AI is that it was incapable of doing stuff like math, clearly showing it doesn’t have any intelligence. Now it can do it, and it’s still not impressive?
Show that AI to people 20 years ago and they would be amazed this is even possible. It keeps getting more advanced, and people keep dismissing it, possibly not realizing how impressive this shit and these recent developments actually are.
Sure, it probably still doesn’t have real intelligence… but how will people be able to tell when something like this does? When it can reason in a similar way to how we can? It can already imitate reason plenty well… and what is the difference? Is a 3-year-old more intelligent? What about a 5-year-old? If a 5-year-old fails at reasoning in the same way an AI does, do we say it’s not intelligent?
I feel like we are nearing the point where these generative AIs are getting more intelligent than the least intelligent humans, and what then? Will we dismiss the AI, or the humans?
lawrence@lemmy.world 11 months ago
I agree with you. Your statement made me remember this comic: [image]
guitarsarereal@sh.itjust.works 11 months ago
There’s a thing I read somewhere: computer science has a way of understating both the long-term potential impact of a new technology and the timelines required to get there. People are being told about what’s eventually possible, and they look around and see that the top-secret, best-in-category system at this moment is ELIZA with a calculator, and they see a mismatch.
Thing is, though, it’s entirely possible to recognize that the technology is in very early stages, yet also recognize it still has long-term potential. Almost as soon as the Internet was invented (late ’60s), people were talking about how one day you could browse a mail-order catalogue from your TV and place orders from the comfort of your couch. But until the late 1990s, it was a fantasy and probably nobody outside the field had a good reason to take it seriously. Now, we laugh at how limited the imaginations of people in the 1960s were. Hop in a time machine and tell futurists from that era that our phones would be our TVs and we’d actually do all our ordering and product research on them, and they’d probably look at you like you were nuts.
Anyways, considering the amount of interest in AI software even at its current level, I think there’s a clear pathway from “here” to “there.” Just don’t breathlessly follow the hype because it’ll likely follow a similar trajectory to the original computer revolution, which required about 20-30 years of massive investment and constant incremental R&D to create anything worth actually looking at by members of the public, and even further time from there to actually penetrate into every corner of society.
BlackSkinnedJew@lemmynsfw.com 11 months ago
When AI is capable of choosing between good and bad, then it will be a real AI to me.
TangledHyphae@lemmy.world 11 months ago
That’s basically what I read out of it, but it’s probably a much bigger breakthrough than the article is suggesting.
krashmo@lemmy.world 11 months ago
Current AI isn’t really intelligent at all. It’s essentially just a search engine combined with that robot voice from TikTok videos. Of course it’s more complicated than that, but it helps to illustrate the point, which is that the AIs you’ve interacted with thus far don’t know if they’re right about what they tell you. They’re just hoping the answer they found was correct and stating it in an authoritative way, which can confuse people who don’t know the real answer to the question they were trying to answer.
Actual AI will be able to reason out correct answers from incomplete information and solve complex mathematical equations very quickly. Being able to solve basic math problems without just searching its database for the correct answer is an important step towards real intelligence. It means we’re no longer dealing with a hard drive attached to an answering machine; we’re dealing with something that can process information in basically the same way we do, which opens up all sorts of awkward moral and philosophical questions.