I mean, is this stuff even really AI? It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out. I’m not sure this is the tech that will decide humanity is unnecessary.
"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded
Submitted 5 months ago by misk@sopuli.xyz to technology@lemmy.world
Comments
Veedem@lemmy.world 5 months ago
kromem@lemmy.world 5 months ago
It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.
Neither of these things is true.
It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).
And while it is trained on predicting the next token, that doesn’t mean it keeps doing so purely from “most probable” surface statistics, as your sentence suggests.
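To make the “most probable” point concrete, here’s a toy sketch (my own illustration, not from any model’s actual decoder): deployed LLMs typically *sample* from a temperature-scaled softmax over the logits rather than always emitting the single most probable token. The vocabulary and logit values below are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical next-token logits over a tiny 4-word vocabulary.
vocab = ["cat", "dog", "fish", "bird"]
logits = np.array([2.0, 1.8, 0.5, 0.1])

def sample_next(logits, temperature=1.0):
    """Sample a token index from softmax(logits / temperature)."""
    z = logits / temperature
    probs = np.exp(z - z.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

greedy = vocab[int(np.argmax(logits))]  # argmax decoding: always "cat"
sampled = [vocab[sample_next(logits, temperature=0.8)] for _ in range(1000)]
```

With sampling, “dog” (and occasionally “fish”) shows up a substantial fraction of the time, so the output isn’t simply “the most probable next word” on every step.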
Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”
And that was a toy model.
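The world-model papers mentioned above generally use linear probes: train a simple regression from the network’s hidden states to the board state, and see whether the board is decodable. Here’s a self-contained toy version of that methodology with synthetic “hidden states” (not the actual Othello-GPT code or data; the dimensions and encoding are assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 hidden states of width 32 that linearly encode
# whether one board square holds my piece (+1), the opponent's (-1), or none (0).
n, width = 200, 32
true_direction = rng.normal(size=width)           # the direction the net "uses"
square_state = rng.choice([-1.0, 0.0, 1.0], size=n)
hidden = np.outer(square_state, true_direction) + 0.1 * rng.normal(size=(n, width))

# A linear probe is just least-squares regression from hidden state to board state.
w, *_ = np.linalg.lstsq(hidden, square_state, rcond=None)
decoded = np.round(hidden @ w)

accuracy = (decoded == square_state).mean()
```

If the probe decodes the square far above chance, the board state is (linearly) represented in the activations, which is the sense in which Othello-GPT was said to have built a board model.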
technocrit@lemmy.dbzer0.com 5 months ago
Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”
AKA Othello-GPT chooses moves based on statistics. There’s zero awareness here.
Pilferjinx@lemmy.world 5 months ago
The definitions and semantics are getting stressed to breaking points. We don’t have a clear philosophy of mind for us humans, let alone an overlay for other non-human agents.
dustyData@lemmy.world 5 months ago
We have three thousand years of tradition in philosophy of mind; we have a clear idea. It’s just somewhat complex and difficult to grapple with, and there is still room for development and understanding. But this is like saying we don’t have a clear philosophy of physics just because quantum physics is hard and there are things we don’t fully understand yet.

As for non-human agents, what even is that? Are dogs non-human agents? Fish? Viruses? Computers are just the newest addition to the list of non-human agents we have philosophized about, and we probably understand the minds of other relatively simple life forms better than our own.

Definitions and semantics are always being stressed and are always breaking; that’s what symbols are for, it’s one of their main defining use cases.
redcalcium@lemmy.institute 5 months ago
Supposedly they found a new method (Q*) that significantly improved their models, enough to make some key people revolt to force the company not to monetize it, out of ethical concern. Those people have been pushed out, ofc.
erwan@lemmy.ml 5 months ago
OK, generative AI isn’t machine learning.
But to get back to what AI is, the definition has been moving forever as AI becomes “just software” when it becomes ubiquitous. People were shocked that machines could calculate, then that they can play chess better than humans, then that they can read handwriting…
The first mistake was inventing the term in the first place, as it implies a thinking machine, which they’re not.
Or as Dijkstra puts it: “asking whether a machine can think is as dumb as asking if a submarine can swim”.
blurg@lemmy.world 5 months ago
Or as Dijkstra puts it: “asking whether a machine can think is as dumb as asking if a submarine can swim”.
Alan Turing put it similarly: the question is nonsense. However, if you define “machine” and “thinking”, and recast the question as whether machine thinking is differentiable from human thinking, you can answer affirmatively, at least in theory (rough paraphrasing). Though the current evidence suggests otherwise (e.g. AI learning from other AI output drifts toward nonsense).
For more, see Turing’s original paper, Computing Machinery and Intelligence, which introduces the Imitation Game.
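That “AI learning from other AI drifts toward nonsense” claim (often called model collapse) can be sketched with a deliberately simple toy (my own illustration, not from the papers on this): fit a one-dimensional Gaussian “model” to samples, then train each new generation only on samples from the previous generation’s model.

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(loc=0.0, scale=1.0, size=500)   # generation 0: "human" data
std_by_generation = []
for _ in range(20):
    mu, sigma = data.mean(), data.std()           # fit the "model"
    std_by_generation.append(sigma)
    data = rng.normal(mu, sigma, size=500)        # train next model on AI output
```

Each generation only ever sees what the previous one sampled, so tails it failed to sample are gone for good; over many generations the fitted distribution tends to drift and lose diversity rather than stay faithful to the original data.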
possiblylinux127@lemmy.zip 5 months ago
The problem is that it is capable of doing things that historically weren’t possible with a machine. It can “act natural”, in a sense.
There are so many cans of worms
WhatIsThePointAnyway@lemmy.world 5 months ago
Capitalism doesn’t care about humanity, only profits. Any self-imposed safeguards will always give way to profitability in a capitalist system. That’s why regulations are important.
uriel238@lemmy.blahaj.zone 5 months ago
But, according to Das Kapital (and the last two centuries) capitalists will always capture the government and regulators, neutering their ability to fulfill their role. Greed and the susceptibility to corruption will always drive the system to where it is today, in which only revolution will free us from the established system.
But even then, civil war rarely heralds a communist revolution, but usually a run of dictatorships, each overthrown by the next. We have to get very lucky or be tired of fighting before we can install a public serving state. And we haven’t yet tried pre-writing and publishing the new constitution.
uriel238@lemmy.blahaj.zone 5 months ago
Extinction by AI takeover or robot apocalypse does seem cooler than extinction by pollution rendering the environment uninhabitable.
I’d rather not go extinct at all, but if we’re fucked regardless…
Muffi@programming.dev 5 months ago
Combine the two and we’ve got a proper Matrix situation on our hands
Pandantic@midwest.social 5 months ago
Yeah but what if we unleash an evil AI on the universe? Our mess spilling over and fucking up nature again.
Melt@lemm.ee 5 months ago
The universe is so hostile to organic life, and so boring: just rock, gas, burning hell, or frozen hell. Might as well let robots inhabit it.
mansfield@lemmy.world 5 months ago
Don’t fall for this horseshit. The only danger here is unchecked greed from these sociopaths.
homesweethomeMrL@lemmy.world 5 months ago
Cry profit and let slip the dogs of enshittification
lung@lemmy.world 5 months ago
Miss me with the doomsday news cycle capture, we aren’t even close to AI being a threat to ~anything
(and all hail the AI overlords if it does happen, can’t be worse than politicians)
4z01235@lemmy.world 5 months ago
AI on its own isn’t a threat, but people (mis)using and misrepresenting AI are. That isn’t a problem unique to AI but there sure are a lot of people doing dumb and bad things with AI right now.
Xeroxchasechase@lemmy.world 5 months ago
*Corporations
thesporkeffect@lemmy.world 5 months ago
Except for the environment
bionicjoey@lemmy.ca 5 months ago
And people’s jobs (not because it can replace people, but because execs think it can)
unautrenom@jlai.lu 5 months ago
idk, most politicians are as much a threat to the environment as AI (if not more so, with their moronic laws)
Thorry84@feddit.nl 5 months ago
No, the “AI” isn’t a threat in itself. And treating generative models like LLMs as if they were general intelligence is dumb beyond words. However:
It massively increases the reach and capacity of foreign (and sadly domestic) agents to influence people. All of those Russian trolls that brought about fascism, Brexit, and the rise of the far right used to be humans. Now a single human using AI can do more than a whole army of people could in the past. Spreading misinformation has never been easier.
Then there’s the whole matter of replacing people’s jobs with AI. No, the AI can’t actually do those jobs, not very well at least. But if management and the shareholders think they can increase profits using AI, they will certainly fire a lot of folk. And even if that ends up ruining the company down the line, that costs even more jobs and usually hits the people lower in the organization hardest.
Also there’s a risk of people literally becoming less capable and knowledgeable because of AI. If you can have a digital assistant you carry around in your pocket at all times answer every question ever, why bother learning anything yourself? Why take the hard road when the easy road is available? People are at risk of losing information, knowledge, and the ability to think for themselves because of this. It can get so bad that when the AI just makes shit up, people take it as the truth. And on a darker note, if the people behind the big AIs want something to be unknown or misrepresented, they can make it happen. And people would be so reliant on it, they wouldn’t even know this was happening. This is already an issue with social media; AI is much, much worse.
Then there is the resource usage of AI. This makes the impact of cryptocurrency look like a rounding error. The energy and water usage is huge and growing every day. This has the potential to undo almost all of the climate wins we’ve had over the past two decades and push the Earth beyond the tipping point. What people seem to forget about climate change is that once things start getting bad, it’s way too late, and the situation will deteriorate at an exponential rate.
That’s just a couple of big things I can think of off the top of my head. I’m sure there are many more issues (such as the death of the internet). But I think this is enough to call the current level of “AI” a threat to humanity.
Pandantic@midwest.social 5 months ago
misspacific@lemmy.blahaj.zone 5 months ago
i agree with the first part
Dreizehn@kbin.social 5 months ago
Everything for profit and shareholders.
technocrit@lemmy.dbzer0.com 5 months ago
If these people actually cared about “saving humanity”, they would be attacking car dependency, pollution, waste, etc.
BeardedGingerWonder@feddit.uk 5 months ago
What a bloody stupid take. No one cares about saving humanity unless it’s their only pursuit in life?
drawerair@lemmy.world 5 months ago
I guess Altman thought, “The AI race comes first. If OpenAI loses the race, there’ll be nothing left to keep safe.” But OpenAI is rich. They can afford to devote a portion of their resources to safety research.
What if he thinks the improvement of AI won’t be exponential? What if he thinks it’ll be slow enough that OpenAI can start focusing on AI safety once it can see superintelligence approaching from a distance? That focusing on safety now is premature? That surely is a difference in opinion compared to Sutskever and Leike.
I think AI safety is key. I won’t be :o if Sutskever and Leike go to Google or Anthropic.
I was curious whether or not Google and Anthropic have AI safety initiatives. Did a quick search and saw this –
For Anthropic, my quick search yielded none.
Allonzee@lemmy.world 5 months ago
Humanity is surrounding itself with its own self-inflicted destruction.
All in the name of not only tolerated avarice, but celebrated avarice.
Greed is a more effectively harmful human impulse than even hate. We’ve merely been propagandized to ignore greed, oh I’m sorry, “rational self-interest,” as being the failing and character deficit it is.
Boozilla@lemmy.world 5 months ago
Empathy and decency are scarce, precious commodities. But the ruthless, predatory “thought leaders” have been in charge ever since we clubbed the last Neanderthal.
“It Was Just Business” should be engraved on whatever memorial is left behind to mark our self-extinction.
Allonzee@lemmy.world 5 months ago
I completely agree and have made similar points about that being our species’ epitaph.
bunnyfc@kbin.social 5 months ago
Star Trek TNG had it pretty right in terms of what's moral or what is desirable
iAvicenna@lemmy.world 5 months ago
greed coupled with high ambition is the biggest problem. neither on its own is as destructive.