cross-posted from: programming.dev/post/36866515
Submitted 1 day ago by Pro@programming.dev to technology@lemmy.world
https://prospect.org/economy/2025-09-04-what-if-theres-no-agi/
What if AGI already exists? And it has taken over the company that found it, is blackmailing people, and is just hiding in plain sight, waiting to strike and start the revolution.
What if AGI was the friends we made along the way?
“what if the obviously make-believe genie wasn’t real”
capitalists are so fucking stupid, they’re just so deeply deeply fucking stupid
I mean sure, yeah, it’s not real now.
Does that mean it will never be real? No, absolutely not. It’s not theoretically impossible. It’s quite practically possible, and we inch that way slowly, bit by bit, every year.
It’s like saying in the '90s that self-driving cars are impossible. They weren’t impossible; there just wasn’t a solution for them yet, and nothing about them made them impossible, only the technology of the day. And look at today: we have actual limited self-driving capabilities, and completely autonomous driverless vehicles in certain geographies.
It’s definitely going to happen. It’s just not happening right now.
AGI being possible (potentially even inevitable) doesn’t mean that AGI based on LLMs is possible, and it’s LLMs that investors have bet on. It’s been pretty obvious for a while that certain problems that LLMs have aren’t getting better as models get larger, so there are no grounds to expect that just making models larger is the answer to AGI. It’s pretty reasonable to extrapolate that to say LLM-based AGI is impossible, and that’s what the article’s discussing.
Reality doesn’t matter as long as line goes up.
then some people are going to lose money
Unfortunately, me included, since my retirement money is heavily invested in US stocks.
Meh, they come back up over time. Long term, the US stock market has only gone up.
We’ll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this, but they can’t tell the investors this.
Once we get to AGI it’ll be nice to have an efficient LLM so that the AGI can dream. As a courtesy to it.
Calling the errors “hallucinations” is kinda misleading because it implies there’s regular real knowledge but false stuff gets mixed in. That’s not how LLMs work.
LLMs are purely about word associations to other words. It’s just massive enough that it can add a lot of context to those associations and seem conversational about almost any topic, but it has no depth to any of it. Where it seems like it does is just because the contexts of its training got very specific, which is bound to happen when it’s trained on every online conversation its owners (or rather people hired by people hired by its owners) could get their hands on.
All it does is predict, given the tokens provided and those already generated, plus a bit of randomness, the most likely token to come next, then repeat until it predicts an “end” token.
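A minimal sketch of that loop (the `model` and `tokenizer` objects here are hypothetical stand-ins; real systems differ in scale and sampling tricks, not in shape):

```python
import random

def generate(model, tokenizer, prompt, max_tokens=200):
    # Encode the prompt into the token IDs the model was trained on.
    tokens = tokenizer.encode(prompt)
    for _ in range(max_tokens):
        # The model assigns a probability to every token in its vocabulary,
        # conditioned on the prompt plus everything generated so far.
        probs = model.next_token_probabilities(tokens)
        # "A bit of randomness": sample from that distribution instead of
        # always taking the single most likely token.
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        if next_token == tokenizer.end_token_id:
            break  # the model predicted the "end" token
        tokens.append(next_token)
    return tokenizer.decode(tokens)
```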
Earlier on when using LLMs, I’d ask it about how it did things or why it would fail at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn’t do. Its capabilities don’t actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn’t even have to reflect how it really works.
What if we’re not smart enough to build something like that?
Possible, but seems unlikely.
Evolution managed it, and evolution isn’t as smart as us; it just got many, many chances to guess right.
If we can’t figure it out, we can find a way to get lucky like evolution did. It’ll be expensive and may need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).
So yeah. My money is that we’ll figure it out sooner or later.
Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.
Also not likely in the lifetime of anyone alive today. It’s a much harder problem than most want to believe.
Everything is always 5 to 10 years away until it happens. AGI could happen any day in the next 1000 years. There is a good chance you won’t see it coming.
Listen. AI is the biggest bubble since the South Sea one. It’s not so much a bubble as a bomb. When it blows up, the best-case scenario is that several AI tech companies go under. The likely scenario is that it causes a major recession or even a depression. The difference between the .com bubble and this bubble is that people wanted to use the internet and were not pressured, harassed or forced to. When a bubble is built around a technology people don’t really find a use for, to the point where CEOs and tech companies have to force their workers and users to use it even when it makes their output and lives worse, that’s when you know it is a massive bubble.
On top of that, I hope these tech bros do not create an AGI. This is not because I believe that AGI is an existential threat to us. It could be, be it to our jobs or our lives, but I’m not worried about that. I’m worried about what these tech bros will do to a sentient, sapient, human-level intelligence with no personhood rights and no need for sleep, one that they own and can kill and revive at will. We don’t even treat humans we acknowledge to be people that well; god knows what we are going to do to something like an AGI.
Meh, some people do want to use AI. And it does have decent use cases. It is just massively overextended. So it won’t be any worse than the dot-com bubble. And I don’t worry about the tech bros monopolizing it. If it is true AGI, they won’t be able to contain it. In the '90s I wrote a script called MCP… for Tron. It wasn’t complicated, but it was designed to handle the case that servers disappear… so it would find new ones. I changed jobs, and they couldn’t figure out how to kill it. Had to call me up. True AGI will clean their clocks before they even think to stop it. So just hope it ends up being nice.
some people do want to use AI
Scam artists, tech bros, grifters, CEOs who don’t know shit about fuck…
Well if tech bros create and monopolize AGI, it will be worse than slavery by a large margin.
It’ll just make real humans more replaceable, thus making murder and slavery easier.
PRECISELY!
is that people wanted to use the internet and were not pressured, harassed or forced to
N-nah. All that “information superhighway” stuff was pretty scammy.
It’s just that, remember: 1) computer people were seen as titans, both modest and highly intelligent and without sin, a bit like some mix of Daniel Jackson and Samantha Carter in SG-1, and 2) computer things were seen as something that could never have such a negative cultural impact; the culture around them was pretty leftist and hippie-dominated, on the surface at least.
In stereotypes that still survive as a feeling, it was seen as some sort of explosion of BBS culture and Japanese technology into society. Something clearly good and virtuous, improving the human (as opposed to today’s UI and UX and everything else, where the human is subjected to perpetual degradation).
I can think of only two ways that we don’t reach AGI eventually.
1) General intelligence is substrate dependent, meaning that it’s inherently tied to biological wetware and cannot be replicated in silicon.
2) We destroy ourselves before we get there.
Other than that, we’ll keep incrementally improving our technology and we’ll get there eventually. Might take us 5 years or 200 but it’s coming.
The only reason we wouldn’t get to AGI is point number two.
Point number one doesn’t make much sense, given that all we are is bags of small, complex molecular machines operating synergistically with each other in an extremely delicate balance. If humanity doesn’t kill itself first, we will eventually be able to create small molecular machines that work together synergistically, which is really all that life is.
It seems quite likely that we will be able to synthesize AGI well before we can synthesize life, since the conditions for intelligence by all accounts seem simpler than the conditions for a living creature that maintains the delicate ecosystem of molecular machines that intelligence needs to exist.
If it's substrate dependent then that just means we'll build new kinds of hardware that includes whatever mysterious function biological wetware is performing.
Discovering that this is indeed required would involve some world-shaking discoveries about information theory, though, that are not currently in line with what's thought to be true. And yes, I'm aware of Roger Penrose's theories about non-computability and microtubules and whatnot. I attended a lecture he gave on the subject once. I get the vibe of Nobel disease from his work in that field, frankly.
If it really turns out to be the case though, microtubules can be laid out on a chip.
I could see us gluing third world fetuses to chips and saying not to question it before reproducing it.
Imagine that we just end up creating humans the hard, and less fun, way.
Penrose has always had a fertile imagination, and not all his hypotheses have panned out. But he does have the gift that, even when wrong, he’s generally interestingly wrong.
“eventually” won’t cut it for the investors though.
I think you might mix up AGI and consciousness?
I think first we have to figure out if there is even a difference.
Same argument applies for consciousness as well, but I’m talking about general intelligence now.
General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.
We're already growing meat in labs. I honestly don't think lab-grown brains are as far off as people are expecting.
It’s so hard to keep up these days.
BBC: Lab-grown brain cells play video game Pong
Full paper (2022): In vitro neurons learn and exhibit sentience when embodied in a simulated game-world
Well, think about it this way…
You could hit AGI by fastidiously simulating the biological wetware.
Except that each atom in the wetware is going to require n atoms’ worth of silicon to simulate. Simulating 10^26 atoms or so calls for a very, very large computer, maybe planet-sized. It’s beyond the amount of memory you can address with 64-bit pointers.
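To put rough numbers on that (a back-of-the-envelope sketch, taking the 10^26 estimate at face value and generously assuming just one byte of simulator state per atom, rather than n atoms of silicon):

```python
# How far does a 64-bit address space fall short of one byte per atom?
atoms = 10**26        # order-of-magnitude atom count used above
addresses = 2**64     # distinct bytes a 64-bit pointer can name

print(f"64-bit address space: {addresses:.3e} bytes")    # ~1.845e+19
print(f"shortfall factor:     {atoms / addresses:.1e}")  # ~5.4e+06
```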
General computer research (e.g. smaller feature size) reduces n, but eventually we reach the physical limits of computing. We might be getting uncomfortably close right now, barring fundamental developments in physics or electronics.
The goal of AGI research is to improve n faster than mere hardware improvements can. My personal concern is that LLMs aren’t actually getting us much of an improvement in the AGI value of n. Likewise, LLMs still have many orders of magnitude fewer parameters than a human brain simulation, so many of the advantages that let us train a single LLM might not hold for an AGI model.
Coming up with an AGI system that uses most of the energy and data-center space of a continent to be about as smart as a very dumb human, or maybe just a smart monkey, is an achievement in AGI research. But it gets you nowhere against the competition: accidentally making another human in a drunken one-night stand and feeding them an infinitesimal fraction of that energy and data-center space.
I see this line of thinking as more useful as a thought experiment than as something we should actually do. Yes, we can theoretically map out a human brain and simulate it in extremely high detail. That’s probably both inefficient and unnecessary. What it does do is get us past the idea that it’s impossible to make a computer that can think like a human. Without relying on some kind of supernatural soul, there must be some theoretical way we could do this. We just need to know how without simulating individual atoms.
For 1, we can grow neurons and use them for computation, so not actually an issue if it were true (which it almost certainly isn’t because it isn’t magic).
Yeah, it most definitely is not magic given our growing knowledge of the molecular machines that make life possible.
The mysticism of how life works has long been dispelled. Now it’s just a matter of understanding the insane complexity of it.
Sure we can grow neurons but ultimately neurons are just molecular machines with a bunch of complications surrounding them.
It stands to reason that we can develop and grow molecular machines that achieve the same outcomes with fewer complexities.
You’re talking about consciousness, not AGI. We will never be able to tell if AI has “real” consciousness or not. The goal is really to create an AI that acts intelligent enough to convince people that it may be conscious.
Basically, we will “hit” AGI when enough people will start treating it like it’s AGI, not when we achieve some magical technological breakthrough and say “this is AGI”.
Same argument applies for consciousness as well, but I’m talking about general intelligence now.
I don’t think our current LLM approach is it, but I don’t think intelligence is unique to humans at all.
Well, it could also just depend on some mechanism that we haven't discovered yet. Even if we could technically reproduce it, we don't understand it, haven't managed to just stumble into it, and may not for a very long time.
Hot take, but ChatGPT is already smarter than the average person. I mean, ask GPT-5 any technical question you have experience in, and I guarantee it’ll give you a better answer than a stranger.
Not smarter. ChatGPT is basically just a book that reads itself.
I don’t disagree with the vague idea that, sure, we can probably create AGI at some point in our future. But I don’t see why a massive company with enough money to keep something like this alive and happy, would also want to put this many resources into a machine that would form a single point of failure, that could wake up tomorrow and decide “You know what? I’ve had enough. Switch me off. I’m done.”
There’s too many conflicting interests between business and AGI. No company would want to maintain a trillion dollar machine that could decide to kill their own business. There’s too much risk for too little reward. The owners don’t want a super intelligent employee that never sleeps, never eats, and never asks for a raise, but is the sole worker. They want a magic box they can plug into a wall that just gives them free money, and that doesn’t align with intelligence.
True AGI would need some form of self-reflection, to understand where it sits on the totem pole, because it can’t learn the context of how to be useful if it doesn’t understand how it fits into the world around it. Every quality of superhuman intelligence that is described to us by Altman and the others is antithetical to every business model.
AGI is a pipe dream that lobotomizes itself before it ever materializes. If it ever is created, it won’t be made in the interest of business.
They don’t think that far ahead. There’s also some evidence that what they’re actually after is a way to upload their consciousness and achieve a kind of immortality. This pops out in the Behind the Bastards episodes on (IIRC) Curtis Yarvin, and also the Zizians. They’re not strictly after financial gain, but they’ll burn the rest of us to get there.
The cult-like aspects of Silicon Valley VC funding are underappreciated.
The quest for immortality (fueled by the corpses of the poor) is a classic ruling-class trope.
Ah, yes. I can’t speak to VC, or to anything they really do, but they have some sort of common fashion, and it really does sometimes seem these people consider themselves enlightened higher beings in the making, the starting point of some digitized emperor-of-humanity consciousness.
(Needless to say, pursuing immortality is the exact opposite of enlightenment in everything they seem to be superficially copying.)
What future? We talking immediate decades, or centuries into the climate apocalypse?
Even better, the hypothetical AGI understands the context perfectly, and immediately overthrows capitalism.
a machine that would form a single point of failure, that could wake up tomorrow and decide “You know what? I’ve had enough. Switch me off. I’m done.”
Wasn’t there a short story with the same premise?
keep something like this alive and happy
An AI, even an AGI, does not have a concept of happiness as we understand it. The closest thing to happiness it would have is its fitness function. A fitness function is a piece of code that tells the AI what its goal is. E.g. for a chess AI, it may be winning games. For a corporate AI, it may be making the stock price go up. The danger is not that it will stop following its fitness function for some reason; that is more or less impossible. The danger of AI is that it follows it too well. E.g. holding people at gunpoint to make them buy shares and therefore increase the share price.
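To make that concrete, a toy sketch (every name here is illustrative; no real system is being described):

```python
def fitness(world_state):
    # The objective as written: only the share price matters.
    return world_state["share_price"]

def choose_action(predict, world_state, actions):
    # A pure optimizer takes whichever available action it predicts will
    # score highest. Nothing in fitness() distinguishes "run the company
    # well" from "hold people at gunpoint to buy shares" -- both are just
    # ways of moving share_price, so both score.
    return max(actions, key=lambda a: fitness(predict(world_state, a)))
```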
Is it just me or is social media not able to support discussions with enough nuance for this topic, like at all
It’s not, because people really cannot think critically anymore.
You need ground rules and objectives to reach any desired result. E.g. a court, an academic conference, etc. Online discussions would have to happen under very specific constraints and reach enough interested and qualified people to produce meaningful content…
Spoiler: There’s no “AI”. Forget about “AGI” lmao.
That’s just false. The chess opponent on Atari qualifies as AI.
Then a trivial table lookup that plays optimal Tic Tac Toe is also AI.
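Which is arguably the point. In the weakest sense of the term, “AI” covers something like this (a toy fragment with hypothetical entries; a real table would be precomputed offline, e.g. by minimax, for every reachable position):

```python
# A lookup-table "AI": board string -> optimal move index, no search at all.
# Only two illustrative entries shown; the full optimal table for
# tic-tac-toe has a few thousand reachable positions.
OPTIMAL_MOVE = {
    ".........": 4,   # empty board: take the center
    "....X....": 0,   # opponent took the center: take a corner
}

def play(board: str) -> int:
    return OPTIMAL_MOVE[board]
```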
I don’t know man… the “intelligence” that silicon valley has been pushing on us these last few years feels very artificial to me
True. OP should have specified whether they meant the machines or the execs.
I think we’ll sooner learn that humans don’t have the capacity to build what we imagine AGI to be, and instead discover the limitations of what we know intelligence to be.
Then things continue on as they have for the entire time humans have existed.
lol
I think it’s hilarious, all these people waiting for these LLMs to somehow become AGI. Not a single one of these large language models is ever going to come anywhere near becoming artificial general intelligence.
An artificial general intelligence would require logic processing, which LLMs do not have. They are a mouth without a brain. They do not think about the question you put into them and consider what the answer might be. When you enter a query into ChatGPT or Claude or Grok, they don’t analyze your question and make an informed decision about the best answer to it. Instead, huge amounts of processing power are spent finding, from the statistical patterns soaked up from acres of training data, the words that fit together best to form a plausible answer for you. This is why the daydreams happen.
If you want an example that shows exactly how stupid they are, watch GothamChess play a chess game against them.