
Human-level AI is not inevitable. We have the power to change course

109 likes

Submitted 3 days ago by Davriellelouna@lemmy.world to technology@lemmy.world

https://www.theguardian.com/commentisfree/ng-interactive/2025/jul/21/human-level-artificial-intelligence


Comments

  • terrific@lemmy.ml 3 days ago

    We’re not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    • ZILtoid1991@lemmy.world 1 day ago

      “Dude trust me, just give me 40 billion more dollars, lobby for complete deregulation of the industry, and get me 50 more petabytes of data, then we will have a little human in the computer! RealshitGPT will have human level intelligence!”

    • Perspectivist@feddit.uk 2 days ago

      We’re not even remotely close.

      That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

      • ExLisper@lemmy.curiana.net 2 days ago

        The truth is, you couldn’t possibly know either way.

        I think the argument is that we’re not remotely close when considering the specific techniques used by the current generation of AI tools. Of course, someone could make a new discovery any day and achieve AGI, but that’s a different discussion.

      • terrific@lemmy.ml 1 day ago

        That’s true in a somewhat abstract way, but I just don’t see any evidence of the claim that it is just around the corner. I don’t see what currently existing technology can facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don’t have the technology.

        On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.

    • qt0x40490FDB@lemmy.ml 2 days ago

      How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And expertise is not “I can download Python libraries and use them”; it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges”.

      • AnarchoEngineer@lemmy.dbzer0.com 2 days ago

        Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

        I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel, programming-wise, lol), so I’ve written my own neural nets from scratch a few times.

        Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
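
        A minimal sketch of that training loop in plain numpy (all names and numbers here are illustrative, not from any particular library):

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))      # 100 samples, 3 input features
        true_w = np.array([2.0, -1.0, 0.5])
        y = X @ true_w                     # the "specific response in mind"

        w = np.zeros(3)                    # weights to be learned
        lr = 0.1                           # learning rate

        for _ in range(200):
            pred = X @ w                   # actual outcome
            error = pred - y               # difference from desired outcome
            grad = X.T @ error / len(X)    # gradient of the mean squared error
            w -= lr * grad                 # change weights to minimize that error

        print(w)                           # ends up close to true_w
        ```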

        This approach has two major issues that stand in the way of AGI: input size limits, and determinism.

        The weight matrices are sized for a fixed number of inputs. Unfortunately, you can’t just add a new input unit and assume the weights will be nearly the same; you have to retrain the entire network. (Reusing trained weights in a new setting is studied under the name transfer learning, if you want to learn more.)
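
        A toy illustration of that shape constraint (hypothetical numbers):

        ```python
        import numpy as np

        w = np.zeros((4, 3))    # weights trained for exactly 3 inputs

        w @ np.ones(3)          # fine: 3 inputs, as trained
        w @ np.ones(4)          # ValueError - a 4th input simply doesn't fit,
                                # so the whole network must be retrained
        ```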

        This input constraint stands in the way of AGI because a network trained like this cannot take an input larger than a certain size. That’s a problem, since the illusion of memory that LLMs like ChatGPT have comes from running the entire conversation through the net. It’s also a problem from a size and training-time perspective, since increasing the input size sharply increases basically everything else (attention alone scales quadratically with input length in transformers).

        Point is, current models can only simulate memory by literally holding onto all the information and reprocessing all of it for each new word, which means their memory is limited unless you retrain the entire net to know the answers you want. (And it’s slow af.) Doesn’t sound like a mind to me…
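
        A toy sketch of that “memory” illusion; `generate` here is a stand-in for a model call, not any real API:

        ```python
        def generate(prompt: str) -> str:
            return "..."  # imagine a next-word predictor running here

        transcript = ""
        for user_msg in ["Hi!", "What did I just say?"]:
            transcript += f"User: {user_msg}\nAssistant: "
            reply = generate(transcript)  # the WHOLE history is reprocessed each turn
            transcript += reply + "\n"
        # once the transcript outgrows the net's fixed input size,
        # the oldest text simply falls out of view - the "memory" ends there
        ```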

        Now, determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They are literally just a complicated predictive algorithm, like linear regression. I’m dead serious. It’s basically regression, just in a very high-dimensional vector space.

        ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation, because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.
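
        A tiny sketch of that training signal on a three-word vocabulary (everything here is illustrative):

        ```python
        import numpy as np

        vocab = ["the", "cat", "sat"]
        W = np.zeros((len(vocab), len(vocab)))  # context word -> next-word scores

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        context, target = 0, 1                  # training pair: "the" -> "cat"
        for _ in range(100):
            probs = softmax(W[:, context])      # predicted next-word distribution
            grad = probs.copy()
            grad[target] -= 1.0                 # cross-entropy gradient: prediction minus truth
            W[:, context] -= 0.5 * grad         # the "some math" that fixes the weights

        print(vocab[int(np.argmax(W[:, context]))])  # "cat"
        ```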

        All these models do is what they were trained to do. They were trained to predict human responses, so yeah, they sound pretty human. They were trained to reproduce answers from Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it’s kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers that were previously set by an input to find the most likely next word.

        This is why LLMs can’t do math. They don’t actually see the numbers; they don’t know what numbers are. They don’t know anything at all, because they’re incapable of thought. Instead, there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently, or just by surrounding it with different words, because the model was never trained for that scenario.
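
        A deliberately crude caricature of that failure mode (not how any real model is implemented, just the pattern-lookup intuition):

        ```python
        # "Arithmetic" as pattern completion: answers come from patterns
        # seen in training, not from computing anything.
        seen_in_training = {
            "2 + 2 =": "4",
            "what is 2 + 2?": "4",
        }

        def pattern_model(prompt: str) -> str:
            return seen_in_training.get(prompt, "<statistically likely guess>")

        print(pattern_model("2 + 2 ="))              # "4" - the pattern was in the data
        print(pattern_model("two plus two equals"))  # guess - same math, unseen phrasing
        ```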

        Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just say “you were wrong”, because the model has no plasticity (it’s incapable of changing from inputs alone). You have to train it with the correct response in mind to get it to “learn”, which again takes time and really isn’t learning or intelligence at all.

        Now, there are some more exotic neural network architectures that could surpass these limitations.

        Currently I’m experimenting with spiking neural nets (SNNs), which are much more capable of transfer learning and more closely model biological neurons, along with other cool features like handling temporal changes in input well.

        However, there are significant obstacles with these networks and not as much research, because they only run well on specialized hardware (they’re meant to mimic biological neurons, which run simultaneously) and you kind of have to train them slowly.

        You can do some tricks to use gradient descent, but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building neuromorphic hardware for them).

        SNNs with time-based learning rules (typically some form of STDP, which mimics Hebbian learning in biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete, time-dependent waves of continuous self-modifying spike patterns which could theoretically be thoughts”, not as in “we can make something that thinks.”
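
        For the curious, a minimal sketch of a pair-based STDP update; the constants are illustrative, not from any particular paper:

        ```python
        import numpy as np

        A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
        TAU = 20.0                      # ms, width of the timing window

        def stdp_dw(t_pre: float, t_post: float) -> float:
            """Weight change from a single pre/post spike pair."""
            dt = t_post - t_pre
            if dt > 0:   # pre fired before post: strengthen ("fire together, wire together")
                return A_PLUS * np.exp(-dt / TAU)
            else:        # post fired first: weaken
                return -A_MINUS * np.exp(dt / TAU)

        w = 0.5
        w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pairing -> w goes up
        w += stdp_dw(t_pre=30.0, t_post=25.0)   # anti-causal pairing -> w goes down
        print(w)   # weights change in real time, spike by spike
        ```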

        Like, these neural nets are good with sensory input, and that’s about as far as we’ve gotten (hyperbole, but not by much). But these networks are still fascinating, and they do help us test theories about how the human brain works, so maybe we’ll eventually make a genuinely intelligent being with them. That day isn’t even on the horizon currently, though.

        In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.

        The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move toward AGI territory.

        Lying and saying we are close to AGI when we aren’t at all close, however, is economically favorable, which is why you get headlines like this.

      • terrific@lemmy.ml 2 days ago

        Do you have any expertise on the issue?

        I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.

        IMHO, there is simply nothing indicating that it’s close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current “reasoning models” still don’t actually reason. They are just LLMs with some extra steps.

        There is lots of information out there on the topic so I’m not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.

      • Eranziel@lemmy.world 2 days ago

        Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is not an agreed upon answer to either of those.)

        That said, one of the main purposes of AGI would be the ability to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.

        In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.

    • cyd@lemmy.world 1 day ago

      In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge-retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they’re not sentient or self-motivated, but I’m not sure those are desirable or useful dimensions of intellect to work toward anyway.

      • terrific@lemmy.ml 1 day ago

        I think that’s a very generous use of the word “superintelligent”. They aren’t anything like what I associate with that word anyhow.

        I also don’t really think they are knowledge retrieval engines. I use them extensively in my daily work, for example to write emails and generate ideas. But when it comes to facts they are flaky at best. It’s more of a free association game than knowledge retrieval IMO.

  • Asafum@feddit.nl 3 days ago

    Ummm, no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

    If capital wants it capital gets it. :(

    • drapeaunoir@lemmy.dbzer0.com 3 days ago

      😳 unless we destroy capitalism? 👉🏾👈🏾

      • masterofn001@lemmy.ca 3 days ago

        The only problem with destroying capitalism is deciding who gets all the nukes.

    • BroBot9000@lemmy.world 3 days ago

      Use Linux and don’t have any of those issues.

      Get off the capitalist owned platforms.

    • scarabic@lemmy.world 2 days ago

      Couldn’t we have a good old-fashioned Butlerian jihad?

    • qt0x40490FDB@lemmy.ml 2 days ago

      In the US, sure, but there have been class revolts in other nations. I’m not saying they led to good outcomes, but King Louis XVI was rich, and being rich did not save him. There was a capitalist class in China during the Cultural Revolution; they didn’t make it through. If it means we won’t go extinct, why can’t we have a revolution to prevent extinction?

  • gandalf_der_12te@discuss.tchncs.de 2 days ago

    AI will not threaten humans due to sadism or boredom, but because it takes jobs and leaves people unemployed.

    The real crisis is one of sinking wages, lack of social safety nets, and lack of future perspective for workers. That’s what should actually be discussed.

    • Vinstaal0@feddit.nl 2 days ago

      Not sure we will even really notice that in our lifetime; it’s taking decades to automate things like invoice processing. Heck, in the US they can’t even get proper bank connections set up.

      Also, tractors replaced a lot of workers on the land, and computers both eliminated a lot of office jobs and created a lot at the same time.

      Jobs will change, that’s for sure, and I think most heavy-labour jobs will become more expensive, since they are harder to replace.

    • Zorque@lemmy.world 2 days ago

      But scary robots will take over the world! That’s what all the movies are about! If it’s in a movie, it has to be real.

  • Perspectivist@feddit.uk 2 days ago

    The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

    1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

    2. Or we wipe ourselves out before we get the chance.

    Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That’s what humans do: improve our technology.

    The article points to cloning as a counterexample, but that’s not a technological dead end; it’s a moral boundary. If you think we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.

    • rottingleaf@lemmy.world 2 days ago

      something that cannot, even in principle, be replicated in silicon

      As if silicon were the only technology we have to build computers.

      • Perspectivist@feddit.uk 2 days ago

        Did you genuinely not understand the point I was making, or are you just being pedantic? “Silicon” obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as “in non-biological substrates,” I’m happy to oblige - but I have a feeling you already knew that.

  • markovs_gun@lemmy.world 2 days ago

    Why would we want to? 99% of the issues people have with “AI” are just problems with society more broadly that AI didn’t really cause, only exacerbated. I think it’s absurd to just reject this entire field because of a bunch of shitty fads going on right now with LLMs and image generators.

  • SparrowHawk@feddit.it 2 days ago

    A lot of people are making baseless claims about it being inevitable… I mean, it could happen, but solving the hard problem of consciousness is not inevitable.

  • Codpiece@feddit.uk 2 days ago

    Human level? That’s not setting the bar very high. Surely the aim would be to surpass humans, or why bother?

    • Outwit1294@lemmy.today 2 days ago

      Yeah. Cheap labor is so much better than this bullshit

  • SpicyLizards@reddthat.com 2 days ago

    We can change course if we can change course on capitalism

  • Etterra@discuss.online 2 days ago

    Honestly I welcome our AI overlords. They can’t possibly fuck things up harder than we have.

    • AngryRobot@lemmy.world 2 days ago

      Can’t they?

  • Deathgl0be@lemmy.world 2 days ago

    It’s just a cash grab to take people’s jobs and give them to a chatbot fed Wikipedia’s data on crack.

    • Perspectivist@feddit.uk 2 days ago

      Don’t confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn’t be further apart when it comes to cognitive capabilities.
