
Nvidia CEO Jensen Huang says ‘I think we’ve achieved AGI’

⁨127⁩ ⁨likes⁩

Submitted ⁨⁨3⁩ ⁨weeks⁩ ago⁩ by ⁨return2ozma@lemmy.world⁩ to ⁨technology@lemmy.world⁩

https://www.theverge.com/ai-artificial-intelligence/899086/jensen-huang-nvidia-agi

Comments

  • Peruvian_Skies@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    Sure you do. It’s not at all a transparent attempt to prolong the bubble.

  • Technus@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

    I only have a rather high level understanding of current AI models, but I don’t see any way for the current generation of LLMs to actually be intelligent or conscious.

    They’re entirely stateless, once-through models: any activity in the model that could be remotely considered “thought” is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.

    That’s why it’s so stupid to ask an LLM “what were you thinking”, because even it doesn’t know! All it’s going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
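
The once-through behavior described above can be sketched in a few lines of toy Python (a stand-in, not a real model): the only thing that survives from one step to the next is the token sequence itself.

```python
# Toy sketch of stateless, once-through generation. `forward` fakes a
# transformer pass; the point is that its internal activations are
# discarded after every token, and only the emitted token is kept.

def forward(tokens):
    """Stand-in for a model pass: returns (next_token, activations)."""
    activations = [hash(t) for t in tokens]   # fake internal state
    next_token = sum(activations) % 50000     # fake sampling
    return next_token, activations

def generate(prompt_tokens, n_steps):
    context = list(prompt_tokens)
    for _ in range(n_steps):
        next_token, activations = forward(context)
        del activations             # any "thought" is lost right here
        context.append(next_token)  # only the token survives the step
    return context

out = generate([1, 2, 3], 4)
```

So when you ask "what were you thinking", the `activations` that could have answered that are long gone; all the model has is `context`.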

    • thinkercharmercoderfarmer@slrpnk.net ⁨3⁩ ⁨weeks⁩ ago

      There’s no reason an LLM couldn’t be hooked up to a database, where it can save outputs and then retrieve them again to “think” further about them. In fact, any LLM that can answer questions about previous prompts/responses has to be able to do this. If you prompted an LLM to review all of its database entries, generate a new response based on that data, then save that output to the database and repeat at regular intervals, I could see calling that a kind of thinking. If you do the same process but with the whole model and all the DB entries, that’s in the region of what I’d call a strange loop. Is that AGI? I don’t think so, but I also don’t know how I would define AGI, or if I’d recognize it if someone built it.
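
The loop being described could be sketched like this (purely hypothetical: `call_llm` stands in for any completion API, and the "database" is just a list):

```python
# Hypothetical review-and-save loop: the model re-reads everything it has
# stored, generates a new note, and stores that too.

def call_llm(prompt):
    """Placeholder for a real model call."""
    return f"reflection on {len(prompt)} chars of prior notes"

def reflection_step(db):
    """Review all stored entries, generate a new entry, save it."""
    prior = "\n".join(db)
    new_entry = call_llm(prior)
    db.append(new_entry)
    return new_entry

db = ["initial observation"]
for _ in range(3):        # "repeat at regular intervals"
    reflection_step(db)
```

Each pass sees all previous passes, which is what makes it loop-like rather than once-through.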

      • Technus@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

        If you prompted an LLM to review all of its database entries, generate a new response based on that data, then save that output to the database and repeat at regular intervals, I could see calling that a kind of thinking.

        That’s kind of what the current agentic AI products like Claude Code do. The problem is context rot. When the context window fills up, the model loses the ability to distinguish between what information is important and what’s not, and it inevitably starts to hallucinate.

        The current fixes are to prune irrelevant information from the context window, use sub-agents with their own context windows, or just occasionally start over from scratch. There are also now conventions like AGENTS.md and CLAUDE.md files, where you can store long-term context and standing “advice” for the model, which is automatically read into the context window.
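
A rough sketch of the pruning mitigation, with illustrative names and sizes (real tools are far more sophisticated): keep the pinned long-term notes always, and drop the oldest messages once the window is full.

```python
# Illustrative context pruning: a pinned AGENTS.md-style note is always
# included; older messages are dropped, newest-first, to fit a budget.

MAX_TOKENS = 1000

def approx_tokens(text):
    return len(text.split())  # crude stand-in for a real tokenizer

def build_context(pinned_notes, messages):
    """Always include pinned notes; prune oldest messages to fit."""
    budget = MAX_TOKENS - approx_tokens(pinned_notes)
    kept = []
    used = 0
    for msg in reversed(messages):   # walk newest first
        cost = approx_tokens(msg)
        if used + cost > budget:
            break                    # everything older gets pruned
        kept.append(msg)
        used += cost
    return [pinned_notes] + list(reversed(kept))
```

The failure mode the parent describes happens exactly because this kind of heuristic can't know which old message was actually important.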

        However, I think an AGI inherently would need to be able to store that state internally, to have memory circuits, and “consciousness” circuits that are connected in a loop so it can work on its own internally encoded context. And ideally it would be able to modify its own weights and connections to “learn” in real time.

        The problem is that would not scale to current usage because you’d need to store all that internal state, including potentially a unique copy of the model, for every user. And the companies wouldn’t want that because they’d be giving up control over the model’s outputs since they’d have no feasible way to supervise the learning process.

      • ag10n@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        That’s what an LLM is, a database of words using vectors.

        You’re still limited by the context window in your example; giving it another source of information doesn’t do anything other than add more context.

      • SparroHawc@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        You still lose the internal state between each token in the database output. It would let it plan, but it would still be externalizing that planning, one token at a time. Condensing all of the internal state into a single token at a time still means huge losses in detail as well as fragmentation of responses, resulting in all the problems that you see with LLMs.

        Somehow the actual internal state needs to not only be preserved, but fed back into itself. That’s how brains work. Condensing it into tokens isn’t enough.
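
The contrast can be made concrete with a toy recurrence (numbers purely illustrative): one loop feeds its full state back into itself, the other is forced to collapse that state into a single token between steps.

```python
# Toy contrast: preserving rich internal state across steps versus
# squeezing it through a one-token bottleneck at every step.

def step(state, x):
    """One recurrence: new state depends on the whole previous state."""
    return [s * 0.5 + x for s in state]

def run_with_state(inputs):
    state = [0.0, 0.0, 0.0]          # rich state, carried forward intact
    for x in inputs:
        state = step(state, x)
    return state

def run_via_tokens(inputs):
    token = 0                        # the only thing surviving each step
    for x in inputs:
        state = step([float(token)] * 3, x)
        token = round(sum(state))    # lossy: state collapsed to a token
    return token
```

The second loop throws away most of `state` at every step, which is the "huge losses in detail" point above.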

    • Modern_medicine_isnt@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I agree, but not because of lost state. As mentioned by others, state can be managed. You could also just do a feedback loop. These improve things, but don’t solve them. The issue is that it doesn’t understand. You mention that it is just a word predictor, and that is the heart of it. It predicts based on odds, more or less, not on understanding. That said, it has room to improve. I think having lots and lots of agents that are highly specialized is probably the key: the more narrow the focus, the closer prediction comes to fact. Then throw in asking 5 versions of the agent the same question and tossing the outliers, and you should get something pretty useful. Not AGI, but useful. The issue is that with current technology, that is simply too expensive, so a breakthrough in the cost of current AI is needed first; then we can get more useful AI. AGI will be a significantly different technology.
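
The "ask 5 versions and toss the outliers" idea might look something like this sketch, where `ask_agent` is a placeholder returning canned numeric answers:

```python
# Sketch of ensemble voting with outlier rejection: query several
# independent agents, drop answers far from the median, vote on the rest.

import statistics

def ask_agent(question, seed):
    """Placeholder for one independent model call."""
    canned = {0: 42, 1: 41, 2: 42, 3: 97, 4: 42}  # one outlier (97)
    return canned[seed]

def ensemble_answer(question, n=5):
    answers = [ask_agent(question, i) for i in range(n)]
    med = statistics.median(answers)
    # discard answers far from the median, then take the most common
    kept = [a for a in answers if abs(a - med) <= 5]
    return statistics.mode(kept)
```

The outlier (97) gets discarded before the vote; the cost objection stands, since this needs n full model calls per question.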

      • Technus@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        The conversion of the output to tokens inherently loses a lot of the information extracted by the model and any intermediate state it has synthesized (what it “thinks” of the input).

        Until the model is able to retain its own internal state and able to integrate new information into that state as it receives it, all it will ever be able to do is try to fill in the blanks.

  • RedFrank24@piefed.social ⁨3⁩ ⁨weeks⁩ ago

    So why do we need Jensen Huang?

    • MrVilliam@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

      Exactly. CEO is maybe the easiest job for an AI to take over, so an AGI is possibly the most perfect candidate for that role.

      Put up or shut up, tech bro CEOs. Replace yourself if it’s so fucking amazing.

      • kkj@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

        AIs can’t play golf.

    • wewbull@feddit.uk ⁨3⁩ ⁨weeks⁩ ago

      Why do we need any of them? They’ve completed the job. All future plans cancelled.

  • meme_historian@lemmy.dbzer0.com ⁨3⁩ ⁨weeks⁩ ago

    Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”

    So we’ve achieved AGI in the sense that it could replace a nonsensical fart-sniffing clown, hyping a horde of morons into valuing a company at orders of magnitude above its actual worth?

  • baggachipz@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    Image

  • IchNichtenLichten@lemmy.wtf ⁨2⁩ ⁨weeks⁩ ago

    If I was a NVDA investor, I’d be worried. This clown is doing nothing but gaslighting and lying these days.

    • cheat700000007@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      But you’re wrong, you’re all wrong!

  • Kolanaki@pawb.social ⁨3⁩ ⁨weeks⁩ ago

    Average Gaslighting Idiot.

    AKA “a CEO.”

  • SoloCritical@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    No… you haven’t.

  • AudaciousArmadillo@piefed.blahaj.zone ⁨3⁩ ⁨weeks⁩ ago

    Oh yes we have achieved AGI! But what we really need is Artificial General Super Intelligence! Just another trillion and it will be useful bro!

  • entropiclyclaude@lemmy.wtf ⁨2⁩ ⁨weeks⁩ ago

    These fuckers will claim whatever nonsense to keep themselves relevant enough to take on more debt before they collapse.

    • Rekorse@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      They are going to create a success story where someone becomes a billionaire with an AI doing everything. Then idiots will chase that dream for a hundred years and fill these rich fucks’ bank accounts.

    • awake@lemmy.wtf ⁨2⁩ ⁨weeks⁩ ago

      Looking at their history, they’ve always been able to create markets for their GPUs, and AI has obviously been incredible for them. There will be a next hot thing after AI, and they’ll try to own that too. The alternatives to CUDA are not there yet; ROCm is still lacking and fiddly. I see a lot of things happening, but NVIDIA collapsing, for whatever reason, is not one of them.

    • fierysparrow89@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I agree; they’re starting to sound desperate to keep their current momentum going. I think the bubble will burst soon. Things look solid until they’re not.

  • rizzothesmall@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    Literally the story above this in my feed is OpenAI shutting down expensive services 😂

    You goofy goobers

  • CeeBee_Eh@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    This guy has completely lost the plot. I don’t think it’s possible to be even more disconnected from reality.

  • MonkderVierte@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

    The Turing thing again? Like, lots of dog owners would swear their dog is smarter than a cat. But dogs are only better at reading their human.

    • wewbull@feddit.uk ⁨3⁩ ⁨weeks⁩ ago

      Cats may be able to read their human just as well or better, but as they don’t give a shit, there’s no feedback to base anything on.

  • ThunderComplex@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

    >You think you’ve achieved AGI
    >I know you haven’t

    We are not the same

  • kewjo@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    if agi then why still jobs?

    • VindictiveJudge@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Fun fact: if true AGI were a thing, those AI programs would be people and not paying them for their work would be slavery.

      • CheeseNoodle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        This is honestly one of the scarier parts about the rhetoric, they’re basically implying they would happily enslave a sentient being.

  • Frenchgeek@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

    Started lying at the second word, then.

  • Modern_medicine_isnt@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    That man is a verbal slut. He will say anything.

    • HertzDentalBar@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

      Maybe he’s the AI? Hence why he just says shit investors want to hear.

      • Modern_medicine_isnt@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Even the AI doesn’t say as many bullshit things as he does. Though I guess if you gave it the instructions “say anything that might make the nvidia stock price go up” then an AI might say the bullshit he does.

  • Dindonmasker@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    Guys, I think I just found AGI in my gramps’ old stuff.

  • andallthat@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    “my chatbot told me so!”

  • acosmichippo@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    fart sniffer

  • PushButton@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    How can we take this idiot seriously? First the DLSS slop, then telling us we’re wrong about it (this guy telling me what I prefer), and now “we’ve achieved AGI”…

    How low can he fall?

  • baller_w@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    Worth a read if anyone is interested: newyorker.com/…/what-is-claude-anthropic-doesnt-k…

    My favorite part is that Anthropic has a bot in the cafeteria that orders what staff request; if the bank balance goes to zero or negative, it loses and has to close up shop.

    So far, nearly all employees have a 1” tungsten cube on their desk that some managed to get for free with a fake 100%-off coupon.

    It’s a fun experiment in what happens when these agents start doing things in the real world and I commend Anthropic for putting it on display. A real hype train killer.

    As a technologist, I work with them all day, every day. I wouldn’t trust them to do my laundry without oversight, let alone run a business.

  • NotMyOldRedditName@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    How many R’s are in strawberry?

  • neclimdul@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Doubt

  • Formfiller@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    AI is Wack

  • duncan_bayne@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I’ll believe him when he tears off his skin suit.

  • Avicenna@programming.dev ⁨2⁩ ⁨weeks⁩ ago

    Eh, I was wondering whose turn it was to claim it this year. Turns out it’s another guy who is balls-deep invested in AI.
