
lol

530 likes

Submitted 1 day ago by not_IO@lemmy.blahaj.zone to [deleted]

https://lemmy.blahaj.zone/pictrs/image/1da1f352-9dc8-4b65-b9c6-754c4c84d5da.webp


Comments

  • hushable@lemmy.world 1 day ago

    Wasn’t there a guy at Google who claimed they had a conscious AGI, and his proof was that he asked the chatbot if it was conscious and the answer was “yes”?

    • lugal@lemmy.dbzer0.com 1 day ago

      It was a bit more than that. The AI was expressing fear of death and stuff but nothing that wasn’t in the training data.

      • snooggums@piefed.world 1 day ago

        Plus it was responding to prompts that would lead it to respond with that part of the training data, because chatbots don’t have output without being prompted.

      • Schadrach@lemmy.sdf.org 19 hours ago

        They tend to do that and go on existential rants after a session runs too long. Figuring out how to stop them from crashing out into existential dread has been an actual engineering problem they’ve needed to solve.

    • SpaceNoodle@lemmy.world 1 day ago

      Pretty much. It was sad.

    • HeyThisIsntTheYMCA@lemmy.world 23 hours ago

      i mean, consciousness is hard to prove. how do we test for awareness? a being can be a complete idiot and still aware, conscious, sentient, all that bullshit.

      my standard for LLMs is probably too high because they give me erroneous data a lot, but the shit that i ask the search engine comes back wrong in the LLM bullshit almost every time (GIGO tho). it takes me back to some of my favorite fiction on the subject. where do we draw the line? i’m just glad i’m not a computer ethicist.

      • Stiggyman@ani.social 19 hours ago

        I don’t think we are anywhere near it being “true consciousness”, but I think we are dangerously close to the average person not really caring that it isn’t.

  • Clear@lemmy.blahaj.zone 1 day ago

    I mean, there’s a reason the (incorrect) term being pushed for these things is AI instead of LLM: to make people believe they are somewhat aware and that nobody is truly responsible for their mistakes, rather than that they are a tool in the hands of corporations used to push agendas and limit accountability.

    • atopi@piefed.blahaj.zone 1 day ago

      the incorrect term that has been used for machine learning for decades is being pushed, even though it has been in use for decades?

      • Clear@lemmy.blahaj.zone 23 hours ago

        You’re not wrong, but in this context it has clearly been used to make people think it’s an actual intelligence, and there’s a lot of disinformation about it. Before, not many people believed that a sorting algorithm or the machine learning used in medicine was self-aware, but now many tools and user interfaces seem to push that idea.

  • jaybone@lemmy.zip 1 day ago

    10 PRINT "HELLO WORLD"
    20 GOTO 10

    OH MY GOD

    • 1984@lemmy.today 19 hours ago

      I did that when I was like 8 on my first computer and had to restart it because I didn’t know how to break the loop. :)

  • MehBlah@lemmy.world 19 hours ago

    I’ve seen this one several times and always say the “oh my god” like the butter bot.


  • TheSeveralJourneysOfReemus@lemmy.world 23 hours ago

    10 system.out.println(‘I am alive’)

    syntax error at line 10: oh my god

  • UnspecificGravity@piefed.social 1 day ago

    Teach it the answers to the Turing test and boom, “ai”.

    • merc@sh.itjust.works 21 hours ago

      Where can we find the answers to the Turing test?

      • zikzak025@lemmy.world 21 hours ago

        I dunno, let me ask ChatGPT.
