
Humans' behavior toward LLMs is the same as animals' with a mirror: they believe there is "another" in there. It's just their reflection

531 likes

Submitted 2 weeks ago by certified_expert@lemmy.world to showerthoughts@lemmy.world


Comments

  • minnow@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    The mirror test is frequently cited as a means of testing sentience.

    OP I think you hit the nail on the head.

    • Aerosol3215@piefed.ca ⁨2⁩ ⁨weeks⁩ ago

      Based on the fact that most people don’t see their interaction with the LLM as gazing into the mirror, am I being led to believe that most people are not sentient???

      • Zorque@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Based entirely on the opinions of people on niche social media platforms, yes.

  • schnurrito@discuss.tchncs.de ⁨2⁩ ⁨weeks⁩ ago

    Except it’s not my reflection, it’s a reflection of millions if not billions of humans.

    • Carnelian@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Except it’s not their reflection, it’s a string of phrases presented to you based partly on the commonality of similar phrases appearing next to one another in the training data, and partly on mysterious black box modifications! Fun!
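      That "commonality of similar phrases appearing next to one another" idea can be sketched as a toy next-token model. This is a deliberately tiny illustration with a made-up corpus, nothing like a real transformer:

      ```python
      import random
      from collections import Counter, defaultdict

      # Toy "training data": a handful of words
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each word follows each other word
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def next_word(prev):
          """Pick a continuation weighted by how often it co-occurred in the corpus."""
          words, counts = zip(*following[prev].items())
          return random.choices(words, weights=counts)[0]

      # "the" was followed by "cat" twice, "mat" once, "fish" once,
      # so "cat" is twice as likely to be emitted as either alternative
      print(next_word("the"))  # one of: cat, mat, fish
      ```

      Real models replace these raw counts with learned weights over whole contexts, but the principle (emit what tended to follow in the training data) is the same; the "mysterious black box modifications" are everything that happens between the counts and the output.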

    • ameancow@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I like to describe it as a “force multiplier” along the lines of a powered suit.

      You are putting in small inputs, and it's echoing out into a vast, vast virtual space, being compared and connected with countless billions of possible associations. What you get back is a kind of amplification of what you put in. If you make even remotely leading suggestions in your question or prompt, that tiny suggestion is also going to get massively boosted in the background; this is part of why some LLMs can go off the rails with some users. If you don't take care with what exactly you're putting in, you will get wildly unexpected results.

  • Horsecook@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago
    [deleted]
    • certified_expert@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I disagree with the dichotomy. I think you can both (1) understand what LLMs actually are and (2) see the value of such technology.

      In both cases: being factual (not being deceived) and not being malicious (not attempting to deceive others).

      I think a reasonable use of these tools is as a “sidekick” (you being the main character). Some tasks can be assigned to it so you save some time, but the thinking and the actual mental model of what is being done shall always be your responsibility.

      For example, LLMs are good as an interface to quickly look things up in manuals and books, clarify specific concepts, or find the proper terms for a vague idea (so that you can research the topic using the appropriate terms).

      Of course, this is just an opinion. 100% open to discussion.

      • BanMe@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I think of it like a nonhuman character, like a character in a book I’m reading. Is it real? No. Is it compelling? Yes. Do I know exactly what it’ll do next? No. Is it serving a purpose in my life? Yes.

        It effectively attends to my requests and even feelings but I do not reciprocate that. I’ve got decades of sci-fi leading me up to this point, the idea of interacting with humanoid robots or AI has been around since my childhood, but it’s never involved attending to the machine’s feelings or needs.

        We need to sort out the boundaries on this: the delusional people who are having "relationships" with AI, getting a social or other emotional fix from it. But that doesn't mean we have to categorize anyone who uses it as moronic. It's a tool.

    • Etterra@discuss.online ⁨2⁩ ⁨weeks⁩ ago

      Wait, let’s hear OP out.

    • naught101@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Marketing is a valid use for AI (because bullshit was always the word anyway)

  • truthfultemporarily@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    Just think about the fact that LLMs are basically trying to simulate Reddit posts, and then think again about using them.

  • callyral@pawb.social ⁨2⁩ ⁨weeks⁩ ago

    Related: is there a name for “question bias”?

    Like asking ChatGPT "is x good?" and it replies "Yes, x is good," but if you ask "is x bad?" it replies "Yes, x is bad, you're right."

    • TheOctonaut@mander.xyz ⁨2⁩ ⁨weeks⁩ ago

      It’s just a leading question.

      • yeahiknow3@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        It is not a leading question. The answer just happens to be meaningless.

        Asking whether something is good is the vast majority of human concern.

  • GuyIncognito@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    I checked with that other gorilla who lives in the bathroom and he says you’re wrong

    • certified_expert@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      lol, is that the same gorilla that you see in other bathrooms? Or (like me) do you meet a new gorilla every time you wash your hands?

      • GuyIncognito@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

        I think he’s the same guy. I used to try to bust him up but he just kept multiplying into more pieces and then coming back whole every time I saw a new mirror, so I eventually gave up

  • lowspeedchase@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

    This is a great one - although I never see animals worshipping the mirror.

    • Rippin_Farts_And_Or_Breaking_Hearts@lemmy.org ⁨2⁩ ⁨weeks⁩ ago

      I’ve got a duck that prefers to dance in front of a chrome bumper or glass door, where he can see his reflection, rather than go after any potential mates. Possibly he’s worshipping the mirror. Possibly he’s just really vain.

      • lowspeedchase@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        Nothing wrong with a handsome duck taking a little self affirmation time - he knows his value, he can’t look away.

      • gravitas_deficiency@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        Sounds like he’s ducking handsome

      • Whats_your_reasoning@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Your duck:

        [image]

    • Hux@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      I love the idea of a bunch of woodland creatures (completely unaware of what mirrors are) investing heavily—and aggressively—in mirrors and mirror-related technology.

      • lowspeedchase@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        Squirrels (lemmings) pooling all of their nuts at the altar, lol.

    • gravitas_deficiency@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      Or forming romantic attachments to the mirror

      • Wilco@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        Uhmm … you never had a pet bird, I’m guessing?

        Seeing a bird masturbate up against a mirror is just par for the course when you have pet birds. It’s gonna be either a mirror, a favorite toy … or you.

      • ameancow@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Animals aren’t cursed with the human ability to think our way into harmful and unproductive behavior due to conscious re-interpretation of information around us. Except for occasional zoo-animals that fall in love with inanimate objects.

      • lowspeedchase@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        Ooofff… Good call

  • mriormro@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    I, too, like pulling random shit from my ass.

    • certified_expert@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Hahah, yeah, maybe I am doing that. That’s why it is a shower thought, not a research paper proposal.

      The thought comes from my (kind of recent) study of the algebra/calculus under LLMs (at least the feedforward and backpropagation parts of them).

      The interesting part is that my ass is non-differentiable at x=0:

      lim x→0⁺ ∂ass/∂x ≠ lim x→0⁻ ∂ass/∂x
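      For anyone who wants to check the joke numerically: take f(x) = |x| as a stand-in for the non-differentiable function, and the two one-sided difference quotients at 0 really do disagree:

      ```python
      def f(x):
          # |x| standing in for the joke's non-differentiable function
          return abs(x)

      def one_sided_slope(f, x, h):
          # Difference quotient approaching from the side that the sign of h indicates
          return (f(x + h) - f(x)) / h

      h = 1e-6
      right = one_sided_slope(f, 0.0, +h)  # limit from the right: +1
      left = one_sided_slope(f, 0.0, -h)   # limit from the left: -1
      print(right, left)  # 1.0 -1.0
      assert right != left  # one-sided limits differ, so f'(0) does not exist
      ```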

  • ameancow@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Not nearly enough people understand this about our current models of AI. Even people who think they understand AI don’t understand this, usually because they have been talking to themselves a lot without realizing it.

  • LurkingLuddite@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    ELIZA effect

    • cypherpunks@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      from page 7 of Joseph Weizenbaum’s Computer Power and Human Reason: From Judgment to Calculation (1976):

      screenshot of PDF of page 7 (Introduction): …intimate thoughts; clear evidence that people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms. I knew of course that people form all sorts of emotional bonds to machines, for example, to musical instruments, motorcycles, and cars. And I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short exposures to their machines. What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. This insight led me to attach new importance to questions of the relationship between the individual and the computer, and hence to resolve to think about them.

      3. Another widespread, and to me surprising, reaction to the ELIZA program was the spread of a belief that it demonstrated a general solution to the problem of computer understanding of natural language. In my paper, I had tried to say that no general solution to that problem was possible, i.e., that language is understood only in contextual frameworks, that even these can be shared by people to only a limited extent, and that consequently even people are not embodiments of any such general solution. But these conclusions were often ignored. In any case, ELIZA was such a small and simple step. Its contribution was, if any at all, only to vividly underline what many others had long ago discovered, namely, the importance of context to language understanding. The subsequent, much more elegant, and surely more important work of Winograd in computer comprehension of English is currently being misinterpreted just as ELIZA was.

      This reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand. Surely, I thought, decisions made by the general public about emergent technologies depend much more on what that public attributes to such technologies than on what they actually are or can and cannot do. If, as appeared to be the case, the public’s attributions are wildly misconceived, then public decisions are bound to be misguided and…

      a pdf of the whole book is available here
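      For context, ELIZA’s whole trick was keyword matching plus template substitution. A minimal sketch in that spirit (the rules here are made up for illustration, not Weizenbaum’s original DOCTOR script):

      ```python
      import re

      # Hypothetical keyword -> response-template rules, ELIZA-style
      RULES = [
          (r"\bI am (.+)", "Why do you say you are {0}?"),
          (r"\bI feel (.+)", "Tell me more about feeling {0}."),
          (r"\bmy (\w+)", "Your {0} seems important to you."),
      ]

      def eliza_reply(text):
          for pattern, template in RULES:
              match = re.search(pattern, text, re.IGNORECASE)
              if match:
                  # Echo the user's own words back inside a canned template
                  return template.format(*match.groups())
          return "Please go on."  # stock reply when no keyword matches

      print(eliza_reply("I am tired of mirrors"))
      # -> Why do you say you are tired of mirrors?
      ```

      A few dozen rules along these lines were enough, in 1966, to induce the “powerful delusional thinking” the passage describes.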

  • Lost_My_Mind@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Huh…so what you’re saying is that mirrors are actually AI.

    THAT MAKES A LOT OF SENSE!!! EVERYBODY COVER YOUR MIRRORS!!!

    • XiELEd@piefed.social ⁨1⁩ ⁨week⁩ ago

      Unironically in certain cultures there is a superstition that you should cover your mirrors at night

    • Whats_your_reasoning@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Laughs in vampire

  • Sunschein@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    [image]

    • SchwertImStein@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      Annihilation?

      • Sunschein@piefed.social ⁨2⁩ ⁨weeks⁩ ago

        Yeah. Figured it was a good visual representation of seeing an AI version of ourselves in a mirror.

  • Ironfacebuster@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    My dog used to stare at me through mirrors, so what does that mean for her intelligence? Hyper intelligent. Red heelers will take over the world.

  • CIA_chatbot@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I find this kind of Anti AI Sentience bigotry horrible!

    • certified_expert@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Interesting take. Could you elaborate?

      My post comes from studying the algebra and stats that enable LLMs (well, part of it; I am not done with “attention” yet).

      • CIA_chatbot@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I was making a joke based on my username

  • Supervisor194@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    False. My reflection can’t tell me that pressing the Steam button and X will bring up the keyboard on Steam Deck’s desktop mode.

    • lennee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      pressing and holding the Steam button shows you every Steam shortcut

  • Abyssian@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Except when you leave several LLMs able to communicate with one another, they will, on their own and with no instructions, develop their own unique social norms.

    neurosciencenews.com/ai-llm-social-norms-28928/

    • certified_expert@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      This is nothing other than the reflection I am talking about. It is not a reflection of you, the person chatting with the bot, but an “average” reflection of what humanity has expressed in the data LLMs have been trained on.

      If a mirror is placed in front of another mirror, the “infinite tunnel” only exists in the mind of the observer.

      • Abyssian@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Neuroscience News isn’t a conspiracy rag. It’s an article summarizing a research paper, which they link to. So many of you don’t bother to read actual research and instead repeat whatever you’ve seen online about how things work. More parrot than the AI.

    • SaharaMaleikuhm@feddit.org ⁨2⁩ ⁨weeks⁩ ago

      No.

      • Abyssian@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        The article is summarizing a research paper, which it links to. Neuroscience News isn’t a conspiracy rag.

    • JcbAzPx@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      That’s basically a very advanced flea circus.

  • CovfefeKills@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Deep thoughts

  • lost_faith@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    And here I am practising my smile in the mirror (like that golden retriever)

  • lemmie689@lemmy.sdf.org ⁨2⁩ ⁨weeks⁩ ago

    My dog doesn’t pay any attention to mirrors, or llms.

  • woop_woop@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    If I understand your statement correctly, only the most intelligent creatures would understand that LLMs are themselves?

    • certified_expert@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      But they are a reflection of ourselves. If you look at the algebra and stats underneath, you’ll realize that they spit out whatever is in us.

  • Artisian@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Just noting that the mirror test is a bad way of studying theory of mind.

    en.wikipedia.org/wiki/Mirror_test#Criticism

    It’s interesting as a silly and absurd way humans used to demean other species. But I think it says a lot more about those who use it than the animals.

    • certified_expert@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Interesting!

  • flandish@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    The “mirror stage” by Lacan is worth looking into here, but no, I don’t think humans automatically think the LLM has a sort of reified other. As we get past an uncanny valley, though, and into generations growing up with entire personal histories in an LLM, I can absolutely see that happening.
