
The Guardian view on granting legal rights to AI: humans should not give house-room to an ill-advised debate | Editorial

109 likes

Submitted 5 days ago by LadyButterfly@reddthat.com to technology@lemmy.world

https://www.theguardian.com/commentisfree/2026/jan/07/the-guardian-view-on-granting-legal-rights-to-ai-humans-should-not-give-house-room-to-an-ill-advised-debate


Comments

  • xenomor@lemmy.world 5 days ago

    Ooh, while we’re at it, let’s also address corporate personhood.

    • FauxLiving@lemmy.world 5 days ago

      I don’t think we have time to build a guillotine that big

      • deliriousdreams@fedia.io 5 days ago

        Not with that attitude we don't.

  • falseWhite@lemmy.world 5 days ago

    If AI is granted rights, does it mean we could punish them when they commit a crime? E.g. shut them down? If so, I’m okay with that.

    Generate CSAM? - Permanent shutdown.

    Instruct someone to kill themselves? - Permanent shutdown.

    Give someone instructions on making a bomb? - Permanent shutdown.

    But it would be better if the creators were punished for making such vile and unsafe AI.

    • architect@thelemmy.club 5 days ago

      Rights don’t mean you get punished if you do something wrong.

      They already got everyone to agree (the collective, not the individual) that property has rights and that it’s equivalent to violence to so much as burn a trash can.

      So, obviously, you wouldn’t think about committing violence against corporate property would you, citizen?

  • xthexder@l.sw0.com 4 days ago

    “A computer can never be held accountable, therefore a computer must never make a management decision.”

    – IBM Training Manual, 1979

    We’re going so backwards…

    • GreatWhite_Shark_EarthAndBeingsRightsPerson@piefed.social 7 hours ago

      BINGO!

    • EndlessNightmare@reddthat.com 4 days ago

      A computer’s inability to be held accountable is a key feature for those wishing to use AI for nefarious purposes.

      • GreatWhite_Shark_EarthAndBeingsRightsPerson@piefed.social 7 hours ago

        BINGO!

    • gandalf_der_12te@discuss.tchncs.de 4 days ago

      The thing with taking responsibility is that it isn’t actually about punishing a potential wrongdoer.

      It’s to ensure that a safe outcome is guaranteed (as far as is realistically possible). If you have a fireproof door that automatically seals itself airtight in case of a fire and stops the fire that way, that door is considered responsible too, even though it doesn’t have a single living cell in it.

  • floquant@lemmy.dbzer0.com 4 days ago

    Wait, what, who? Is someone seriously proposing giving legal rights to fucking LLMs? Is it fucking Sam Altman again?

  • gandalf_der_12te@discuss.tchncs.de 4 days ago

    By the way, granting AI personhood would not mean it gets special privileges or extra protection or anything like that.

    Companies are, legally speaking, persons too. That doesn’t mean that people recognize them as “alive and sentient”. It merely means that companies are able to own property, file lawsuits, and so on. Nothing more, nothing less.
