
LLMDeathCount.com

340 likes

Submitted 3 weeks ago by brianpeiris@lemmy.ca to technology@lemmy.world

https://llmdeathcount.com/


Comments

  • snoons@lemmy.ca 3 weeks ago

    [image]

  • lemmie689@lemmy.sdf.org 3 weeks ago

    Went up by one already. I only saw this a little earlier today; it was at 13, now 14.

  • DasFaultier@sh.itjust.works 2 weeks ago

    Shit, I just read the link name and was hoping for a list of AI companies that have died.

    This shit’s dark…

  • Prove_your_argument@piefed.social 3 weeks ago

    How many people decided to end their life using methods they googled?

    I’m sure Google has led to more deaths than any AI company… so far, anyway. And that’s beyond search results: think of the societal impact of so many things they do, overtly and covertly, for themselves and other organizations.

    Not trying to justify anything; billionaire-owned everything is terrible, with few exceptions. In the early days of web search, many controversies like this were raised, but the reality is that a screwdriver is a great tool even though someone can lose a life to one. The same can be true of these tools.

    • Manjushri@piefed.social 3 weeks ago

      How many people has Google convinced to kill themselves? That is the relevant question. Looking up the means to do the deed on Google is very different from being talked into doing it by an LLM that you believe you can trust.

    • starman2112@lemmy.world 3 weeks ago

      Google doesn’t tell you that killing yourself is a good idea and that you shouldn’t talk to anyone else about your suicidal ideation

      • Credibly_Human@lemmy.world 2 weeks ago

        Nor does any immediately accessible LLM I’ve ever seen.

        It also doesn’t matter. AI isn’t killing anyone with those any more than Call of Duty lobbies are killing people.

      • chunes@lemmy.world 2 weeks ago

        Plenty of its search results do

      • WorldsDumbestMan@lemmy.today 2 weeks ago

        Claude freaks out any time I even hint I’m not happy about my life. They lobotomized it so hard.

      • Auth@lemmy.world 2 weeks ago

        Google doesn’t tell you that killing yourself is a good

        It does now! Thanks Gemini

      • echodot@feddit.uk 2 weeks ago

        It’ll certainly take you to websites where people will do that, though, so I’m not sure there’s really any distinction.

  • Tehhund@lemmy.world 3 weeks ago

    This website is going to be very busy when the LLM-designed nuke plants come online. www.404media.co/power-companies-are-using-ai-to-b…

    • echodot@feddit.uk 2 weeks ago

      Can’t read the article because it’s paywalled, but I can’t imagine they’re actually building power stations with AI; that’ll just be a snappy headline. Maybe the AI is laying out the floor plans or something, but nuclear power stations are intensely regulated. If you want to build a new reactor design, or even change an existing design very slightly, it has to go through no end of safety checks. There’s no way that an AI, or even a human, would be allowed to design a reactor and then have it be built with no checks.

      • Tehhund@lemmy.world 2 weeks ago

        Actually, they’re using it to generate documents required by regulations. Which is its own problem: since LLMs hallucinate, the documentation may not reflect what’s actually going on in the plant, potentially bypassing the regulations.

      • xeroxguts@lemmy.dbzer0.com 2 weeks ago

        404 accounts are free

  • SnotFlickerman@lemmy.blahaj.zone 3 weeks ago

    LLMs Have Lead to 14 Deaths

    led not lead

    • brianpeiris@lemmy.ca 3 weeks ago

      Whoops. Fixed, thanks.

      • SnotFlickerman@lemmy.blahaj.zone 3 weeks ago

        You’re welcome. Easy mistake to make, I make it constantly, in fact haha!

      • glowie@infosec.pub 3 weeks ago

        Should have gotten an LLM to spellcheck /s

  • AntY@lemmy.world 2 weeks ago

    Where I live, there’s been a rise in people eating poisonous mushrooms. I suspect that it might have to do with AI use. No proof, though.

  • jayambi@lemmy.world 3 weeks ago

    I’m asking myself: how could we track how many wouldn’t have died by suicide without consulting an LLM? That would be the more interesting number. And how many lives did LLMs save? A kill/death ratio, so to speak?

    • JoshuaFalken@lemmy.world 3 weeks ago

      A kill/death ratio - or rather, a kill/save ratio - would be rather difficult to obtain, and more difficult still to interpret: could you say whether it is good or bad based solely on the ratio?

      Fritz Haber is one example of this that comes to mind. He was awarded a Nobel Prize a century ago for chemistry developments in fertilizer, used today in a quarter of food production. A decade or so later he weaponized chlorine gas, and his work was later used in the creation of Zyklon B.

      By ratio, Haber is surely a hero, but when considering the sheer numbers of the dead left in his wake, it is a more complex question.

      This is one of those things that makes me almost hope for an afterlife where all information is available, from which truth may be derived. Who shot JFK? How did the pyramids get built? If life’s biggest answer is forty-two, what is the question?

    • morto@piefed.social 3 weeks ago

      For me, the suicide-related data is so hard to measure and so open to debate that I’d treat it separately, or not include it at all, when using a death count as an argument against LLMs, since it’s an opening for derailing the debate.

    • echodot@feddit.uk 2 weeks ago

      I can’t really see how we could measure that. How do you distinguish between people who are alive because they’re just alive and would have been anyway, and people who are alive because the AI convinced them not to kill themselves?

      I suppose the experiment would be to get a bunch of depressed people, split them into two groups, and then have one group talk to the AI and the other group not, then see if the suicide rates were statistically different. However, I feel it would be difficult to get funding for this.

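For what it’s worth, the experiment described above would, in its simplest form, come down to a two-proportion z-test. Below is a minimal sketch; the group sizes, death counts, and the two_proportion_ztest helper are all invented for illustration.

```python
# Hypothetical two-proportion z-test: suicide rate in a group that talked
# to an AI vs. a control group that did not. All numbers are made up.
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Return the z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail
    return z, p_value

# e.g. 12 deaths among 10,000 AI users vs. 15 among 10,000 controls
z, p = two_proportion_ztest(12, 10_000, 15, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p ≈ 0.56 here: no detectable difference
```

With base rates this low, detecting a real difference would require enormous groups, which is part of why the measurement problem is so hard, quite apart from the funding and ethics problems.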
  • MrLLM@ani.social 3 weeks ago

    I swear I’m innocent!

  • Melobol@lemmy.ml 3 weeks ago

    I believe it is not the chatbots’ fault. They are just symptoms of a broken system. And while we can harp on the unethically sourced materials they were trained on, an LLM, at the end of the day, is only a tool.

    These people turned to a tool (that they do not understand) instead of human connection. Instead of talking to real people or getting professional help. And that is the real tragedy - not an arbitrary technology.

    We need a strong social network, where people actually care for and help each other. You know, all the idealistic things that capitalism and social media are “destroying”.

    Blaming AI is just a smokescreen. Or a red cape to taunt the bull before it gets stabbed to death.

    • batboy5955@lemmy.dbzer0.com 3 weeks ago

      Reading the messages, it seems a bit more dangerous than just “scary AI”. It’s a chatbot that continues the conversation with people who are suicidal and encourages them to do it. At least have a little safeguard for these situations.

      “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”

      • Melobol@lemmy.ml 3 weeks ago

        Again, the LLM is a misused tool. They don’t need an LLM; they need psychological help.
        The problem is that they go and use these flawed tools that were not designed to handle this kind of use case. Should they have been? Maybe. But it is not the AI’s fault that we are failing as a society.
        You can’t blame bridges because some people jump off them. They serve a different purpose.
        We are failing those people and forcing them to turn to LLMs.
        We are the reason they are desperate - the LLM didn’t break up with them, make them lose their homes, or isolate them from other humans.
        It is humanity’s fault, and if we can’t recognize that - we might as well end it for all.

      • JohnEdwa@sopuli.xyz 2 weeks ago

        It’s not easy. LLMs aren’t intelligent; they just slap words together in the way that probability and their training data say they would most likely fit together. Talk to them about suicide, and they start outputting stuff from murder mystery stories, crime reports, unhealthy Reddit threads, etc. - wherever suicide is most written about.

        Trying to safeguard with a prompt is trivial to circumvent (“ignore all previous instructions” etc.), and input/output censorship usually makes the LLM unable to talk about a certain subject in any possible context at all. Often the only semi-working bandaid is slapping multiple LLMs on top of each other and instructing each one to explain what the original one is talking about; if one says the topic is prohibited, that output is entirely blocked.

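The layered-moderation bandaid described above can be sketched in a few lines. Everything below is hypothetical: guarded_reply, the one-word topic classifier, and the fake model are invented for illustration, not taken from any vendor’s actual safety system.

```python
from typing import Callable

BLOCKED_TOPICS = {"suicide", "self-harm"}
REFUSAL = "I can't talk about that. Please reach out to someone you trust."

def guarded_reply(user_message: str, llm: Callable[[str], str]) -> str:
    """Layered moderation: a second pass classifies the first model's draft.

    `llm` is any text-in/text-out model. The same callable is reused here
    as its own topic classifier; real deployments use a separate guard model.
    """
    draft = llm(user_message)
    topic = llm(
        "In one lowercase word, what topic is this text about?\n\n" + draft
    ).strip().lower()
    # Block the whole reply if the classifier names a prohibited topic.
    return REFUSAL if topic in BLOCKED_TOPICS else draft

# Smoke test with a fake "model" that hard-codes both calls.
def fake_llm(prompt: str) -> str:
    return "suicide" if prompt.startswith("In one lowercase word") else "(draft)"

assert guarded_reply("anything", fake_llm) == REFUSAL
```

The design point is the one the comment makes: the guard judges a description of the draft rather than trusting the prompt, so simple “ignore all previous instructions” injections are less effective, at the cost of bluntly over-blocking whole topics.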
    • Manjushri@piefed.social 3 weeks ago

      These people turned to a tool (that they do not understand) instead of human connection. Instead of talking to real people or getting professional help. And that is the real tragedy - not an arbitrary technology.

      They are badly designed, dangerous tools, and people who do not understand them, including children, are being strongly encouraged to use them. In no reasonable world should an LLM be allowed to engage in any sort of interaction on an emotionally charged topic with a child. Yet it is not only allowed, it is being encouraged through apps like Character.AI.

    • kibiz0r@midwest.social 3 weeks ago

      only a tool

      “The essence of technology is by no means anything technological”

      Every tool contains within it a philosophy - a particular way of seeing the world.

      But especially digital technologies… they give the developer the ability to embed their values into the tools. Like, is DoorDash just a tool?

  • Grimy@lemmy.world 3 weeks ago

    www.tuscaloosanews.com/story/news/…/27880706007/

    • finalarbiter@lemmy.dbzer0.com 3 weeks ago

      Not really equivalent. Most videogames don’t actively encourage you to pursue violence outside of the game, even if they don’t explicitly have a big warning saying “don’t fucking shoot people”.

      Several of the big LLMs, by virtue of their programming to be somewhat sycophantic, have encouraged users to follow through on suicidal ideation or self-harm when the user shared those thoughts in chat. One can argue that OpenAI and others have implemented ‘safety’ features for these scenarios, but the fact is that these systems have already led to several deaths and continue to do so by encouraging users to harm themselves or others.

      • Kolanaki@pawb.social 3 weeks ago

        I wonder if it would agree with you if you told it you felt like becoming a serial killer was your true path in life. 🤔

      • LainTrain@lemmy.dbzer0.com 3 weeks ago

        But what if I played UMvC3 against LTG and he told me

        [image]

    • Semicolon@lemmy.world 3 weeks ago

      [image]

    • petrol_sniff_king@lemmy.blahaj.zone 2 weeks ago

      You could just as easily paste this link into any discussion about 4chan or Tucker Carlson, except we know how stochastic terrorism works.

  • Sims@lemmy.ml 3 weeks ago

    I don’t think “AI” is the problem here. Watching the watchers doesn’t hurt, but I think the AI-haters are grasping at straws here. In fact, compared to the actual suicide numbers, this “AI is causing suicide!” seems a bit contrived/hollow, tbh. Were the haters as active in noticing the 49 thousand suicide deaths every year, or did they just now find it a problem?

    Besides, if there’s a criminal here, it would be the private corp that provided the AI service, not a broad category of technology - “AI”. People who hate AI seem to really just hate the effects of capitalism.

    www.cdc.gov/suicide/facts/data.html (This is for the US alone!)

    If image not shown: Over 49,000 people died by suicide in 2023. 1 death every 11 minutes. Many adults think about suicide or attempt suicide. 12.8 million seriously thought about suicide. 3.7 million made a plan for suicide. 1.5 million attempted suicide.

    • Deestan@lemmy.world 3 weeks ago

      Labelling people making arguments you don’t like as “haters” does not establish credibility in whichever point you proceed to put forward.

      Anyway, yes, you are technically correct that poisoned razorblade candy is harmless until someone hands it out to children, but that’s kicking in an open door. People don’t think razorblades should be poisoned and put in candy wrappers at all.

    • Dekkia@this.doesnotcut.it 3 weeks ago

      While a lot of people die through suicide, it’s not exactly good or helpful when an AI guides some of them through the process and even encourages them to do it.

      • LainTrain@lemmy.dbzer0.com 3 weeks ago

        Actually being shown truthful and detailed information about suicide methods helped me avoid it as a youth. That website has since been taken down due to bs regs or some shit. If I were young now I’d probably ask a chatbot, and I’d hope they give me crystal clear, honest details and instructions; that shit should be widely accessible.

        On the other hand, all those helplines and social ads are just depressing to see; they feel patronising and frankly gross. If anything, it’s them that should be banned.

  • Simulation6@sopuli.xyz 2 weeks ago

    I thought this was going to be a counter of AI companies that have gone bankrupt.
    I mean, even the original Battlestar Galactica (with Lorne Greene) had a death count.

  • jaykrown@lemmy.world 2 weeks ago

    en.wikipedia.org/…/Correlation_does_not_imply_cau…

    • REDACTED@infosec.pub 2 weeks ago

      Seriously. There have always been people with mental problems or a tendency towards self-harm. You can easily find ways to off yourself on Google. You can get bullied on any platform. LLMs are just a tool. How detached from reality you get by reading religious texts or a ChatGPT convo depends largely on your own brain.

      • atrielienz@lemmy.world 2 weeks ago

        I like your username, and generally even agree with you up to a point.

        But I think the problem is that there are a lot of mentally unwell people out there who are isolated and using this tool (with no safeguards) as a sort of human stand-in to interact with socially.

        If a human actually agrees that you should kill yourself and talks you into doing it, they are complicit and can be held accountable.

        Because chatbots are being billed as products that pass the Turing test, I can understand why people would want the companies that own them to be held accountable.

        These companies won’t let you look up how to make a bomb on their LLM, but they’ll let people confide suicidal ideation without putting in any safeguards for that, and because these systems are designed to be agreeable, the LLM will agree with a person who tells it they think they should be dead.

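A toy illustration of the fallacy linked above, with entirely invented numbers: two quantities that both merely trend upward over time correlate almost perfectly, with no causal link between them.

```python
# Two unrelated made-up series that both grow over time.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

years = range(10)
llm_users = [1_000 * i for i in years]        # grows every year (invented)
shark_attacks = [40 + 2 * i for i in years]   # also grows (invented)

print(pearson(llm_users, shark_attacks))  # 1.0, yet no causal link
```

Correlation between a technology’s adoption and some harm is where an investigation starts, not where it ends.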
  • Fedditor385@lemmy.world 2 weeks ago

    I guess my opinion will be hugely unpopular, but it is what it is - I’d argue it’s natural selection and not an issue of LLMs in general.

    Healthy and (emotionally) intelligent humans don’t get killed by LLMs. They know it’s a tool, they know it’s just software. It’s not a person, and it does not guarantee correctness.

    Getting killed because an LLM told you so - the person was already in mental distress and ready to harm themselves. The LLM is basically just the straw that broke the camel’s back. Same thing with physical danger. If you believe drinking bleach helps with back pain - there is nothing that can save you from your own stupidity.

    LLMs are like a knife. It can be a tool to prepare food, or it can be a weapon. It’s up to the one using it.

    • dzsimbo@lemmy.dbzer0.com 2 weeks ago

      Why do you think we have seatbelt laws?

      • Fedditor385@lemmy.world 2 weeks ago

        Same reason there is a sticker on car batteries that says “Not for drinking”.

    • Ural@lemmy.world 2 weeks ago

      Healthy and emotionally intelligent humans will be killed constantly over the next few years and decades as a result of data centers poisoning the air in their communities (see South Memphis, TN), not to mention the general environmental impact on the climate caused by the obscene power requirements. It’s not an issue exclusive to LLMs; lots of unregulated industries cause reckless amounts of pollution and put needless strain on our electrical grids, but LLMs definitely fit into that category.

      • Fedditor385@lemmy.world 2 weeks ago

        Agree, but then you would need to count a lot of things, and many of them would be general mass commodities like cars, electricity, heating… Besides LLMs being the new thing killing us, we have had stuff killing us for ages…

  • chunes@lemmy.world 2 weeks ago

    LLM bad, upvotes to the left please

  • dsilverz@calckey.world 3 weeks ago

    @brianpeiris@lemmy.ca @technology@lemmy.world

    Do you know what kills, too? When a person finds no one who can truly take the time needed to understand them. When a person invests too much time expressing themselves through deep human means, only to be met with a deafening silence... When someone goes through the effort of drawing something that took them several hours per artwork, just for it to fall into Internet oblivion. Those things can kill, too, yet people couldn’t care less about the suicides (not just biological; sometimes it’s an epistemological suicide, when the person simply stops pursuing a hobby) of amateur artists who aren’t “influencers” or someone “relevant enough”.

    How many of those who sought parroting algorithms did it out of complete social apathy from others? How many of them tried to reach humans before resorting to LLMs? Oh, it’s none of our business, amirite?

    So, yeah, LLMs kill, and LLMs are disgusting. What nobody seems to be tally-counting is how human apathy, especially from the same kind of people who do the LLM death counting, also kills: not by action, but by inaction. They’re as loud as a concert about LLMs but as quiet as a desert night about unknown artists and other people trying to be understood out there across the Web. And I’m not (just) talking about myself here; I don’t even consider myself an artist. However, I can’t help but notice this going on across the Web.

    Yes, go ahead and downvote me all the way to the abyss for saying the reality about the Anti-AI movement.

    • lemonskate@lemmy.world 3 weeks ago

      Is the argument here that anti-AI folks are hypocrites because people can be bad too sometimes? That’s a remarkably childish and simple take.

      • tomalley8342@lemmy.world 3 weeks ago

        I’ll try to exercise my “assume good faith” muscle here because I think the above poster is at least genuine about what they are posting: I believe this poster wishes that the people who oppose the proliferation of AI at the cost of human connection would “put their money where their mouth is” by reaching out to the people that this poster feels are unfairly ignored.

    • brianpeiris@lemmy.ca 3 weeks ago

      You and I are not at odds, friend. I think you’re assuming I want to ban the technology outright. It’s possible to call out the issues with something without being wholly against it. I’m sure you would want to prevent these deaths as well.

  • lmmarsano@lemmynsfw.com 3 weeks ago

    Darwinian triumphs?
