
Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

285 likes

Submitted 1 day ago by Beep@lemmus.org to technology@lemmy.world

https://lite.cnn.com/2026/02/25/tech/anthropic-safety-policy-change


Comments

  • JcbAzPx@lemmy.world 1 day ago

    When the Anthropic powered murder bot comes through your door, remember this moment. Know that there is no such thing as an ethical corporation.

  • flandish@lemmy.world 1 day ago

    here is the only line that matters: “hinder its ability to compete.”

    that means “to profit.”

    reminder: all corporations, everywhere and in every industry, care first about profit. everything else is about how it relates to or alters profit. literally every word is a goddamned lie, sold to help protect profit.

    • corsicanguppy@lemmy.ca 1 day ago

      I think Anthropic is in the “ah shit, we’re dying” startup stage. They’re not looking at profit so much as trying not to lose ALL of their shirt in the coming crash. So it pivoted and ditched its morals. Hate them for that.

      Don’t hate them for being greedy yet: they’re not there. Maybe they’ll never get there. But they’ve lived long enough to become the thing they strived to destroy, so that’s a milestone.

      • vacuumflower@lemmy.sdf.org 21 hours ago

        People are talking about AI killbots and an upcoming crash at the same time, while complaining about AI slop and vibe coding.

        Sorry, but if something is usable for making killbots, there will be no crash. AI slop proves that making slop is useful to someone. And vibe coding proves that someone gets things working in production with those tools. Saying that quality suffers is like saying cob houses aren’t comparable to brick houses, and vice versa. Both exist, and there are places where cob construction techniques are still common.

        But the most important reason is the first one: if some technique gives you a more convenient and sharper stick to kill someone from another tribe, then that technique stays on as the tribe’s cherished wisdom.

        As for LLMs consuming too many resources: you might have noticed there’s huge room for optimization. They are easy to parallelize, and we are in the market-capture stage, which means optimization is not yet a priority. When it becomes one, there may come a moment when all the arguments about operations costing more in resources than they return in profit, and about everything being funded by investors, are suddenly no longer true.

        I have been converted. Converted back, one might say; there was such a time around 2011-2014.

  • Rentlar@lemmy.ca 1 day ago

    Oops, pretending to take the moral high ground is out the window as soon as MIC dollars are at risk.

    • GreenBeard@lemmy.ca 1 day ago

      Remember kids, the term “Business Ethics” is an oxymoron. Corporations don’t have ethics, they have financial interests and PR.

  • panda_abyss@lemmy.ca 1 day ago

    The timing is certainly predominant.

    I think I’m cancelling my subscription over this…

    • frongt@lemmy.zip 1 day ago

      Prescient?

      • surewhynotlem@lemmy.world 1 day ago

        With this administration? Prepubescent.

  • anon_8675309@lemmy.world 1 day ago

    Mr Whisky wants AI to kill people.

    • vacuumflower@lemmy.sdf.org 22 hours ago

      Which will happen regardless.

      Also, where AI safeguards exist, they are usually in place because of chain of command and authorization requirements, and those mattered so much because the most likely applications of any AI during the Cold War all had a very steep damage curve.

      Small killbots don’t have such a damage curve. If they kill someone by mistake, the rest of the population learns to be careful and not draw the attention of those operating them. The same reasons that applied to nukes and radars, where you need chains of specific people with clear authorization to answer for why half the world melted, won’t force anyone to put such limits here.

  • ATS1312@lemmy.dbzer0.com 7 hours ago

    Seems they’re walking it back?

    abc7news.com/post/…/18651096/

  • Hackworth@piefed.ca 1 day ago

    For those keeping score, their “don’t be evil” phase lasted 5 years.

  • XLE@piefed.social 1 day ago

    Anthropic has described itself as the AI company with a “soul.”

    This is Silicon Valley stupidity at its finest.

    AIs do not have souls. But guess what: the “soul” file is what runs trash like OpenClaw.

    And companies definitely don’t have souls.

    Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market… it had hoped its original safety principles “would encourage other AI companies to introduce similar policies.”

    Rules for thee and never for me. (BTW, these rules sucked, and mostly didn’t address actual dangers.)

    • Passerby6497@lemmy.world 1 day ago

      it had hoped its original safety principles “would encourage other AI companies to introduce similar policies.”

      This is exactly why self-regulation is bullshit. Even if one company decides to do the right thing (and that’s kinda arguable here, as you pointed out), plenty of other companies won’t handcuff their ability to profit, no matter the ethical implications.

  • crusa187@lemmy.ml 1 day ago

    don’t be evil

  • phoenixz@lemmy.ca 1 day ago

    Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

    Anyone surprised?

    • phutatorius@lemmy.zip 1 day ago

      Amodei left OpenAI partially because of ethical concerns, so yeah, I’m slightly surprised and even a bit disappointed.

    • SuspciousCarrot78@lemmy.world 1 day ago

      Surprised and disappointed, both by them and the system (capitalism) that stops us from having nice things.

      If we ever crack AGI, it’s probably going to be because the market optimised for the better shilling of dick pills, crypto scams and spyware.

      That’s… fucking bleak, in the Hide the Pain Harold way.

  • RaoulDook@lemmy.world 1 day ago

    TLDR = money matters more than morals and safety to them

  • HubertManne@piefed.social 1 day ago

    Wow. Wow. I heard the news the other day of Hegswish wanting them to back down from their red line against allowing AI to control weapons systems or perform mass surveillance of the populace. Now I’m envisioning them courageously shouting, “We will never back down and allow such travesties!!!” Followed by Morgan Freeman’s disembodied voice saying: They did.

  • kurmudgeon@lemmy.world 1 day ago

    Meanwhile: www.commondreams.org/…/ai-nuclear-war-simulation

  • WorldsDumbestMan@lemmy.today 1 day ago

    Welp, was a good run, time to ditch Anthropic.
