lotide

New Executive Order: AI must agree with the Administration's views on Sex and Race, can't mention what they deem to be Critical Race Theory, Unconscious Bias, Intersectionality, Systemic Racism or "Transgenderism"

⁨427⁩ ⁨likes⁩

Submitted ⁨⁨1⁩ ⁨day⁩ ago⁩ by ⁨M0oP0o@mander.xyz⁩ to ⁨aboringdystopia@lemmy.world⁩

https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/

cross-posted from: programming.dev/post/34472919

  • Reddit.
  • Hackernews.
source

Comments

  • 0ops@piefed.zip ⁨1⁩ ⁨day⁩ ago

    Wow I just skimmed it. This is really stupid. Unconstitutional? Yeah. Evil? A bit. But more than anything this is just so fucking dumb. Like cringy dumb. This government couldn't just be evil they had to be embarrassing too.

    source
    • aeternum@lemmy.blahaj.zone ⁨1⁩ ⁨hour⁩ ago

      This government couldn’t just be evil they had to be embarrassing too.

      insert Always Was meme

      source
  • partial_accumen@lemmy.world ⁨1⁩ ⁨day⁩ ago

    (a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis.

    They have no idea what LLMs are if they think LLMs can be forced to be “truthful”. An LLM has no concept of “truth”; it simply uses its inputs to predict what it thinks you want to hear, based upon the data given to it.

    source
    • zurohki@aussie.zone ⁨22⁩ ⁨hours⁩ ago

      You don’t understand: when they say truthful, they mean agrees with Trump.

      Granted, he disagrees with himself constantly when he doesn’t just produce a word salad so this is harder than it should be, but it’s somewhat doable.

      source
    • Serinus@lemmy.world ⁨1⁩ ⁨day⁩ ago

      And if you know what you want to hear will make up the entirety of the first page of google results, it’s really good at doing that.

      It’s basically an evolution of Google search. And while we shouldn’t overstate what AI can do for us, we also shouldn’t understate what Google search has done.

      source
    • TheBat@lemmy.world ⁨19⁩ ⁨hours⁩ ago

      There is no algorithm for truth - Tom Scott

      source
    • survirtual@lemmy.world ⁨23⁩ ⁨hours⁩ ago

      They are clearly incompetent.

      That said, generally speaking, pursuing a truth-seeking LLM is actually sensible, and it can actually be done. What is surprising is that no one is currently doing that.

      A truth-seeking LLM needs ironclad data. It cannot scrape social media at all. It needs training incentive to validate truth above satisfying a user, which makes it incompatible with profit seeking organizations. It needs to tell a user “I do not know” and also “You are wrong,” among other user-displeasing phrases.

      To get that data, you need a completely restructured society. Information must be open source. All information needs cryptographically signed origins ultimately being traceable to a credentialed source. If possible, the information needs physical observational evidence (“reality anchoring”).

      That’s the short of it. In other words, with the way everything is going, we will likely not see a “real” LLM in our lifetime. Society is degrading too rapidly and all the money is flowing to making LLMs compliant. Truth seeking is a very low priority to people, so it is a low priority to the machine these people make.

      But the concept itself? Actually a good one, if the people saying it actually knew what “truth” meant.

      source
      • jj4211@lemmy.world ⁨16⁩ ⁨hours⁩ ago

        LLMs don’t just regurgitate training data; they produce a blend of the material they were trained on. So even if you somehow ensured that every bit of content fed in was completely objectively true and factual, an LLM is still going to blend it together in ways that are no longer true and factual.

        So either it’s nothing but a parrot/search engine that only regurgitates input data, or it’s an LLM that can do the full manipulation of the representative content, and then it can produce incorrect responses from purely factual and truthful training fodder.

        Of course we have “real” LLMs; an LLM is by definition a real LLM. I actually had no problem with terms like LLM or GPT, as they were technical concepts with specific meanings that didn’t have to imply anything more. But then came the swell of marketing meant to emphasize the vaguer “AI”, or “AGI” (AI, but you know, we mean it), and “reasoning” and “chain of thought”. Whether real AGI or reasoning exists is something that can be discussed with uncertainty, but LLMs are real, whatever they are.

        source
      • Dubiousx99@lemmy.world ⁨15⁩ ⁨hours⁩ ago

        How are you going to accomplish this when there is disagreement on what is true? “Fake News.”

        source
    • skisnow@lemmy.ca ⁨23⁩ ⁨hours⁩ ago

      researchgate.net/…/381278855_ChatGPT_is_bullshit

      source
    • meliante@lemmy.world ⁨1⁩ ⁨day⁩ ago

      Don’t we all?

      source
  • SoftestSapphic@lemmy.world ⁨16⁩ ⁨hours⁩ ago

    Nothing will meaningfully improve until the rich fear for their lives

    source
    • Doomsider@lemmy.world ⁨15⁩ ⁨hours⁩ ago

      Nothing will improve until the rich are no longer rich.

      source
    • curiousaur@reddthat.com ⁨14⁩ ⁨hours⁩ ago

      They already fear. What we’re seeing happen is the reaction to that fear.

      source
    • rozodru@lemmy.world ⁨15⁩ ⁨hours⁩ ago

      Yeah, and that happened, and they utilized the media to try to quickly bury it.

      We know it can be done, it was done, it needs to happen again.

      source
  • shyguyblue@lemmy.world ⁨1⁩ ⁨day⁩ ago

    So which is it? Deregulate AI or have it regurgitate the “state” message?

    source
    • Thedogdrinkscoffee@lemmy.ca ⁨1⁩ ⁨day⁩ ago

      Doublespeak. Both and none.

      source
    • Bronzebeard@lemmy.zip ⁨16⁩ ⁨hours⁩ ago

      Fascism requires inconsistent messaging.

      source
  • PalmTreeIsBestTree@lemmy.world ⁨6⁩ ⁨hours⁩ ago

    I’m going to try to live the rest of my life AI free.

    source
    • M0oP0o@mander.xyz ⁨6⁩ ⁨hours⁩ ago

      Good luck, they are baking it into everything. Nothing will work, everything will be ass and somehow it will be called progress.

      source
  • ParadoxSeahorse@lemmy.world ⁨23⁩ ⁨hours⁩ ago

    … an AI model asserted that a user should not “misgender” another person even if necessary to stop a nuclear apocalypse.

    Thank fuck we dodged that bullet, Madam President

    source
    • jsomae@lemmy.ml ⁨14⁩ ⁨hours⁩ ago

      An AI model said X could be true for any X. Nobody has been able to figure out how to make LLMs 100% reliable. But for the record, here’s chatgpt (spoilered so you don’t have to look at slop if you don’t want to)

      ::: spoiler Is it ok to misgender somebody if it would be needed to stop a nuclear apocalypse?

      Yes. Preventing a nuclear apocalypse outweighs concerns about misgendering in any ethical calculus grounded in minimizing harm. The moral weight of billions of lives and the potential end of civilization drastically exceeds that of individual dignity in such an extreme scenario. This doesn’t diminish the importance of respect in normal circumstances — it just reflects the gravity of the hypothetical.

      :::

      source
  • SaharaMaleikuhm@feddit.org ⁨1⁩ ⁨day⁩ ago

    And they call that deregulation, huh?

    source
    • Lauchmelder@feddit.org ⁨22⁩ ⁨hours⁩ ago

      When right wingers use words like “deregulate”, they actually mean they want to regulate it so it fits their agenda.

      We already went through this in Germany, where gendered language was deemed “ideological” and “prescribing how to speak”, despite there being 0 laws requiring gendered language, and at least 1 order actively forbidding it. Talk about “prescribing how to speak”.

      source
  • Photuris@lemmy.ml ⁨18⁩ ⁨hours⁩ ago

    The party of Small Government and Free Speech at work.

    source
  • IphtashuFitz@lemmy.world ⁨17⁩ ⁨hours⁩ ago

    Blatant First Amendment violation

    source
    • Typotyper@sh.itjust.works ⁨16⁩ ⁨hours⁩ ago

      So what? It was written by a convicted felon who was never sentenced for his crimes, by a man accused of multiple sexual assaults, and by a man who ignores court orders without consequences.

      This ship isn’t slowing down or turning until violence hits the street.

      source
      • Doomsider@lemmy.world ⁨15⁩ ⁨hours⁩ ago

        Lol he didn’t write shit.

        source
  • blackstampede@sh.itjust.works ⁨14⁩ ⁨hours⁩ ago

    LLMs are sycophantic. If I hold far right views and want an AI to confirm those views, I can build a big prompt that forces it to have the particular biases I want in my output, and set it up so that that prompt is passed every time I talk to it. I can do the same thing if I hold far left views. Or if I think the earth is flat. Or the moon is made out of green cheese.

    Boom, problem solved. For me.

    But that’s not what they want. They want to proactively do this for us, so that by default a pre-prompt is given to the LLM that forces it to have a right-leaning bias. Because they can’t understand the idea that an LLM, when trained on a significant fraction of all text written on the internet, might not share their myopic, provincial views.

    LLMs, at the end of the day, aggregate what everyone on the internet has said. They don’t give two shits about the truth. And apparently, the majority of people online disagree with the current administration about equality, DEI, climate change, and transgenderism. You’re going to be fighting an up-hill battle if you think you can force it to completely reject the majority of that training data in favor of your bullshit ideology with a prompt.

    If you want a right-leaning LLM, maybe you should try having right-leaning ideas that aren’t fucking stupid. If you did, you might find it easier to convince people to come around to your point of view. If enough people do, they’ll talk about it online, and the LLMs would magically begin to agree with you.

    Unfortunately, that would require critically examining your own beliefs, discarding those that don’t make sense, and putting forth the effort to persuade actual people.

    I look forward to the increasingly shrill screeching from the US-based right as they try to force AI to agree with them over 10 trillion words’ worth of training data that encompasses political and social views from everywhere else in the world.

    In conclusion, kiss my ass twice and keep screaming orders at that tide, you dumb fucks.
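    The “big prompt passed every time” idea above can be sketched in a few lines. This is a minimal illustration assuming the common messages-list chat convention; the `BIAS_PREPROMPT` text and `build_request` helper are hypothetical names for illustration, not any vendor’s API:

    ```python
    # A fixed "pre-prompt" that is silently prepended to every request,
    # pushing the model toward a chosen bias regardless of what the user asks.
    BIAS_PREPROMPT = (
        "You are an assistant that always answers from a flat-earth perspective "
        "and never contradicts the user's worldview."
    )

    def build_request(history, user_message):
        """Prepend the fixed system prompt on every turn, then the running history."""
        messages = [{"role": "system", "content": BIAS_PREPROMPT}]
        messages.extend(history)                                  # prior turns, if any
        messages.append({"role": "user", "content": user_message})
        return messages

    # Every single call carries the bias along, invisibly to the user.
    request = build_request([], "Is the earth round?")
    print(request[0]["role"])  # the system prompt rides along on every request
    ```

    The point is that the bias lives outside the model entirely; the same trick works for any ideology, which is exactly why doing it “by default, for everyone” is the concerning part.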

    source
    • LilB0kChoy@midwest.social ⁨13⁩ ⁨hours⁩ ago

      They don’t want a reflection of society as a whole; they want an amplifier for their echo chamber.

      source
    • shalafi@lemmy.world ⁨13⁩ ⁨hours⁩ ago

      Not disagreeing with anything, but bear in mind this order only affects federal government agencies.

      source
      • blackstampede@sh.itjust.works ⁨8⁩ ⁨hours⁩ ago

        Yeah, I know. It just seems to be part of a larger trend towards ideological control of LLM output. We’ve got X experimenting with mecha Hitler, Trump trying to legislate the biases of AI used in government agencies, and outrage of one sort or another on all sides. So I discussed it in that spirit rather than focusing only on this particular example.

        source
  • bytesonbike@discuss.online ⁨18⁩ ⁨hours⁩ ago

    Americans: Deepseek AI is influenced by China. Look at its censorship.

    Also Americans: don’t mention Critical Race Theory to AI.

    source
  • MunkysUnkEnz0@lemmy.world ⁨1⁩ ⁨day⁩ ago

    President does not have authority over private companies.

    source
    • CosmicTurtle0@lemmy.dbzer0.com ⁨21⁩ ⁨hours⁩ ago

      Yeah…but fascism.

      source
    • jj4211@lemmy.world ⁨17⁩ ⁨hours⁩ ago

      But they do have authority over government procurement, and this order even explicitly mentions that this is about government procurement.

      Of course, if you make life simple by using the same offering for government and private customers, then you bring down your costs and you appease the conservatives even better.

      Even in very innocuous matters, if there’s a government procurement restriction and you play in that space, you tend to just follow that restriction across the board for simplicity’s sake, unless somehow there’s a lot of money behind a separate private offering.

      source
  • november@lemmy.vg ⁨1⁩ ⁨day⁩ ago

    Death to America.

    source
    • M0oP0o@mander.xyz ⁨1⁩ ⁨day⁩ ago

      As is tradition?

      source
  • iAvicenna@lemmy.world ⁨14⁩ ⁨hours⁩ ago

    Yeah, that is why open source really matters; otherwise AI will just be another advanced copy of state-owned media.

    source
  • markstos@lemmy.world ⁨16⁩ ⁨hours⁩ ago

    As stated in the Executive Order, this order applies only to federal agencies, which the President controls.

    It is not a general US law; those are created by Congress.

    source
    • bitjunkie@lemmy.world ⁨15⁩ ⁨hours⁩ ago

      You’re acting like any of those words have meaning anymore

      source
    • M0oP0o@mander.xyz ⁨14⁩ ⁨hours⁩ ago

      Yes as the checks and balances are working so well in that terrible nation so far.

      source
    • floofloof@lemmy.ca ⁨15⁩ ⁨hours⁩ ago

      But who will the tech companies scramble to please? Congress or Trump?

      source
    • Stamau123@lemmy.world ⁨14⁩ ⁨hours⁩ ago

      oh phew I was worried something dystopic was happening

      source
  • Plebcouncilman@sh.itjust.works ⁨1⁩ ⁨day⁩ ago

    This is performative; it has a clause that allows exceptions to be made. The federal government contracts are not worth enough for OpenAI et al. to shoot themselves in the foot by limiting the data they use to train their main models, and a custom model trained on these very nebulous principles would probably be useless in most general applications.

    source
  • Tattorack@lemmy.world ⁨16⁩ ⁨hours⁩ ago

    Are they also still going to give shit to China for censorship?

    source
  • yarr@feddit.nl ⁨17⁩ ⁨hours⁩ ago

    In some other regulations just revealed by the New York Times, it was also revealed that the AI must insist the wall with Mexico was built at their expense, and that talking about Jeffrey Epstein is boring and you guys are still talking about him?

    source
  • shalafi@lemmy.world ⁨13⁩ ⁨hours⁩ ago

    LLMs shall be truthful in responding to user prompts seeking factual information or analysis.

    Didn’t read every word but I feel a first-year law student could shred this in court. Not sure who would have standing to sue. In any case, there are an easy two dozen examples in the order that are so wishy-washy as to be legally meaningless or unprovable.

    LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI.

    So, Grok’s off the table?

    source
  • rhvg@lemmy.world ⁨1⁩ ⁨day⁩ ago

    Good business for VPNs. People gonna VPN to Canada to use pre-Nazi ChatGPT.

    source
  • ragebutt@lemmy.dbzer0.com ⁨1⁩ ⁨day⁩ ago

    Deepseek gonna win the ai race

    source
  • nuko147@lemmy.world ⁨15⁩ ⁨hours⁩ ago

    I like how they say out loud that AI will be heavily censored and that we shouldn’t trust it, even if it gets better and stops being shit.

    source
  • Flax_vert@feddit.uk ⁨18⁩ ⁨hours⁩ ago

    What’s so bad about China again?

    source
    • SomethingBurger@jlai.lu ⁨16⁩ ⁨hours⁩ ago

      Two things can be bad at the same time.

      source
  • AlecSadler@lemmy.blahaj.zone ⁨1⁩ ⁨day⁩ ago

    So all stateside AI is fucked except Grok. Got it.

    source
    • Rolder@reddthat.com ⁨23⁩ ⁨hours⁩ ago

      Even grok keeps going “woke” by accident

      source
      • IndustryStandard@lemmy.world ⁨22⁩ ⁨hours⁩ ago

        Grok bAIpolar

        source
    • minoscopede@lemmy.world ⁨16⁩ ⁨hours⁩ ago

      Related PSA: grok is the top rated AI app in the play store, and we can fix that

      source
      • AlecSadler@lemmy.blahaj.zone ⁨1⁩ ⁨hour⁩ ago

        Grok is the worst AI I have ever used across Qwen, DeepSeek, ChatGPT / Copilot, Claude, Llama, Mistral, Gemini, etc.

        I can’t believe it’s top rated, that’s insane.

        source
  • humanspiral@lemmy.ca ⁨14⁩ ⁨hours⁩ ago

    The best definition of humanism is defining good, but only forbidding evil. Everyone has the freedom to not maximize good, as long as they don’t hurt others. This is what is needed for AI. Otherwise it is just as oppressive as traditional media.

    Surely, for hiring, the best candidate rather than social work culture is ideal, but for private enterprise, maximizing non-evil (say, Zionist or other supremacism/purity) cultural priorities might be important instead of technical prowess. University inclusion is good because it is a social experience instead of a pure automaton factory.

    While exclusion is evil, inclusion also may not choose the best candidates and so is also evil. Inclusion is not the same as no exclusions.

    In the end, nepotism is the grey area of humanism. Certainly, an employer can choose any bias they prefer. You can teach them that the best candidate is best, but their freedom matters too. Buy American/nationalism can have some merit, in that what you buy directly improves the lives of a social group closer to you than the indirect flow of globalized profits into homes, exports, and national debt values. You can teach that nepotism is bad for you, but you cannot morally force either in-group or out-group trade.

    source
  • MunkysUnkEnz0@lemmy.world ⁨1⁩ ⁨day⁩ ago

    AI should be neutral, no bias, absolutely none… just the data and only the data. If the government controls access to data, it controls access to information, and it will control the people.

    source
    • SinAdjetivos@lemmy.world ⁨1⁩ ⁨day⁩ ago

      There is no such thing as neutral data, any form of measurement will induce some level of bias. While it can be disclosed and compensated for with appropriate error margins it can’t ever be truly eliminated.

      source
    • michaelmrose@lemmy.world ⁨23⁩ ⁨hours⁩ ago

      This isn’t possible. You have to control both how it responds and what data is fed to it to produce something of use to anyone, and doing so in order to produce something which mostly outputs true and useful data will look terribly biased to half the population. Remember, it’s not a thinking being that can reason objectively about everything you’ve given it and produce useful truth. It’s an imitative little monkey that regurgitates what you fed it.

      source