
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory

612 likes

Submitted 1 day ago by sqgl@sh.itjust.works to technology@lemmy.world

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content


Comments

  • wasabi@lemmy.world 1 day ago
    [deleted]
    • greenbit@lemmy.zip 1 day ago

      The fascist social media influencers are already pushing AI-generated bodycam and surveillance videos to stoke xenophobia, etc. A large enough mass of the population no longer knows what’s real, and that’s the goal.

      • wasabi@lemmy.world 1 day ago
        [deleted]
  • jordanlund@lemmy.world 1 day ago

    I wish they had broken it out by AI model. The article states:

    “Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”

    But I don’t see that anywhere in the linked PDF of the “full results”.

    This sort of study should also be re-done from time to time to track AI version numbers.

    • Rothe@piefed.social 1 day ago

      It doesn’t really matter; “AI” is being asked to do a task it was never meant to do. It isn’t good at it, and it will never be good at it.

      • snooggums@piefed.world 1 day ago

        Using an LLM to return accurate information is like using a shoe to hammer a nail.

      • Cocodapuf@lemmy.world 18 hours ago

        Wow, way to completely ignore the content of the comment you’re replying to. Clearly, some are better than others… so, how do the others perform? It’s worth knowing before we make assertions.

        The excerpt they quoted said:

        “Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”

        So that implies “the other assistants” performed more than twice as well, presumably meaning significant issues in less than 38% of responses (still not great, but better). But “more than double the other assistants” is ambiguous: is that double the rate of one of the others, or double the average of the others? If it’s an average, some models probably performed better while others performed worse.

        This was the point: what was reported was insufficient information.
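
        As a rough sanity check of that reading (illustration only; the study’s per-assistant numbers aren’t in the excerpt):

        ```python
        # Bound implied by the quote, if "more than double" is a simple ratio.
        gemini_rate = 0.76  # significant issues in Gemini responses

        implied_ceiling = gemini_rate / 2
        print(f"other assistants: under {implied_ceiling:.0%} significant issues")
        # -> under 38%. Whether that ceiling applies to each assistant or only
        # to their average changes how much individual models could vary.
        ```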

    • nick@campfyre.nickwebster.dev 15 hours ago

      And also which version of the models. Gemini 2.5 Flash is a completely different experience to 2.5 Pro.

  • SaraTonin@lemmy.world 1 day ago

    There are a few replies talking about humans misrepresenting the news. This is true, but part of the problem here is that most people understand the concept of bias, even if only to the extent of “my people neutral, your people biased”. That’s less true for LLMs. There’s research showing that because LLMs present information authoritatively, people not only tend to trust them, they’re actually less likely to check the sources an LLM provides than they would be with other ways of being presented information.

    And it’s not just news. I’ve seen people seriously argue that fringe pseudo-science is correct because they fed a very leading prompt into a chatbot and got exactly the answer they were looking for.

    • Axolotl_cpp@feddit.it 11 hours ago

      I hear a lot of people say “let’s ask ChatGPT” like the AI is a god and knows everything 🙏. That’s a big problem, to be honest.

    • Best_Jeanist@discuss.online 1 day ago

      I wonder if people trust ChatGPT more or less than an international celebrity who is also their best friend.

  • Yerbouti@sh.itjust.works 1 day ago

    I don’t understand the use people make of AI. I know a lot of professional composers who are like, “That’s awesome, AI does the music for me now!” and I’m like: cool, now you only have the boring part of the job left, since the fun part was done by AI. Creating the music is literally the only fun part; I hate everything around it.

    • balsoft@lemmy.ml 23 hours ago

      It’s a word predictor. It is good at simple text processing. Think local code refactoring, changing the style or structure of a small text piece, or summarizing small text pieces into even smaller text pieces. It is ok at synthesizing new text that has similar structure to the training corpus. Think generating repetitive boilerplate or copywriting. It is very bad at recalling or checking facts, logic, mathematics, and everything else that people seem to be using it for nowadays.
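
      For instance, a minimal sketch of the “summarize a small text piece” case, using the OpenAI Python SDK as one arbitrary example (the model name and prompts are placeholders, not a recommendation):

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      def summarize(text: str) -> str:
          """Condense a small text piece: the kind of task LLMs handle well."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder; any chat model works here
              messages=[
                  {"role": "system",
                   "content": "Summarize the user's text in two sentences."},
                  {"role": "user", "content": text},
              ],
          )
          return response.choices[0].message.content

      # The same call with "Who won the 1998 election in X?" and no source
      # text attached is the fact-recall case the comment warns about.
      ```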

      • Amir@lemmy.ml 22 hours ago

        The AI creating music is not an LLM.

  • oplkill@lemmy.world 18 hours ago

    Replace CEOs with AI.

  • morrowind@lemmy.ml 21 hours ago

    “Misrepresent” is a vague term. Here’s the actual graph from the study:

    [image: graph from the study]

    The main issue is, as usual, sources. AI is bad at sourcing without a proper pipeline. They note that Gemini is the worst, at 72%.

    Note that they’re not testing models with their own pipeline; they’re testing other people’s products. This is more indicative of the product design than of the actual models.
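
    A sketch of what “a proper pipeline” could mean here: the product retrieves real articles first and pins the model to them, so citations point at documents that actually exist. The search backend is hypothetical; nothing below is from the study.

    ```python
    # Minimal retrieval-first prompt assembly (a sketch, not any product's actual design).
    def answer_with_sources(question: str, search) -> str:
        articles = search(question, limit=3)  # hypothetical news-search backend
        context = "\n\n".join(
            f"[{i}] {a['title']} ({a['url']})\n{a['body']}"
            for i, a in enumerate(articles, 1)
        )
        # The model is told to cite only the numbered articles it was given.
        return (
            "Answer using ONLY the numbered articles below, citing them as [n].\n\n"
            f"{context}\n\nQuestion: {question}"
        )
    ```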

    • davidagain@lemmy.world 15 hours ago

      This graph clearly shows that AI is also shockingly bad at factual accuracy, and at telling a news story in a way that lets someone who didn’t already know about it understand the issues and context. I think you’re misrepresenting this graph as being mainly about sources, but here’s a better summary of the point you seem to be making:

      AI’s summaries don’t match their source data.

      So actually, the headline is pretty accurate in calling it misrepresentation.

  • danc4498@lemmy.world 23 hours ago

    Makes sense. I have used AI for software development tasks such as manipulating SQL queries and XML files (tedious things), and I’m always disappointed by how it misinterprets some things. But with those, it’s obvious when a request fails. For things like the news, where there is no QA team to point out the defect, it will be much harder to notice. And when AI starts (or continues) to use AI-generated posts as sources, it will get much worse.
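
    That “obvious when a request fails” property exists because code-shaped output can be machine-checked. A stdlib-only sketch of the kind of oracle that news summaries don’t have (the sample inputs are made up):

    ```python
    import sqlite3
    import xml.etree.ElementTree as ET

    def xml_is_well_formed(text: str) -> bool:
        """Catch an AI-mangled XML file immediately."""
        try:
            ET.fromstring(text)
            return True
        except ET.ParseError:
            return False

    def sql_compiles(query: str, schema: str) -> bool:
        """EXPLAIN compiles the statement against a schema without running it."""
        conn = sqlite3.connect(":memory:")
        conn.executescript(schema)
        try:
            conn.execute(f"EXPLAIN {query}")
            return True
        except sqlite3.Error:
            return False

    print(xml_is_well_formed("<a><b></a>"))                          # False: mismatched tag
    print(sql_compiles("SELECT nope FROM t", "CREATE TABLE t(x);"))  # False: no such column
    ```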

  • sin_free_for_00_days@sopuli.xyz 11 hours ago

    Could be better, but still a huge step up from the hate rhetoric magats get spoon fed 24/7 from Fox and friends.

  • NotMyOldRedditName@lemmy.world 21 hours ago

    I’ve had someone else’s AI summarize some content I created elsewhere, and it got it incredibly wrong to the point of changing the entire meaning of my original content.

  • Kissaki@feddit.org 1 day ago

    Will they change their disclaimer now, from “can be wrong” to “is often wrong”? /s

  • MonkderVierte@lemmy.zip 1 day ago

    Parrot is wrong almost half of the time. Who knew?

    • altphoto@lemmy.today 1 day ago

      Do you realize what you just said!!!

      Wow! They have reached parrot intelligence!

      Next they might teach it to butterfly! You know, like you’re off the ground and going somewhere in open air, but they just keep building shit right where you’re flying… And lamps!

      From there, who knows?!

  • paraphrand@lemmy.world 1 day ago

    Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”

    • FaceDeer@fedia.io 1 day ago

      So it's about on par with humans, then.

    • Treczoks@lemmy.world 1 day ago

      Like the average American with 8th-grade reading comprehension.

      • snooggums@piefed.world 1 day ago

        Which is what they used for the training data.

  • HugeNerd@lemmy.ca 11 hours ago

    wrinkle: AI used for this study

  • moistclump@lemmy.world 1 day ago

    And then I wonder how frequently humans misinterpret the misrepresented news.

    • snooggums@piefed.world 1 day ago

      Humans do it often, but they don't have billions of dollars funding their responses.

    • Treczoks@lemmy.world 1 day ago

      Worse: one third of adults actually believe the shit the AI produces.

  • AnUnusualRelic@lemmy.world 1 day ago

    Yet the LLM seems to be what everyone is pushing, because it will supposedly get better. Haven’t we reached the limits of this model, and shouldn’t other types of engines be tried?

    • floofloof@lemmy.ca 1 day ago

      shouldn’t other types of engines be tried?

      Sure, but the tricky bit is to be more specific than that.

      • AnUnusualRelic@lemmy.world 1 day ago

        Well, you know…

        “Waves vaguely”

  • Korkki@lemmy.ml 1 day ago

    The info-sphere today is already a highly delusional place, and news is often contradictory, even from day to day, especially from outlets like BCC, which is more focused on setting global narratives than on reporting facts as best understood at the moment. No wonder AI would be confused; most readers are confused when navigating every statement made by experts or anonymous officials on every subject. It seems like this study really measured an AI model’s ability to vomit out the same text in different words while avoiding any outside context, be it accurate or hallucinated.

    • floofloof@lemmy.ca 1 day ago

      BCC

  • sirico@feddit.uk 1 day ago

    So a lower percentage than the readers and mass media.

  • Jhex@lemmy.world 1 day ago

    buT AI iS hERe tO StAY
