
LLMs factor in unrelated information when recommending medical treatments

164 likes

Submitted 1 day ago by Pro@programming.dev to technology@lemmy.world

https://news.mit.edu/2025/llms-factor-unrelated-information-when-recommending-medical-treatments-0623


Comments

  • 01189998819991197253@infosec.pub 1 day ago

    Say it with me, now: ChatGPT is not a doctor.

    Now, louder for the morons in the back. Altman! Are you listening?!

    • ToastedRavioli@midwest.social 23 hours ago

      ChatGPT is not a doctor. But models trained on imaging can actually be a very useful tool for them to utilize.

      Even years ago, just before the AI “boom”, researchers were asking doctors how they examine patient images and then training models on that. They found the AI was “better” than the doctors specifically because it followed the doctors’ own guidance 100% of the time, thereby eliminating any bias that might keep a doctor from consistently applying their own training.

      Of course, the splashy headline “AI better than doctors” was ridiculous. But it does show the benefit of a neutral tool for doctors to use, especially when looking at images of people outside the typical demographics that much medical training is based on. (As in, mostly just white men. For example, everything doctors are trained on regarding knee imaging comes from images of the knees of UK coal miners taken decades ago.)

  • FancyPantsFIRE@lemmy.world 1 day ago

    Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who were erroneously advised not to seek medical care, according to human doctors.

    This is not an argument for LLMs (which people are deferring to at an alarming rate), but I’d call out that this appears to be a bias in human medical care as well.

    • assaultpotato@sh.itjust.works 1 day ago

      Of course it is: LLMs are inherently regurgitation machines. Train on biased data, make biased predictions (see the sketch at the end of the thread).

  • lupusblackfur@lemmy.world 1 day ago

    large language model deployed to make treatment recommendations

    What kind of irrational lunatic would seriously attempt to invoke currently available Counterfeit Cognizance to obtain a “treatment recommendation” for anything…???

    FFS.

    Anyone who would seems like a supreme candidate for a Darwin Award.

    • OhVenus_Baby@lemmy.ml 1 day ago

      Not entirely true. I have several chronic and severe health issues. ChatGPT gives me advice that nearly matches, and sometimes surpasses, what I get from multiple specialty doctors, though it heavily needs re-verification. In my country doctors are horrible. This bridges the gap, albeit again needing close oversight to be safe. It certainly has merit, though.

      • notfromhere@lemmy.ml 1 day ago

        Bridging the gap is sorely needed, and LLMs are damn close to achieving it.

  • drmoose@lemmy.world 1 day ago

    I have used ChatGPT for early diagnostics with great success. Obviously it’s not a doctor, but that doesn’t mean it’s useless.

    ChatGPT can be a crucial first step, especially in places where doctor care is not immediately available. The initial friction for any disease diagnosis is huge, and anything that helps overcome it is a net positive.

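To make assaultpotato’s “train on biased data, make biased predictions” point concrete, here is a minimal synthetic sketch (an illustration added here, not from the article or the study; it assumes Python with numpy and scikit-learn). A toy classifier is fit to historical referral decisions that under-referred one group; given identical symptoms, it then reproduces that gap.

# Minimal sketch (illustrative only): a model trained on biased referral
# decisions reproduces that bias at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic training data: symptom severity is identically distributed for
# both groups, but historical decisions used a higher referral threshold
# for group 1 (i.e., group 1 was under-referred).
severity = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)  # hypothetical demographic flag: 0 or 1
referred = (severity > np.where(group == 1, 0.7, 0.5)).astype(int)

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, referred)

# Two patients with identical symptoms, differing only in the group flag:
# the model assigns group 1 a lower referral probability.
patients = np.array([[0.6, 0.0], [0.6, 1.0]])
print(model.predict_proba(patients)[:, 1])

Running this prints a markedly lower referral probability for the second patient, even though the only difference is the group flag: the model has simply learned the bias baked into its training labels.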