Comment on LLMs factor in unrelated information when recommending medical treatments
lupusblackfur@lemmy.world 1 day ago
large language model deployed to make treatment recommendations
What kind of irrational lunatic would seriously attempt to invoke currently available Counterfeit Cognizance to obtain a “treatment recommendation” for anything…???
FFS.
Anyone who would seems like a prime candidate for a Darwin Award.
OhVenus_Baby@lemmy.ml 1 day ago
Not entirely true. I have several chronic and severe health issues. ChatGPT provides medical advice that nearly matches, and sometimes surpasses, what I get from multiple specialty doctors (though it heavily needs to be re-verified). In my country doctors are horrible. This bridges the gap, albeit again needing heavy oversight to be safe. It certainly has merit though.
notfromhere@lemmy.ml 1 day ago
Bridging that gap is something sorely needed, and LLMs are damn close to achieving it.