Comment on LLMs factor in unrelated information when recommending medical treatments
FancyPantsFIRE@lemmy.world 2 days ago
Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who were erroneously advised not to seek medical care, according to human doctors.
This is not an argument for LLMs (which people are deferring to at an alarming rate), but I'd call out that this seems to be a bias in humans giving medical care as well.
assaultpotato@sh.itjust.works 2 days ago
Of course it is; LLMs are inherently regurgitation machines: train on biased data, get biased predictions.
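To make the "biased data in, biased predictions out" point concrete, here is a minimal sketch, purely illustrative and not from the article: it builds synthetic referral data in which one group was historically under-referred at the same clinical severity, trains a plain scikit-learn logistic regression on those labels, and shows the model recommending care less often for that group. All names (severity, group, referred) are made up for the example.

```python
# Minimal illustration: a model trained on biased labels reproduces the bias.
# Synthetic data only; "group" stands in for any nonclinical attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
severity = rng.normal(size=n)           # the clinically relevant signal
group = rng.integers(0, 2, size=n)      # a nonclinical attribute (0 or 1)

# Biased historical labels: same underlying condition, but group 1 was
# systematically under-referred for care. Noise keeps the data realistic.
noise = rng.normal(scale=0.5, size=n)
referred = (severity - 0.5 * group + noise) > 0

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, referred)

# At identical severity, the trained model assigns a lower probability of
# referral to group 1 -- it has learned the bias, not corrected it.
test = np.array([[0.3, 0.0], [0.3, 1.0]])
print(model.predict_proba(test)[:, 1])
```

The model has no way to know the group coefficient encodes a historical injustice rather than a clinical signal; it just fits the labels it was given, which is the commenter's point in one picture.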