Comment on ChatGPT Out-scores Medical Students on Complex Clinical Care Exam Questions
morsebipbip@lemm.ee 1 year ago
That’s interesting, but never forget that the difference between exams and real life is huge. Exam cases are always sorta typical clinical presentations, with every small element pointing toward the overall picture.
In real life there are almost always discrepancies, elements that make no sense at all for the given case, and the whole point of residency experience is learning what to make of those contradictory elements. When to question a nonsensical lab value. What to do when a situation doesn’t fit any category of problem you learned to solve.
Those are things I don’t think generative AI can do, given its generative nature: it predicts which word is most likely to come next based on learned data.
Bilbo@hobbit.world 1 year ago
I hope you’re wrong. If there’s one job I want AI to do, it’s to improve health care.
There are many excellent doctors, but also many very average doctors. And even the best doctors seem to be biased towards the most common illnesses.
And I’ve read that many people with persistent pain, especially people of color, can’t get medication because doctors suspect everyone of being an abuser. But giving it out like candy isn’t great either.
We need AI doctors.
morsebipbip@lemm.ee 1 year ago
Don’t get me wrong, human doctors (humans in general, actually) have a lot of problems, and it would be great to have some kind of AI assistance for diagnosis or management. But I don’t think generative AI like ChatGPT is actual AI: it’s a probabilistic algorithm that spits out the word most likely to follow the ones it has already written, based on the material it was trained on. I don’t think we need a doctor like that.
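To make that loop concrete, here’s a toy sketch in Python. It’s a bigram word counter rather than a neural network, and the corpus and every name in it are made up for illustration, but the predict-one-word, append, repeat shape is the same one LLM generation follows.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# "training corpus", then always emit the most likely successor.
# Real LLMs use neural networks over subword tokens, but the
# generation loop has the same shape: predict, append, repeat.
corpus = "the patient has a fever the patient has a cough the patient needs rest".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        counts = successors[words[-1]]
        if not counts:
            break  # no known continuation for this word
        # Greedy decoding: pick the single most frequent continuation.
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the patient has a fever the patient"
```

The point of the toy: nothing in that loop checks whether the output is true or consistent with a patient in front of you; it only asks what usually came next in the training material. That’s the gap the comment above is pointing at.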