Comment on Chatbots Make Terrible Doctors, New Study Finds
Hacksaw@lemmy.ca 3 days ago
If you seriously think the doctor’s notes about the patient’s symptoms don’t include the doctor’s diagnostic instincts, then I can’t help you.
The symptom questions ARE the diagnostic work. Your doctor doesn’t ask you every possible question. You show up and say “my stomach hurts.” The doctor asks questions to rule things out until only one likely diagnosis remains, then stops and prescribes a treatment if one is available. They don’t just ask a random set of questions. If you give the AI the notes taken JUST BEFORE the diagnosis and treatment, the diagnosis is completely trivial, because the diagnostic work is already done.
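A toy sketch of the rule-out loop being described here, in Python; every symptom and candidate condition below is invented for illustration and comes from no clinical source:

```python
# Toy sketch of the rule-out loop described above: keep asking
# discriminating questions until one candidate diagnosis remains.
# All symptoms and conditions are made up for illustration.

CANDIDATES = {
    "gastritis":   {"burning pain", "worse when fasting"},
    "gallstones":  {"pain after fatty meals", "right-sided pain"},
    "indigestion": {"bloating", "worse after meals"},
}

def diagnose(ask):
    """ask(symptom) -> bool, i.e. "do you have <symptom>?"."""
    remaining = set(CANDIDATES)
    for symptom in sorted({s for signs in CANDIDATES.values() for s in signs}):
        if len(remaining) <= 1:
            break  # one likely diagnosis left -- stop asking
        if ask(symptom):
            remaining = {c for c in remaining if symptom in CANDIDATES[c]}
        else:
            remaining = {c for c in remaining if symptom not in CANDIDATES[c]}
    return remaining.pop() if remaining else None

# A patient whose answers point at gallstones:
answers = {"pain after fatty meals": True, "right-sided pain": True}
print(diagnose(lambda s: answers.get(s, False)))  # -> gallstones
```

The point of the sketch is that the question sequence itself encodes the differential; by the time the answers are written down as notes, most of the work is in them.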
God, you AI people literally don’t even understand what skill, craft, trade, and art are, and you think you can emulate them with a text predictor.
tyler@programming.dev 2 days ago
Dude, I hate AI. I’m not an AI person; don’t fucking classify me as that. You’re the one who hasn’t read the article, or the study it covers. It didn’t say the input included the doctor’s diagnostic work. The study wasn’t about whether LLMs are accurate for doctors; that’s already been studied, and the article literally says so. LLMs are apparently passing medical licensing exams at close to 100%, so this definitely isn’t about diagnostic notes. This study was about using LLMs to diagnose yourself. That’s it. That’s the study. Don’t spread bullshit. It’s tiring debunking stuff the article itself refutes two sentences in.
SuspciousCarrot78@lemmy.world 2 days ago
You’re over-egging it a bit. A well-written SOAP note, HPI, etc. should distill the case to a handful of possibilities, that’s true. That’s the point of them.
The fact that the LLM can interpret those notes 98% as well as a medically trained individual (per the article) is being a little undersold.
That’s not nothing. Actually, that’s a big fucking deal™ if you think through the edge-case applications. And remember, these are just general LLMs. We’re not even talking medical domain-specific models.
Yeah, I think there’s more here to think on.
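For context on how little plumbing “handing a note to a general LLM” involves, here is a minimal sketch assuming the official OpenAI Python client; the model name, prompt, and note text are placeholders, not details from the study:

```python
# Minimal sketch: hand a de-identified note to a general-purpose LLM.
# Assumes the official OpenAI Python client (openai>=1.0); the model
# name, prompt, and note below are placeholders, not from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

note = """S: 34yo M, 2 days of epigastric pain, worse after meals. No fever.
O: Afebrile, BP 122/78. Mild epigastric tenderness, no guarding."""

response = client.chat.completions.create(
    model="gpt-4o",  # any general chat model; nothing domain-specific
    messages=[
        {"role": "system",
         "content": "List the three most likely diagnoses for this note, "
                    "with brief reasoning."},
        {"role": "user", "content": note},
    ],
)
print(response.choices[0].message.content)
```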
XLE@piefed.social 2 days ago
If you think a word predictor is the same as a trained medical professional, I am so sorry for you…
SuspciousCarrot78@lemmy.world 2 days ago
Feel sorry for yourself. Your ignorance and biases are on full display.