Comment on Chatbots Make Terrible Doctors, New Study Finds
SuspciousCarrot78@lemmy.world 3 days ago
You’re over-egging it a bit. A well-written SOAP note, HPI, etc. should distill to a handful of possibilities, that’s true. That’s the point of them.
The fact that the LLM can interpret those notes 98% as well as a medically trained individual (per the article) is being a little undersold.
That’s not nothing. Actually, that’s a big fucking deal ™ if you think through the edge-case applications. And remember, these are just general LLMs. We’re not even talking medical-domain-specific models.
Yeah, I think there’s more here to think on.
XLE@piefed.social 3 days ago
If you think a word predictor is the same as a trained medical professional, I am so sorry for you…
SuspciousCarrot78@lemmy.world 3 days ago
Feel sorry for yourself. Your ignorance and biases are on full display.