Indeed. GPs have been doing this for a long time. It’s nothing new, and expecting every GP to know every single ailment humanity has ever experienced, to recall it quickly, and to immediately know the course of action to take, is unreasonable.
Like you say, if they’re blindly following a generic ChatGPT instance, then that’s bad.
If they’re aiding their search with an LLM trained on a good medical dataset, then taking its output and looking into it further, there’s no issue.
People have become so reactionary to LLMs and other ‘AI’ stuff.
thehatfox@lemmy.world 2 months ago
I think the difference here is that medical reference material is based on a long process of proven research. It can be trusted as a reliable source of information.
AI tools however are so new they haven’t faced anything like the same level of scrutiny. For now they can’t be considered reliable, and their use should be kept within proper medical trials until we understand them better.
Yes human error will also always be an issue, but putting that on top of the currently shaky foundations of AI only compounds the problem.
ShareMySims@sh.itjust.works 2 months ago
Let’s not forget that AI is known not only for failing to provide sources, or even fabricating them, but now also for flat-out lying.
Our GPs are already mostly running on a tick-box system where they feed your information (but only the stuff on the most recent page of your file; looking any further is too much like hard work) into their programme, and it, rather than the patient or a trained physician, tells them what we need. Distance GPs from their patients any further, and they’re basically just giving the same generic and often wildly incorrect advice we could find on WebMD.