Comment on Microsoft Says Its New AI Diagnosed Patients 4 Times More Accurately Than Human Doctors
Fiivemacs@lemmy.ca 3 weeks ago
More accurate. Until it’s not… then what? Who’s liable? Google… Amazon… Microsoft… ChatGPT… Look, I like AI because it’s fun to make stupid memes and pictures without any effort, but I do not trust this nonsense to do ANYTHING with accuracy, especially my medical care.
This thing will 100% be designed to diagnose people to sell you drugs, not fix your health. Corporations control this. Currently they need to bribe doctors to push their drugs… this will circumvent that entirely. You’ll end up paying drastically more, for less.
gian@lemmy.grys.it 3 weeks ago
The doctor who reviews the case, maybe?
In some cases the AI can effectively “see” things a doctor might miss and direct them to check for a particular disease. Even if the AI is only able to rule out some cases, it would be useful.
technocrit@lemmy.dbzer0.com 3 weeks ago
Yeah that’s why these gains in “efficiency” are completely imaginary.
douglasg14b@lemmy.world 3 weeks ago
Only if you don’t have the critical thinking to understand how information management is a significant problem and barrier to medical care.
boonhet@sopuli.xyz 3 weeks ago
The AI only needs to alert the doctor that something is off and should be tested for. It does not replace doctors, but augments them. It’s actually a great use for AI; it’s just not what we think of as AI in a post-LLM world. The medically useful AI is pattern recognition. LLMs may also help doctors if they need a starting point for researching something weird and obscure, but ChatGPT isn’t being used for diagnosing patients, nor is anything any AI says the “final verdict”. It’s just a tool to improve early detection of disorders, or it might point someone towards a useful article or book.
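To make the “augments, not replaces” point concrete, here is a minimal sketch of that kind of workflow. Everything in it is hypothetical (the `Case` fields, `risk_score`, `REVIEW_THRESHOLD`): the pattern-recognition model only produces a score and queues cases for human review; it never issues a diagnosis.

```python
# Hypothetical sketch of the "AI flags, doctor decides" workflow described above.
# risk_score() stands in for any trained pattern-recognition model (e.g. an
# imaging or lab-value classifier); names and numbers are made up for illustration.

from dataclasses import dataclass


@dataclass
class Case:
    patient_id: str
    features: dict  # e.g. lab values or imaging-derived measurements


def risk_score(case: Case) -> float:
    """Placeholder for a trained model; returns a probability-like score in [0, 1]."""
    # Toy heuristic so the sketch runs; a real system would call the model here.
    return min(1.0, case.features.get("anomaly_signal", 0.0))


# Threshold tuned low on purpose: a false positive costs some doctor time,
# a false negative can cost a missed early diagnosis.
REVIEW_THRESHOLD = 0.3


def triage(cases: list[Case]) -> list[tuple[Case, float]]:
    """Return only the cases a doctor should look at first, highest risk first.

    The model never diagnoses anything; it only prioritizes human review.
    """
    scored = [(c, risk_score(c)) for c in cases]
    flagged = [(c, s) for c, s in scored if s >= REVIEW_THRESHOLD]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    queue = triage([
        Case("A-001", {"anomaly_signal": 0.82}),
        Case("A-002", {"anomaly_signal": 0.05}),
        Case("A-003", {"anomaly_signal": 0.41}),
    ])
    for case, score in queue:
        print(f"{case.patient_id}: flag for doctor review (score {score:.2f})")
```

In this sketch the doctor still makes every call; the model’s only job is to reorder the pile so the scary-looking cases get seen first, which is the early-detection benefit described above.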