Why is AI entering the operating room? Why???
As AI enters the operating room, reports arise of botched surgeries and misidentified body parts
Submitted 7 hours ago by schizoidman@lemmy.zip to technology@lemmy.world
Comments
Canconda@lemmy.ca 6 hours ago
To me “Robot Surgeon” means a human surgeon and a programmer got together and meticulously detailed every single step of the procedure such that the machine cannot behave outside of their expectations.
Robot surgeons should remain exactly that: machines.
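A minimal sketch of that "cannot behave outside expectations" idea, with made-up names and limits (this is not a real robotics API): every commanded step gets validated against an envelope the surgeon and programmer agreed on beforehand, and anything outside it is a hard error rather than an improvisation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    max_depth_mm: float   # deepest allowed incision
    allowed_region: tuple  # (x_min, x_max, y_min, y_max) in mm

def validate_step(x: float, y: float, depth: float, env: Envelope) -> None:
    """Abort rather than improvise: any out-of-envelope command is an error."""
    x_min, x_max, y_min, y_max = env.allowed_region
    if not (x_min <= x <= x_max and y_min <= y <= y_max):
        raise ValueError(f"target ({x}, {y}) is outside the approved region")
    if depth > env.max_depth_mm:
        raise ValueError(f"depth {depth} mm exceeds the {env.max_depth_mm} mm limit")

env = Envelope(max_depth_mm=4.0, allowed_region=(0, 30, 0, 30))
validate_step(12.0, 8.5, 3.0, env)       # within the envelope: proceeds
try:
    validate_step(12.0, 8.5, 9.0, env)   # too deep: refused, never executed
except ValueError as e:
    print(f"refused: {e}")
```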
HubertManne@piefed.social 6 hours ago
The only robot surgeons I know of are like the da Vinci system, where a doctor operates it but it allows for smaller incisions and very small, precise movements. I like that one.
frongt@lemmy.zip 5 hours ago
I’ve had it used on me. Can confirm, it’s a massive improvement over human-hand-scale operation.
dylanmorgan@slrpnk.net 2 hours ago
I think of a tiny robot arm and sensor array that lets a human surgeon see and work on smaller parts of a patient than they could otherwise manage safely.
I guess that would be a cyborg surgeon.
phoenixz@lemmy.ca 3 hours ago
Again FFS
Why does nobody understand even the basics of AI?
Yes, there are good applications
Yes, AI has potential access to all the information out there
Also yes, AI WILL make shit up and fuck up a good 50-odd% of the time
NEVER trust AI. Be it for your homework or an operation on a patient. It's great when AI gives you tips and hints, it's great as a rubber ducky, but if you even once blindly trust what AI wants to do, you'll be fucked at best, dead at worst.
DO NOT TRUST AI DAMMIT
Armand1@lemmy.world 6 hours ago
Hmmm…
As the article correctly states, machine learning (“AI” is a misnomer that has stuck imo) has been used successfully for decades in medicine.
Machine learning is inherently about spotting patterns and inferring from them. The problem, I think, is two-fold:
- There are more "AI" products than ever, not all companies build them responsibly, and it's difficult for regulators to keep up with them.
- As AI is normalised, some doctors will put too much trust in these systems.
This isn't helped by the fact that the makers of these products are likely to exaggerate their capabilities. That may be reflected in the products themselves, which may not properly communicate the degree of certainty of a diagnosis or conclusion (e.g. "30% certainty this lesion is cancerous").
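As a toy sketch of what communicating certainty could look like, with hypothetical thresholds and wording, the key point being that a low-confidence guess is never presented as a finding:

```python
def report(label: str, probability: float) -> str:
    """Attach the model's degree of certainty instead of a bare label."""
    if probability >= 0.95:
        return f"{label} (high confidence: {probability:.0%})"
    if probability >= 0.70:
        return f"possible {label} ({probability:.0%}) - clinician review required"
    return f"inconclusive ({probability:.0%} {label}) - do not act on this alone"

print(report("malignant lesion", 0.30))
# -> inconclusive (30% malignant lesion) - do not act on this alone
```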
HubertManne@piefed.social 6 hours ago
It seems like a lot of AI problems come down to how people treat it. It needs to be treated like a completely naive and inexperienced intern, student, or helper. Everyone should expect that all output has to be carefully looked over, like a teacher checking a student's work.
Gork@sopuli.xyz 7 hours ago
Ask an AI image generator to make images of human anatomy and receive horrors beyond your comprehension.
lmr0x61@lemmy.ml 6 hours ago
AI is creative, in the same sense as creative accounting
ji59@hilariouschaos.com 6 hours ago
Because it’s so accurate or because it isn’t?
dylanmorgan@slrpnk.net 2 hours ago
The Pitt is covering this, and in an early episode from this season they had one of the doctors point out that the LLM transcription incorrectly labeled a medication.
Medicine has a very low tolerance for errors. If I ask ChatGPT what episode of Downton Abbey shows lord whatshisface vomiting blood and it tells me that episode was the Red Wedding, worst case scenario is I look dumb. If Claude tells a doctor “this patient doesn’t have any existing medications that are contraindicated for propofol,” and it’s wrong, that patient may die on the table.
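A hedged sketch of that contrast: the contraindication check as a deterministic lookup against a curated table, one that fails loudly on anything it doesn't know rather than confidently filling the gap. Drug names and interactions below are placeholders, not medical data.

```python
# Placeholder data, not clinical guidance; a real table would come from a
# curated, versioned drug-interaction database.
CONTRAINDICATED_WITH = {
    "propofol": {"example_drug_a", "example_drug_b"},
}

def check_contraindications(anesthetic: str, current_meds: list[str]) -> list[str]:
    """Return every current medication listed as contraindicated.

    An unknown anesthetic raises KeyError instead of guessing - the
    opposite failure mode of a model that confidently fills in the gap.
    """
    contraindicated = CONTRAINDICATED_WITH[anesthetic]
    return [med for med in current_meds if med in contraindicated]

print(check_contraindications("propofol", ["example_drug_a", "vitamin_d"]))
# -> ['example_drug_a']
```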