Unless the videos have proper depth maps and identifiers for objects and actions, they're not going to be as effective as, say, robot arm surgery data or VR-captured movement and tracking. You're basically adding a layer to the learning: first process the video correctly into something usable, then learn from that. Not very efficient, and highly dependent on cameras and angles.
That’s such a fucking stupid idea.
Care to elaborate why?
From my point of view I don’t see a problem with that. Or let’s say: the potential risks highly depend on the specific setup.
JustARaccoon@lemmy.world 17 hours ago
finitebanjo@lemmy.world 1 day ago
Imagine if the Tesla autopilot without lidar that crashed into things and drove on the sidewalk was actually a scalpel navigating your spleen.
echodot@feddit.uk 22 hours ago
Absolutely stupid example, because it kind of assumes medical professionals have the same standards as Elon Musk.
finitebanjo@lemmy.world 22 hours ago
Elon Musk literally owns a medical equipment company that puts chips in people's brains. Nothing is sacred unless we protect it.
echodot@feddit.uk 13 hours ago
Into volunteers. It's not standard practice to randomly put a chip in your head.
Showroom7561@lemmy.ca 1 day ago
Being trained on videos means it has no ability to adapt, improvise, or use knowledge during the surgery.
finitebanjo@lemmy.world 1 day ago
I actually don’t think that’s the problem. The problem is that the AI only factors in visible, surface-level information.
Showroom7561@lemmy.ca 1 day ago
If you read how they programmed this robot, it seems that it can anticipate things like that. Also keep in mind that this is only designed to do one type of surgery.
I’m cautiously optimistic.
I’d still expect human supervision, though.