Comment on After outages, Amazon to make senior engineers sign off on AI-assisted changes
IratePirate@feddit.org 5 days ago
Precisely. From Cory Doctorow’s latest, very insightful essay on AI, where he talks about AI replacing 9 out of 10 radiologists:
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”
This is a reverse-centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.
kimara@sopuli.xyz 5 days ago
I don’t think it’s fair to compare LLM code generation to machine vision in this way. These are very different kinds of “AI”. Not necessarily disagreeing with Doctorow, but this is an important distinction.
BlameTheAntifa@lemmy.world 5 days ago
How the machines work does not matter. The situation is one where a machine replaces human expertise while a human is still made to take responsibility for mistakes that are not actually theirs. It is not the owning class who is at risk from their machines’ mistakes; it is the owning class’s wage slaves who are at risk.
kimara@sopuli.xyz 5 days ago
My understanding is that tumor-detecting machine vision is generally considered useful in addition to the radiologist’s expertise. It basically outputs “yes”, “maybe”, or “no”, which is more respectful of that expertise than generating roughly-right code that the coder now has to validate.
This is why I wouldn’t equate these tools. LLM code generation is marketed as doing much more than machine vision for tumor detection.
AnarchistArtificer@slrpnk.net 5 days ago
Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:
In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision to help humans build a better world for humans. But we’ve seen, time and time again and across countless fields, that it never works out that way.
That’s because this isn’t a problem with the technology of AI, but with the fucked-up sociotechnical and economic systems that govern how this tech is used, who gets to use it, who it gets used on, whose consent is required for those uses and, most significant of all, who gets to profit?
Frenchgeek@lemmy.ml 5 days ago
The kind of AI doesn’t matter in this situation. Hell, it could be a magic talking rock™ and it would change nothing about management using a person to avoid blaming their shiny and expensive new toy.
Earthman_Jim@lemmy.zip 5 days ago
“this is an important distinction”
it really isn’t