Comment on After outages, Amazon to make senior engineers sign off on AI-assisted changes
BlameTheAntifa@lemmy.world 5 days ago
How the machines work does not matter. The situation is using a machine to replace human expertise while still making a human take responsibility for decisions that human did not make. It is not the owning class who is at risk for their machines' mistakes; it is the owning class's wage slaves who are at risk.
kimara@sopuli.xyz 5 days ago
My understanding is that tumor-detecting machine vision is generally considered useful in addition to the radiologist's expertise. It basically outputs "yes", "maybe", or "no", which respects expertise more than generating approximately-right code that the coder now has to validate.
This is why I wouldn’t equate these tools. LLM code generation is marketed to do much more than machine vision for tumor detection.
AnarchistArtificer@slrpnk.net 5 days ago
Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:
In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision to help humans build a better world for humans. But we've seen time and time again, across countless fields, that it never works out that way.
That’s because this isn’t a problem with the technology of AI, but with the fucked-up sociotechnical and economic systems that govern how this tech is used, who gets to use it, who it gets used on, whose consent is required for those uses, and most significant of all: who gets to profit?