gcheliotis@lemmy.world 3 days ago
Or maybe that is part of the allure of automation: the eschewing of human responsibility, such that any bias in decision-making appears benign (the computer deemed it so, no one's at fault) and any errors, if recognized as such at all, become simply a matter of bug-fixing or model fine-tuning. The more inscrutable the model, the better in that sense. The computer becomes an oracle, and no one's to blame for its divinations.
AnarchistArtificer@lemmy.world 3 days ago
I saw a paper a while back that argued that AI is being used as a "moral crumple zone". For example, an AI used for health insurance allows the company to reject medically necessary procedures without employees incurring as much moral injury in the process (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.
gcheliotis@lemmy.world 3 days ago
I can absolutely see that. And I don't think it's AI-specific; it's about delegating responsibility to a machine. Of course, AI in the guise of LLMs can make things worse with their low interpretability, where it might be even harder to trace anything back to an executive or clerical decision.
2d4_bears@lemmy.blahaj.zone 3 days ago
I am convinced that law enforcement wants intentionally biased AI decision-makers so that they can justify doing what they've always done with the cover of "it's not racist because a computer said so!"
The scary part is that most people are ignorant enough to buy it.