I am convinced that law enforcement wants intentionally biased AI decision makers so that they can justify doing what they’ve always done with the cover of “it’s not racist because a computer said so!”
The scary part is most people are ignorant enough to buy it.
AnarchistArtificer@lemmy.world 2 days ago
I saw a paper a while back that argued that AI is being used as a "moral crumple zone". For example, an AI used for health insurance claims allows the company to reject medically necessary procedures without employees incurring as much moral injury in the process (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.
gcheliotis@lemmy.world 2 days ago
I can absolutely see that. And I don't think it's AI-specific; it has to do with delegating responsibility to a machine. Of course, AI in the guise of LLMs can make things worse with its low interpretability, where it might be even harder to trace anything back to an executive or clerical decision.