Sorry, this needs more clarification! Do you mean “intent recognition”, where some AI trained on biased data will assume that some brown person is up to no good? Or do you mean that they will misidentify black and brown people more often due to how cameras work? Because the latter has nothing to do with biased data.
yeather@lemmy.ca 11 months ago
Both, in fact. Training data for things like this regularly mixes up minority people. If Omar is an upstanding citizen but gets his face mixed up with that of Haani, a known terrorist, Omar gets treated unfairly, potentially to the point of lethality.
bobgusford@lemmy.world 11 months ago
For “intent recognition”, I agree. A system trained on data showing mostly black people committing crimes might flag more black people as having ill intent.
But for the sake of identification at security checkpoints, if a man named Omar - who has an eerie resemblance to Haani the terrorist - walks through the gates, then they probably need to do a more thorough check. If they confirm with secondary data that Omar is who he says he is, then the system needs to be retrained on more images of Omar. The bias was only that they didn’t have enough images of Haani and Omar for the system to make a good enough distinction. With more training, it will probably be less biased and more accurate than a human.
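To make the checkpoint scenario concrete: face matchers typically compare embedding vectors with a similarity threshold, and a false match like Omar/Haani happens when two different people's embeddings land too close together. This is a minimal sketch with made-up 4-dimensional vectors and a made-up threshold (real systems use embeddings of hundreds of dimensions and carefully tuned thresholds):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, invented for illustration only.
haani_template = [0.90, 0.10, 0.40, 0.20]   # enrolled watchlist entry
omar_capture   = [0.88, 0.14, 0.42, 0.19]   # checkpoint photo of a similar-looking man

# Hypothetical match threshold; raising it reduces false matches
# (fewer Omars flagged) but risks missing true matches, and vice versa.
THRESHOLD = 0.995

sim = cosine_similarity(omar_capture, haani_template)
flagged = sim >= THRESHOLD  # True here: Omar is falsely flagged as Haani
```

The point of the comment above maps onto this sketch: a secondary identity check catches the false match, and adding more images of Omar to the training set pushes his embedding further from Haani's, so the same threshold stops flagging him.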