For “intent recognition”, I agree. A system trained on data showing mostly black people committing crimes might flag more black people as having ill intent.
But for identification at security checkpoints, if a man named Omar - who bears an eerie resemblance to Haani the terrorist - walks through the gates, then a more thorough check is probably warranted. If secondary data confirms that Omar is who he says he is, then the system should be retrained on more images of Omar. The bias was only that the system didn’t have enough images of Haani and Omar to draw a good enough distinction between them. With more training, it will probably end up less biased and more accurate than a human.
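The loop described above - flag on resemblance, verify with secondary data, then fold the cleared traveler’s images back into the gallery for retraining - could be sketched roughly like this. Everything here (names, embeddings, the similarity threshold) is invented purely for illustration, not taken from any real screening system:

```python
# Hypothetical sketch of the flag -> verify -> retrain feedback loop.
# All identities, vectors, and thresholds are made up for illustration.

MATCH_THRESHOLD = 0.8  # similarity above this triggers a secondary check

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def screen(embedding, watchlist):
    """Return the watchlist name the traveler resembles, or None."""
    for name, reference in watchlist.items():
        if cosine_similarity(embedding, reference) >= MATCH_THRESHOLD:
            return name  # flagged: route to secondary verification
    return None

def record_cleared_traveler(gallery, traveler_id, embedding):
    """After secondary data clears the traveler, keep the image so a
    later retraining pass can better separate the two identities."""
    gallery.setdefault(traveler_id, []).append(embedding)

# Usage: Omar's embedding happens to sit close to Haani's reference.
watchlist = {"Haani": [0.9, 0.1]}
gallery = {}
omar = [0.85, 0.2]

flagged_as = screen(omar, watchlist)          # resembles "Haani"
if flagged_as is not None:
    # Secondary check (ID documents, etc.) confirms he is Omar,
    # so his image is added for the next retraining cycle.
    record_cleared_traveler(gallery, "Omar", omar)
```

The point of the sketch is the last step: each confirmed false match becomes new training data, which is exactly how the “not enough images of Haani and Omar” gap would close over time.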