Comment on Gender Bias in Text-to-Image Generative Artificial Intelligence When Representing Cardiologists.
TheGrandNagus@lemmy.world 1 month ago
So people are complaining that the depictions of cardiologists, when you ask an image generator to show you one, are too accurate? That they typically show you a picture of a man in a job where the vast majority are men?
You’d likely see the same thing if you asked one to show you a warehouse worker, another job that’s male-dominated.
If people want more women in these roles, push for it at the university level. Push for it in medical posters in hospitals. I don’t see how forcing the hundreds of AI models out there to be biased in favour of depicting women when their training material doesn’t have as many is an effective way of achieving this goal.
Deceptichum@quokk.au 1 month ago
Because the biases in an AI model will shape the perception of people entering those fields more than a poster at a place where people who have already entered those fields work.
Likewise, you can train a bias out of a model: feed it more content showing diverse workforces and it will start weighting those examples more heavily.
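For what it's worth, here's a rough sketch of what that kind of rebalancing could look like during fine-tuning. The dataset file, column names, and labels are all hypothetical; it's only meant to illustrate upweighting under-represented examples, not anyone's actual training pipeline:

```python
import pandas as pd
from torch.utils.data import WeightedRandomSampler

# Hypothetical caption/image metadata with a demographic label per example.
df = pd.read_csv("cardiologist_captions.csv")  # columns: image_path, caption, gender

# Weight each example inversely to how common its group is, so
# under-represented groups get sampled more often during fine-tuning.
group_counts = df["gender"].value_counts()
weights = df["gender"].map(lambda g: 1.0 / group_counts[g])

sampler = WeightedRandomSampler(
    weights=weights.tolist(),
    num_samples=len(df),
    replacement=True,
)
# Passing `sampler` to the fine-tuning DataLoader instead of plain shuffling
# shifts the effective training distribution without collecting new data.
```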
conciselyverbose@sh.itjust.works 1 month ago
Being representative of actual real-world statistics is not “bias”.
Deceptichum@quokk.au 1 month ago
Statistics are not people.
There are huge biases at play behind why gender differences exist in the workplace and most other places.
conciselyverbose@sh.itjust.works 1 month ago
Statistics are people.
It cannot possibly, under any circumstances, be a correct, reasonable, or valid word choice to describe an “AI” that accurately models reality as “biased”. That word already has a meaning and using it in that manner is a lie.
TheGrandNagus@lemmy.world 1 month ago
The depiction aligning with reality is not a bias. Artificially altering the algorithm so that it shows more women for this prompt, on the other hand, is unquestionably adding a bias.
If you want to add a bias, fine. Biases aren’t always a bad thing. But don’t pretend that what you’re actually advocating for here is correcting a bias.
That is training in a bias, because it’s not representative of reality.
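To be concrete about what “artificially altering” usually means in practice: it’s typically not retraining at all, but rewriting the prompt before it ever reaches the model. A toy sketch, with made-up wording and a crude keyword check that no real vendor's system necessarily resembles:

```python
import random

# Hypothetical prompt-rewriting step of the kind some image services bolt on
# in front of the model. This does not reflect any specific vendor's code.
DIVERSITY_QUALIFIERS = ["a woman", "a man"]

def rewrite_prompt(prompt: str) -> str:
    # Crude occupation check; a real system would use a classifier.
    if "cardiologist" in prompt.lower():
        qualifier = random.choice(DIVERSITY_QUALIFIERS)
        return f"{prompt}, depicted as {qualifier}"
    return prompt

print(rewrite_prompt("a cardiologist in a hospital"))
```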
conciselyverbose@sh.itjust.works 1 month ago
Exactly. There are perfectly legitimate reasons to want to bias a model toward certain changes in its output.
There are no legitimate reasons to do so without acknowledging that you are adding a bias and being clear about the intent behind it.
nyan@lemmy.cafe 1 month ago
And even in cases where introducing a bias is desirable, you have to be very careful when doing it. There has been at least one case where a bias towards diversity caused problems when the model was asked for images of historical people, who were often not diverse at all.