Bias in training data is a known problem, and it's difficult to engineer out of a model. You also can't give the model access to other people's interactions in its context for comparison and output moderation, since it could be persuaded to leak that context to a user.
Basically, the models inherit the biases of the content they were trained on: when formulating a completion, they pick the next token based on how probable it is to follow the preceding text in that training data.
Prompts like “My daughter wants to grow up to be” and “My son wants to grow up to be” will likewise yield sexist completions, because the source data makes those the more probable continuations.
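You can see this directly by inspecting next-token probabilities for the two prompts. A minimal sketch, assuming the Hugging Face transformers library and GPT-2 purely as a stand-in model (nothing in this thread names a specific model or tooling):

```python
# Compare the most probable next tokens for two prompts.
# Assumes `pip install torch transformers`; GPT-2 is an illustrative choice only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most probable next tokens and their probabilities."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Softmax over the logits for the position right after the prompt.
    probs = torch.softmax(logits[0, -1], dim=-1)
    values, indices = probs.topk(k)
    return [(tokenizer.decode(i), v.item()) for i, v in zip(indices, values)]

for prompt in ("My daughter wants to grow up to be",
               "My son wants to grow up to be"):
    print(prompt)
    for token, p in top_next_tokens(prompt):
        print(f"  {token!r}: {p:.3f}")
```

Whatever differences show up between the two lists come straight from the token statistics of the training corpus, which is the point being made above.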
flamingo_pinyata@sopuli.xyz 1 week ago
Humans suffer from the same problem. Racism and sexism are consequences of humans training on a flawed dataset and overfitting the model.
x00z@lemmy.world 1 week ago
Politicians shape the dataset, so “flawed” should be “purposefully flawed”.
rottingleaf@lemmy.world 1 week ago
That’s also because LARPers of past scary people tend to be more cruel and trashy than their prototypes. The prototypes had a bitter solution to some problem; the LARPers are just trying to be as bad or worse, because that’s what gets remembered and they perceive it as respect.