Comment on ChatGPT advises women to ask for lower salaries, study finds

flamingo_pinyata@sopuli.xyz 1 week ago
Humans suffer from the same problem. Racism and sexism are consequences of humans training on a flawed dataset, and overfitting the model.

rottingleaf@lemmy.world 1 week ago
That’s also because LARPers of past scary people tend to be crueler and trashier than their prototypes. The prototypes had a bitter solution to some problem; the LARPers are just trying to be as bad or worse, because that’s what gets remembered, and they perceive that as respect.
x00z@lemmy.world 1 week ago
Politicians shape the dataset, so “flawed” should really be “purposefully flawed”.