Code used in the analysis is here
How could we possibly expect that there wouldn’t be bias? It’s based off the patterns that humans use. Humans have bias. The difference is that humans can recognize their bias and work to overcome it. As far as I know, ChatGPT can’t do that.
owenfromcanada@lemmy.world 7 months ago
Hmm… it’s almost like if you train a model on biased data you get biased results…
catloaf@lemm.ee 7 months ago
Garbage in, garbage out. Nothing new here.
Womble@lemmy.world 7 months ago
I like how this statement works for both AI and human recruiters.
Thorny_Insight@lemm.ee 7 months ago
As does most criticism about LLMs. We wanted one that behaves like a human and then got angry when that’s exactly what it does.