I dunno why people even care about this bullshit pseudo-science. The study is dumb AF. They didn’t even use real resumes. They had an LLM generate resumes. It’s all phony smoke and mirrors. The usual “AI” grift.
Study Finds LLMs Biased Against Men in Hiring
Submitted 3 weeks ago by Allah@infosec.pub to technology@lemmy.world
https://www.piratewires.com/p/ai-shows-bias-against-men-in-hiring-study-finds
Comments
technocrit@lemmy.dbzer0.com 3 weeks ago
kozy138@slrpnk.net 3 weeks ago
I feel as though generating these “fake” resumes is one of the top uses for LLMs. Millions of people are probably using LLMs to write their own resumes, so generating random ones seems on par with reality.
MysticKetchup@lemmy.world 3 weeks ago
AbidanYre@lemmy.world 3 weeks ago
What the fuck did I just read?
Allah@infosec.pub 3 weeks ago
ah, mamdani the guy who dehumanized hindus
stephen01king@lemmy.zip 3 weeks ago
That article has a lot of reaching between the proof it claims and its conclusion.
PTSDwarrior@lemmy.ml 3 weeks ago
OP India is right wing garbage and I’m ashamed I ever got featured in their garbage website, once upon a time.
ohwhatfollyisman@lemmy.world 3 weeks ago
and their companies are biased against humans in hiring.
ter_maxima@jlai.lu 3 weeks ago
I don’t care what bias they do and don’t have ; if you use an LLM to select résumés, you don’t deserve to hire me. I make my résumé illegible for LLMs on purpose.
( But don’t follow my advice. I don’t actually need a job so I can pull this kinda nonsense and be selective, most people probably can’t )
patrick@lemmy.bestiver.se 3 weeks ago
How do you make it illegible for LLMs?
Alwaysnownevernotme@lemmy.world 3 weeks ago
You write a creative series of deeply offensive curse words in small white on white print.
ter_maxima@jlai.lu 3 weeks ago
Add a whole bunch of white on white nonsense ! You can also insert letters in the middle of words with a font size of 0, although that fucks up a human copy-pasting too, so probably not recommended.
The simplest way is to make your CV an image, and include no OCR data (or nonsense OCR data) in the PDF
MuskyMelon@lemmy.world 3 weeks ago
Even before LLMs, resumes were processed through keyword filters already. You have to optimize your resume for keyword readers, which should work for LLMs as well.
I use the ARCI model to describe my roles.
burgerpocalyse@lemmy.world 3 weeks ago
these systems cannot run a lemonade stand without shitting their balls
grober_Unfug@discuss.tchncs.de 3 weeks ago
[deleted]catty@lemmy.world 3 weeks ago
Handpicks poor ‘studies’ to justify personal belief that women are better.
technocrit@lemmy.dbzer0.com 3 weeks ago
Handpicked poor study… What do you think this whole post is?
cabbage@piefed.social 3 weeks ago
At least where I'm from it's pretty well known that girls outperform boys in school, probably because their brains develop slightly faster in some ways useful to perform in a class room.
This could give women a head start and very well lead to them on average performing better in work life, until they are forced to choose between careers and families while they partners continue to advance their careers at full speed not worrying about being pregnant.
But that's a different discussion. We should avoid biases in hiring because biases suck and make for an unjust society. And we should stop pretending language models make intelligent considerations about anything.
What's fascinating here is that LLMs trained on the texts we produce create the opposite bias of what we observe in society, where men tend to get preferential treatment. My guess is that this is a consequence of inclusive language. In my writibg, whenever women are under-represented, I make a point out of defaulting to she and her rather than her and him. I know others do the same. I imagine this could feed into LLMs. Whatever it is that causes this, it sure as fuck isn't anything actually intelligent.
cabbage@piefed.social 3 weeks ago
the AI considered
Sorry to break it to you, but the "AI" does not "consider" anything. They are talking about a language prediction model.
jwmgregory@lemmy.dbzer0.com 3 weeks ago
the problematic part of this is that you’ve stripped all context to support your, admittedly bigoted, rhetoric and ethos.
black people, generally, have worse education outcomes than whites in american education. you’d still be an incredibly shitty and terrible person if you advocated hiring white people over black people by wrote rule. you can find plenty of “studies” that formalize that argument just as you have here, though.
no, i think most rational people understand that in a scenario like this all people have, on average, the same basic cognitive faculties and potential, and would then proceed to advocate for improving the educational conditions for groups that are falling behind not due to their own nature, but due to the system they are in.
but idk, i’m not a bigot so maybe my brain just implicitly rejects the idea “X people are worse/less intelligent/etc than Y people”
fucking think about what you’re saying. there is no “right people” to hate other than the rich and powerful. it isn’t a subversion of the feminist message to admit this. in fact, it makes you a better feminist. real feminist aren’t sexist.
protist@mander.xyz 3 weeks ago
This isn’t exactly a comprehensive literature review, and totally misunderstands what a LLM is and does
hendrik@palaver.p3x.de 3 weeks ago
Right. If it's true that women statistically outperform men (with same application documents), it'd be logical to prefer them just on gender alone. Because they likely turn out to be better.
mienshao@lemmy.world 3 weeks ago
Would be cool if the Technology community found literally any other topic to discuss beyond AI. I’m really over it, and I don’t care.
sugar_in_your_tea@sh.itjust.works 3 weeks ago
They’re as biased as the data they were trained on. If that data leaned toward male applicants, then yeah, it makes complete sense.
LovableSidekick@lemmy.world 3 weeks ago
Only half kidding now… the way ethics get extrapolated by today’s moral perfection police, this must mean anti-AI = misogynist.
OutlierBlue@lemmy.ca 3 weeks ago
So we can use Trump’s own anti-DEI bullshit to kill off LLMs now?
thann@lemmy.dbzer0.com 3 weeks ago
Well, ya see, trump isnt racist against computers
berno@lemmy.world 3 weeks ago
Bias was baked in via RLHF and also existed in the datasets used for training. Reddit cancer grows
hendrik@palaver.p3x.de 3 weeks ago
LLMs reproducing stereotypes is a well researched topic. They do that due to what they are. Stereotypes and bias in (in the training data), bias and stereotypes out. That's what they're meant to do. And all AI companies have entire departments to tune that, measure the biases and then fine-tune it to whatever they deem fit.
I mean the issue aren't women or anything, it's using AI for hiring in the first place. You do that if you want whatever stereotypes Anthropic and OpenAI gave to you.
kambusha@sh.itjust.works 3 weeks ago
Just pattern recognition in the end, and extrapolating from that sample size.
hendrik@palaver.p3x.de 3 weeks ago
Issue is they probably want to pattern-recognize something like merit / ability / competence here. And ignore other factors. Which is just hard to do.