
The Economist on using phrenology for hiring and lending decisions: "Some might argue that face-based analysis is more meritocratic" […] "For people without access to credit, that could be a blessing"

412 likes

Submitted 2 weeks ago by cypherpunks@lemmy.ml to technology@lemmy.world

https://lemmy.ml/pictrs/image/14f5a0b4-82b6-498d-a689-36c2824dd296.png

cross-posted from: lemmy.ml/post/38830374

screenshot of text “Imagine appearing for a job interview and, without saying a single word, being told that you are not getting the role because your face didn’t fit. You would assume discrimination, and might even contemplate litigation. But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance at work? That question is at the heart of a recent research”

[…]

screenshot of text “a shorter one. Some might argue that face-based analysis is more meritocratic than processes which reward, say, educational attainment. Kelly Shue of the Yale School of Management, one of the new paper’s authors, says they are now looking at whether AI facial analysis can give lenders useful clues about a person’s propensity to repay loans. For people without access to credit, that could be a blessing.”

tweet

economist article

archive.is paywall bypass

en.wikipedia.org/wiki/Phrenology

source

Comments

  • bismuthbob@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

    Wow. If a black box analysis of arbitrary facial characteristics is more meritocratic than the status quo, that speaks volumes about the nightmare hellscape shitshow of policy and procedure that resides behind the current set of ‘metrics’ being used.

    source
    • UnderpantsWeevil@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      The gamification of hiring is largely a result of businesses de-institutionalizing Human Resources. If you were hired on at a company like Exxon or IBM in the 1980s, there was an enormous professionalized team dedicated to sourcing prospective hires, vetting them, and negotiating their employment.

      Now, we’ve automated so much of the process and gutted so much of the actual professionalized vetting and onboarding that it’s a total crapshoot as to who you’re getting. Applicants aren’t trying to impress a recruiter, they’re just aiming to win the keyword search lottery. Businesses aren’t looking to cultivate talent long term, just fill contract positions at below-contractor rates.

      So we get an influx of pseudo-science to substitute for what had been a real sociological science of hiring. People promising quick and easy answers to complex and difficult questions, on the premise that they can accelerate the churn of staff without driving up cost of doing business.

      source
      • bismuthbob@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

        Gotcha. This is replacing one nonsense black box with a different one, then. That makes a depressing kind of sense. No evidence needed, either!

        source
    • bismuthbob@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

      All of that being typed, I’m aware that the ‘If’ in my initial response is doing the same amount of heavy lifting as the ‘Some might argue’ in the article. Barring the revelation of some truly extraordinary evidence, I don’t accept the premise.

      source
    • AcidiclyBasicGlitch@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      Spoken like somebody with the sloping brow of a common criminal.

      source
      • scratchee@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

        I really must commend you for overcoming your natural murderous inclinations and managing to become a useful member of society despite the depression in your frontal lobe. Keep resisting those dark temptations!

        source
    • technocrit@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      A primary application of “AI” is providing blackboxes that enable the extremely privileged to wield arbitrary control with impunity.

      source
    • ZILtoid1991@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Because HR is already using “phrenology”.

      source
  • psycotica0@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    "Imagine appearing for a job interview and, without saying a single word, being told that you are not getting the role because your face didn’t fit. You would assume discrimination, and might even contemplate litigation. But what if bias was not the reason?

    Uh… guys…

    Discrimination: the act, practice, or an instance of unfairly treating a person or group differently from other people or groups on a class or categorical basis

    Prejudice: an adverse opinion or leaning formed without just grounds or before sufficient knowledge

    Bias: to give a settled and often prejudiced outlook to

    Judging someone’s ability without knowing them, based solely on their appearance, is, like, kinda the definition of bias, discrimination, and prejudice. I think their stupid angle is “it’s not unfair because what if this time it really worked though!” 😅

    I know this is the point, but there’s no way this could possibly end up with anything other than a lazily written, comically clichéd, Sci Fi future where there’s an underclass of like “class gammas” who have gamma face, and then the betas that blah blah. Whereas the alphas are the most perfect ughhhhh. It’s not even a huge leap; it’s fucking inevitable. That’s the outcome of this.

    I should watch Gattaca again…

    source
    • Tattorack@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Like every corporate entity, they’re trying to redefine what those words mean. See, it’s not “insufficient knowledge” if they’re using an AI powered facial recognition program to get an objective prediction, right? Right?

      source
      • JackbyDev@programming.dev ⁨2⁩ ⁨weeks⁩ ago

        The most generous thing I can think is that facial structure is not a protected class in the US, so it’s technically okay to discriminate against.

        source
    • morriscox@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      People see me in cargo pants, polo shirt, a smartphone in my shirt pocket, and sometimes tech stuff in my (cargo) pants pockets and they assume that I am good at computers. I have an IT background and have been on the Internet since March of 1993 so they are correct. I call it the tech support uniform. However, people could dress similarly to try to fool people.

      People will find ways, maybe makeup and prosthetics or AI modifications, to try to fool this system. Maybe they will learn to fake emotions. This system is a tool, not a solution.

      source
      • MalReynolds@slrpnk.net ⁨2⁩ ⁨weeks⁩ ago

        Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure”

        TLDR as soon as you have a system like this people will game it…

        source
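MalReynolds’ Goodhart point can be sketched numerically. This is a toy model with invented numbers, not anything from the paper: a proxy score loosely tracks true quality until applicants start optimizing the proxy directly, at which point it saturates and decouples from the thing it was supposed to measure.

```python
import random

random.seed(3)

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Before gaming: the proxy (say, a groomed profile photo score) loosely
# tracks true quality, so screening on it sort of works.
quality = [random.random() for _ in range(4000)]
proxy_honest = [q + random.gauss(0, 0.2) for q in quality]

# After gaming: everyone optimizes the proxy directly (coaching, filters,
# AI-polished photos), so it saturates near the maximum and no longer
# carries any information about quality.
proxy_gamed = [1.0 - abs(random.gauss(0, 0.05)) for _ in quality]

r_before = corr(proxy_honest, quality)  # substantial positive correlation
r_after = corr(proxy_gamed, quality)    # near zero
```

The measure was only ever useful while nobody was targeting it.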
      • HaraldvonBlauzahn@feddit.org ⁨2⁩ ⁨weeks⁩ ago

        Yeah, but is it useful to rob the Mona Lisa?

        source
    • WhyJiffie@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      I think their stupid angle is “it’s not unfair because what if this time it really worked though!”

      I think their angle is “it’s not unfair because the computer says it!”. Automated bias. Offloading liability to an AI.

      source
  • panda_abyss@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    Racial profiling keeps getting reinvented.

    Fuck that.

    source
    • GenderNeutralBro@lemmy.sdf.org ⁨2⁩ ⁨weeks⁩ ago

      The problem here is education.

      And I’m not just talking about “average joes” who don’t know the first thing about statistics. It is mind-boggling how many people with advanced degrees do not understand the difference between correlation and causation, and will argue until they’re blue in the face that it doesn’t affect results.

      AI is not helping. Modern machine learning is basically a correlation engine with no concept of causation. The idea of using it to predict the future is dead on arrival. The idea of using it in any prescriptive role in social sciences is grotesque; it will never be more than a violation of human dignity.

      Billions upon billions of dollars are being invested in putting lipstick on that pig. At this point it is more lipstick than pig.

      source
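GenderNeutralBro’s correlation-vs-causation point can be made concrete with a toy sketch (all variables and numbers are invented for illustration): a confounder drives both an observable trait and the outcome, so a correlation engine sees the trait “predict” the outcome, right up until you intervene on the confounder and the prediction evaporates.

```python
import random

random.seed(1)

def make_person(wealth=None):
    # Family wealth (the confounder) drives both an observable trait
    # and the labor outcome; the trait itself causes nothing.
    w = random.random() if wealth is None else wealth
    trait = w + random.gauss(0, 0.1)    # e.g. a polished headshot
    outcome = w + random.gauss(0, 0.1)  # e.g. salary
    return trait, outcome

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Observational data: the trait correlates strongly with the outcome.
traits, outcomes = zip(*[make_person() for _ in range(5000)])
r_observed = corr(traits, outcomes)

# Intervention: hold wealth fixed; only the trait's own noise varies.
t2, o2 = zip(*[make_person(wealth=0.5) for _ in range(5000)])
r_intervened = corr(t2, o2)  # near zero: the trait never caused anything
```

A model trained on the observational data would happily score faces, and every point of its accuracy would be the confounder in disguise.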
    • Jason2357@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      I can’t imagine a model trained like this /not/ ending up encoding a bunch of features that correlate with race. It will find the white people, then reward itself as the group does statistically better.

      source
      • CheeseNoodle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Even a genuinely perfect model would immediately skew to bias; the moment some statistical fluke gets incorporated into the training data it becomes self-reinforcing, and it’ll create and then reinforce that bias in a feedback loop.

        source
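CheeseNoodle’s feedback loop can be simulated in a few lines. This is a deliberately crude toy, not a claim about any real system: two groups with identical true ability, one small fluke in the initial data, and a model that is retrained on its own hiring decisions.

```python
import random

random.seed(0)

def simulate(rounds=10, fluke=0.1):
    # The model's learned "group prior" starts with a tiny fluke favoring A,
    # even though both groups' true ability distributions are identical.
    score = {"A": 0.5 + fluke, "B": 0.5}
    shares = {"A": [], "B": []}
    for _ in range(rounds):
        # 100 applicants per group; the model ranks by its learned prior
        # plus a little per-applicant noise, then "hires" the top 50.
        pool = [(g, score[g] + random.gauss(0, 0.05))
                for g in ("A", "B") for _ in range(100)]
        hired = sorted(pool, key=lambda p: p[1], reverse=True)[:50]
        for g in ("A", "B"):
            share = sum(1 for h in hired if h[0] == g) / 50
            shares[g].append(share)
            # "Retraining": the prior drifts toward the model's own output,
            # so the fluke is confirmed instead of washing out.
            score[g] = 0.7 * score[g] + 0.3 * share
    return shares

shares = simulate()
# Group A dominates hiring in every round; the initial fluke is locked in
# and magnified by the loop, with zero underlying ability difference.
```

Swap in any real retraining pipeline and the mechanism is the same: the model’s outputs become its own evidence.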
    • ssladam@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Exactly. It’s like saying that since every president has been over 6’ tall we should only allow tall people to run for president.

      source
  • blubfisch@discuss.tchncs.de ⁨2⁩ ⁨weeks⁩ ago

    Cool. Literal Nazi shit, but now with AI 😵‍💫

    source
    • BreadstickNinja@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Basically the slogan for the 2020s

      source
    • Krauerking@lemy.lol ⁨2⁩ ⁨weeks⁩ ago

      Yeah but it’s cool cause some rich white guy taught the computer to be racist for him, so you can’t complain.

      source
    • mic_check_one_two@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      Cool. Literal Nazi shit, still powered by IBM.

      source
  • stickly@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Image

    source
    • DudeImMacGyver@kbin.earth ⁨2⁩ ⁨weeks⁩ ago

      Well, that sounds insane.

      source
  • AmbitiousProcess@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    The study claims that they analyzed participants’ labor market outcomes, that being earnings and propensity to move jobs, “among other things.”

    Fun fact, did you know white men tend to get paid more than black men for the same job, with the same experience and education?

    Following that logic, if we took a dataset of both black and white men, then used their labor market outcomes to judge which one would be a good fit over another, white men would have higher earnings and be recommended for a job more than black people.

    Black workers are also more likely to switch jobs, one of the reasons likely being because you tend to experience higher salary growth when moving jobs every 2-3 years than when you stay with a given company, which is necessary if you’re already being paid lower wages than your white counterparts.

    By this study’s methodology, that person could be deemed “unreliable” because they often switch jobs, and would then not be considered.

    Essentially, this is a black box that gets to excuse management saying “fuck all black people, we only want to hire whites” while sounding all smart and fancy.

    source
    • shawn1122@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      The goal here is to go back to a world where such racial hierarchies are accepted, but without human accountability. This way you are subjugated arbitrarily, but hey, the computer said so, so what can we do about it?

      source
  • AbidanYre@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Not April fool’s or the onion? What the fuck?

    source
    • tourist@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      The Economist has a tendency to put out articles seemingly designed to make conservatives bust nuts through their trousers at mach 4

      Is Lucifer’s Poison Ivy destroying the fabric of civilization as we know it?

      source
  • skisnow@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance?

    I hate this so much, because spouting statistics is the number one go-to of idiot racists and other bigots trying to justify their prejudices. The whole fucking point is that judging someone’s value based on physical attributes outside their control is fucking evil, and increasing the accuracy of your algorithm only makes it all the more insidious.

    The Economist has never been shy to post some questionable kneejerk shit in the past, but this is approaching a low even for them. Not only do they give the concept credibility, but they’re even going out of their way to dishonestly paint it as some sort of progressive boon for the poor.

    source
    • mic_check_one_two@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance we just agreed to redefine “bias” as something else, despite this fitting the definition of the word perfectly, just so I can claim this isn’t biased?

      source
  • ViatorOmnium@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    Does it predict people who allegedly finished university not knowing the difference between correlation and causality?

    This reminds me of a fraud risk classification model I once heard about, which ended up being an excellent income-by-postal-code classifier.

    source
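The postal-code anecdote is easy to reproduce in miniature (the codes and rates below are hypothetical, invented for illustration): give a “fraud” model nothing but a postal code whose historical flag rate tracks neighborhood income, and the rule it learns is just an income lookup.

```python
import random

random.seed(2)

# Made-up postal codes with made-up neighborhood income levels (0..1).
incomes = {"10001": 0.9, "10002": 0.6, "10003": 0.3}

def person():
    code = random.choice(list(incomes))
    # In the historical data, lower income -> more likely to be flagged.
    flagged = random.random() > incomes[code]
    return code, flagged

train = [person() for _ in range(3000)]

# "Model": empirical flag rate per postal code, thresholded at 0.5.
total = {c: 0 for c in incomes}
hits = {c: 0 for c in incomes}
for code, flagged in train:
    total[code] += 1
    hits[code] += flagged
model = {c: hits[c] / total[c] > 0.5 for c in incomes}

# The learned rule flags exactly the low-income code and clears the
# high-income ones: an income-by-postal-code classifier, as described,
# with no fraud behavior anywhere in the features.
```

Nothing in the feature set is about behavior; the model is “accurate” only because the label itself encodes income.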
    • UnderpantsWeevil@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      It predicts people with business school degrees getting six, seven, and eight figure salaries to blow smoke up the asses of the investor pool.

      This reminds me of a fraud risk classification model I once heard about, which ended up being an excellent income-by-postal-code classifier.

      The dark art of sociology is recognizing how poverty impacts human behaviors and then calibrating your business to profit off it.

      source
      • pdxfed@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Sociology is one of the few majors that Florida is trying to cut from public school programs. They apparently think it radicalizes people to educate them about the way the world works.

        source
  • neidu3@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    This is just phrenology with extra steps

    source
    • ivanafterall@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      They barely even added extra steps.

      source
  • knatschus@discuss.tchncs.de ⁨2⁩ ⁨weeks⁩ ago

    I remember when stuff like this was used to show how dystopian china is.

    source
    • AcidiclyBasicGlitch@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      Haven’t you heard? Palantir CEO Says a Surveillance State Is Preferable to China Winning the AI Race.

      Trump’s current Science Advisor gave an interview back in ~2019 where he kept insisting the U.S. was at a disadvantage to China bc we didn’t have access to the level of surveillance data China had (which it turns out we fucking created and sold to them). He also used this point to argue against any regulations for facial recognition tech because again, it would put us at a disadvantage.

      But don’t worry, because the goal is to have an authoritarian surveillance state with “baked in American values,” so we won’t have to worry about ending up like China did with the surveillance tools we fucking sold them.

      I’m not sure what values he’s claiming will be somehow baked into it (because again, we created it and sold it to China). My mind conjures up a scenario of automatic weapons and a speaker playing a screeching bald eagle, but maybe we’ll get some Star-Spangled Banner thrown in there too.

      source
    • cypherpunks@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      I haven’t heard of academics and/or media from China advocating for applications of phrenology/physiognomy or other related racist pseudosciences. Have you?

      source
  • wilfim@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    This is so absurd I almost feel it isn’t real. But indeed, the article appears when I look it up.

    source
    • 0x0@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      It’s very nazi Germany real actually.

      source
      • 6nk06@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        I always pity the Germans, who don’t deserve this but have carried this shame since the war, and it’s worse now that Nazis have become an international club.

        source
  • Tattorack@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Wow, we skipped right from diversity hiring to phrenology hiring without missing a single beat. Boy, has the modern world become efficient.

    source
    • zartemie@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      At least high variance means the possibility of an opposite swing (this is cope)

      source
  • umbraroze@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    Boeing CEO: “We’re always innovating, and sometimes we need to boldly embrace the wisdom of the past if it can be re-examined in light of current technology. From now on, our airplane navigation systems will be based on the Flat Earth model. This makes navigation so much more computationally efficient, guys.”

    source
    • Reverendender@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      So, the US Supreme Court model then.

      source
  • entwine@programming.dev ⁨2⁩ ⁨weeks⁩ ago

    This fascist wave is really bringing out all the cockroaches in our society. It’s a good thing you can’t erase anything on the internet, as this type of evidence will probably be useful in the future.

    You’d better get in on a crypto grift, Kelly Shue of the Yale School of Management. I suspect you’ll have a hard time finding work within the next 1-3 years.

    source
    • 3abas@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      They absolutely can erase things on the internet, are you archiving this for when the other archives die? Are you gonna be able to share it when the time comes? And will anyone care?

      source
      • valek879@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        I have some spare storage! What I want to do with it is archive very important information, documents, and/or scientific papers. I don’t mind if it’s the same shit others have; I just want to be part of retaining information. I’m trans, and last time fascists were in power we lost 100 years of progress towards being able to exist openly, so I’m pretty eager to archive information.

        Either this, or it’d be cool to be part of a decentralized database that is searchable and readable.

        I could probably find somewhere between 1-10 TB to donate to the cause in perpetuity. But I don’t know how to do this myself, what to save, or if there are groups already doing this type of thing.

        source
  • verdi@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    FYI, it’s not a paper, it’s a blog post from well-connected and presumably highly educated people benefiting from institutional prestige to see their poorly conducted study propagated ad aeternum without a modicum of relevant peer review.

    source
    • AwesomeLowlander@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      It’s pretty reliable, it detected the authors to be psychos

      source
  • oce@jlai.lu ⁨2⁩ ⁨weeks⁩ ago

    I looked for the original article, abstract:

    Human capital—encompassing cognitive skills and personality traits—is critical for labor market success, yet the personality component remains difficult to measure at scale. Leveraging advances in artificial intelligence and comprehensive LinkedIn data, we extract the Big 5 personality traits from facial images of 96,000 MBA graduates, and demonstrate that this novel “Photo Big 5” predicts school rank, compensation, job seniority, industry choice, job transitions, and career advancement. Using administrative records from top-tier MBA programs, we find that the Photo Big 5 exhibits only modest correlations with cognitive measures like GPA and standardized test scores, yet offers comparable incremental predictive power for labor outcomes. Unlike traditional survey-based personality measures, the Photo Big 5 is readily accessible and potentially less susceptible to manipulation, making it suitable for wide adoption in academic research and hiring processes. However, its use in labor market screening raises ethical concerns regarding statistical discrimination and individual autonomy.

    The PDF is downloadable here: scholar.google.com/citations?view_op=view_citatio…

    I don’t have the time nor the expertise to read everything to understand how they take into account the bias that good looking white men with educated parents are way more likely to succeed at life.

    source
    • cypherpunks@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

      one can also get the full paper directly from yale here without needing to solve a google captcha:

      …yale.edu/…/AI Personality Extraction from Faces …

      I don’t have the time nor the expertise to read everything to understand how they take into account the bias that good looking white men with educated parents are way more likely to succeed at life.

      i admittedly did not read the entire 61 pages but i read enough to answer this:

      spoiler

      they don’t

      source
      • underisk@lemmy.ml ⁨2⁩ ⁨weeks⁩ ago

        Lmao they source the photos from LinkedIn profiles. I’m sure that didn’t bias their training at all. Yes sir there’s no chance this thing is selecting for anything but facial features.

        source
    • Maeve@kbin.earth ⁨2⁩ ⁨weeks⁩ ago

      How many times has Big 5 been debunked yet employers still like it for reasons?

      source
    • loonsun@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      Hi, Organizational Psychologist here who works in a lab alongside my personality Psych research supervisor. I have never heard of this method, never read this in any of the literature, and am genuinely disgusted anyone got ethics approval to run this study. The only way we use ML to evaluate personality is using natural language, due to the tenets of the lexical hypothesis. I have never seen anyone attempt to create a big five measure from facial recognition, nor have I ever heard of any theoretical models for personality based on faces… except for phrenology.

      source
    • usualsuspect191@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      I’m wondering if things like FAS (which can have certain facial characteristics) are muddling the results as well.

      source
  • buttnugget@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Actually, what if slavery wasn’t such a bad idea after all? Lmao they never stop trying to resurrect class warfare and gatekeeping.

    source
  • gian@lemmy.grys.it ⁨2⁩ ⁨weeks⁩ ago

    Last time did not end well for about 6 million people…

    source
  • _cnt0@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    Race theory 2.0 AI edition just dropped.

    source
  • technocrit@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

    It’s completely normal for fascists to promote pseudo-science.

    Indeed their publication is named after one of the worst pseudo-sciences.

    source
  • Boppel@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    “okay, okay, hear me out: what if nazi methods, but for getting a job. we could even tattoo their number on their arms. it’s only logical, we already divide by skin colour”

    WTF

    source
  • cabbage@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    Whatever it takes to keep hiring mediocre white men, I guess.

    source
  • uriel238@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

    I thought phrenology was still a science at the time of the German Reich, only made defunct later. Now I have my doubts.

    Social Darwinism was disproven in the 1900s and supply-side economics died in the 19th century, so it’s not like pseudoscience doesn’t spring up like weeds when rich people want to sponsor it.

    source
  • pelespirit@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

    Everyone is kind of focusing on the hiring part, which is incredibly nazi already, but they’re saying for lending too. Fucking yikes.

    source
  • JustJack23@slrpnk.net ⁨2⁩ ⁨weeks⁩ ago

    How long before they start measuring skulls at job interviews?

    source
  • humanspiral@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    Dystopian neutrality in the article.

    without discriminating on grounds of protected characteristics

    AI classification is trained with supervised learning (the right answers are predetermined). MechaHitler, for the sake of fascist nationalism, will rate Obama’s face as a poor employee, and Trump’s as the best employee.

    Open training data sets would be subject to 1. zero competitive advantage for a model, 2. massive complaints about any specific training data.

    For some jobs, psychopathy AND loyalty are desirable traits, even though they can be opposite traits. Honesty, integrity, and intelligence can be desirable traits, or obstacles to desperate loyalty. My point is that if there are many traits determined by faces, much more training data is needed to detect them. And then a human hiring decision is based on 10 or 30 traits matched to an (impossibly unique) position, where the direct manager only cares about loyalty without too much talent, but higher-level managers might prefer a candidate with the potential to replace their direct manager, while all of them care about race or pregnancy risk, and then post-training based on some “illegal characteristics.”

    A Gattaca situation, where everyone either has an easy time getting a great job and moving to a greater one, OR is shut out of all jobs, creates a self-contradicting prediction on “loyalty/desperation” controllability traits. If job duties are changed to include blow job services, then surely the agreeable make a better employee, despite any facial tics responding to the suggestion.

    Human silent “illegal discrimination” is not eliminated or changed, but the new capability, using a computer to do the interviewing and waste more interviewees’ time at no human cost to the employer, is why this will be a success. A warehousing company recently looked at facial expressions to determine attention to safety, and this leads to “The AI punishments to your life will continue until you smile more.” Elysium’s automated parole interview is a good overview of the dystopia.

    source
  • pyre@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    this should be grounds for a prison sentence. open support for Nazism shouldn’t be covered by free speech laws.

    source
  • Brutticus@midwest.social ⁨2⁩ ⁨weeks⁩ ago

    Why stop there? Why just banks and hiring firms? Why not give access to law enforcement and use the phrenology robot to screen for pre-crime?

    source
  • ICastFist@programming.dev ⁨2⁩ ⁨weeks⁩ ago

    Yeah, nothing says “this person will repay their loans” like looking at their face and nothing fucking else.

    I love how in Portuguese you can just call it “capetalismo”; capeta = devil.

    source