Comment on The job applicants shut out by AI: ‘The interviewer sounded like Siri’
deweydecibel@lemmy.world 8 months ago
And when they got on the phone, Ty assumed the recruiter, who introduced herself as Jaime, was human. But things got robotic.
If regulators are trying to come up with AI regulations, this is where you start.
It should be a law that no LLM/“AI” is allowed to pass itself off as human. They must always state, up front, what they are. No exceptions.
PM_Your_Nudes_Please@lemmy.world 8 months ago
I would argue that AI also shouldn’t be allowed to make legally binding decisions, like deciding who to hire. Since a computer can’t be held accountable for its decisions, there’s nothing stopping it from blatantly discriminating.
flumph@programming.dev 8 months ago
It should be illegal to use an AI in the hiring process that can’t explain its decisions accurately. There’s too much of a risk of bias in training data to empower a black box system. ChatGPT can lie, so anything powered by it is out.
NoRodent@lemmy.world 8 months ago
They also should not harm a human being or, through inaction, allow a human being to come to harm.
Fiivemacs@lemmy.ca 8 months ago
Yes. I assume anyone from a company is a bot right out of the gate.
admin@lemmy.my-box.dev 8 months ago
If I’m not mistaken, this is one of the core tenets of the EU AI act.
maynarkh@feddit.nl 8 months ago
Since the GDPR, companies have been required to give you a detailed breakdown of why an AI rejected you, if the final decision rests with the AI. I’m not sure how many companies are complying, though; it’s hard to enforce.
admin@lemmy.my-box.dev 8 months ago
Huh? GDPR is about your rights to your personal data, not the algorithms that act upon them.
maynarkh@feddit.nl 8 months ago
Article 22 GDPR: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
There is a carve-out if the decision “is necessary for entering into, or performance of, a contract between the data subject and a data controller”, though nobody seems sure what that means, and it has not been tested in court.