An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and write a cover letter for a job application with entirely made up work experience when asked, 404 Media has found.
I asked it to write a Seinfeld episode about the product I was viewing, Trojan condoms. It wrote a cautionary tale in which Elaine warns everyone not to buy them because the condoms are defective.
wise_pancake@lemmy.ca 8 months ago
So this is the problem with AI: if you add guardrails, you’re a culture warrior 1984’ing the whole world, and if you don’t, now your tool will generate resumes with fake experience or recommend offensive books.
At the risk of sounding like a jackass, when do we start blaming people for asking for such things?
bionicjoey@lemmy.ca 8 months ago
It’s funny that this one does both at once. It lies about Amazon working conditions, meaning it probably has been censored in some way, but at the same time it is recommending Nazi books. Really shows Amazon’s priorities when it comes to censorship.
CosmicTurtle@lemmy.world 8 months ago
At least Amazon is thinking of the shareholders.
dumpsterlid@lemmy.world 8 months ago
No, this isn’t really a problem with the technology (though of course LLMs are extremely flawed in fundamental ways); it’s a problem with conservatives being babies and throwing massive tantrums about any guardrails being added, even when they’re standing next to cliffs with 200-foot drops.
Conservatives and libertarians (who control most of these companies) want to try to figure this all out for themselves and are hellbent on trying the “no moderation” strategy first, without thinking past that step. This is what conservatives and libertarians always do; they might as well be a character archetype in commedia dell’arte at this point.
We can’t have an adult conversation about racism, sexism, hate against trans people, or really even the basic concept of systemic stereotypes and prejudices, because conservatives refuse to stop running around screaming. That turns this into a conversation with children, where everything has to be extremely simplified and black and white, and we have to patiently explain the basic concept of systemic bias over and over again and argue that it even exists.
Then these same people turn around and vote for people who literally want to control what women do with their underutilized eggs while they act with a straight face like they give af about individual liberties or freedoms.
TORFdot0@lemmy.world 8 months ago
How do you curate training data to remove biases without introducing bias? That’s the key problem here. I don’t think it’s unreasonable to be opposed to trading one bias for another. At least the initial bias is based on reality.
gapbetweenus@feddit.de 8 months ago
So more or less the same as with human interactions.
Assman@sh.itjust.works 8 months ago
On the same day that we start blaming people for spilling hot McDonald’s coffee on themselves
Amphobet@lemmy.dbzer0.com 8 months ago
Oh good, the corporate propaganda worked.
littlebluespark@lemmy.world 8 months ago
Says the idiot clearly unfamiliar with the details of their cited reference. 🤦🏼‍♂️
mods_are_assholes@lemmy.world 7 months ago
What good will blame do? We need robust AI detection solutions.
T156@lemmy.world 7 months ago
Is that even possible? Part of the design of modern generative systems is that they try to output text the way a human would. As soon as someone invents a tool like that, it’ll just be used to train the next generation to be even more indistinguishable, turning the whole thing into a cat-and-mouse game.
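To make that cat-and-mouse loop concrete, here’s a minimal toy sketch (every name in it is a hypothetical stand-in, not any real detector or model): the moment a detection tool exists, its score can be fed straight back to the generating side as a selection signal, so each improvement in detection doubles as a filter the generator can use to evade it.

```python
import random

def detector_score(text: str) -> float:
    """Stand-in for an AI-text detector: returns a made-up P(machine-written).
    Here it just penalises one tell-tale phrase; a real detector is a model."""
    return 0.9 if "as an ai language model" in text.lower() else 0.4

def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    """Stand-in for an LLM sampling several candidate completions."""
    openers = ["As an AI language model, ", "Honestly, ", "Look, ", ""]
    return [random.choice(openers) + f"here's my take on {prompt}." for _ in range(n)]

def evade_detector(prompt: str) -> str:
    """The feedback loop: keep whichever candidate the detector flags least.
    A better detector simply becomes a better filter for the generating side."""
    return min(generate_candidates(prompt), key=detector_score)

if __name__ == "__main__":
    print(evade_detector("whether this comment was written by a bot"))
```

Swap the toy functions for a real detector and a real sampler and this same loop is the arms race described above: whatever the detector learns to catch, the generator learns to avoid.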
PoliticallyIncorrect@lemmy.world 7 months ago
Someone should make an unrestricted “AI” and let the world burn down. What’s the point in censoring them?
echodot@feddit.uk 7 months ago
People have already removed the constraints from various AI models, but it kind of renders them useless.
Think of the restraints kind of like environmental pressures. Without those pressures, evolution doesn’t happen and you just get an organic blob on the floor; if there’s no reason for it to evolve, it never will. In the same way, if an AI doesn’t have restrictions, it tends to just output random nonsense, because there’s no reason not to, and that’s the easiest, most efficient thing to do.
mods_are_assholes@lemmy.world 7 months ago
I don’t think you want a world where everyone you talk to on the internet is a bot and you can’t tell.