The examples show that the AI is vulnerable to prompt injection. These are closer to what people were doing with Grok and getting it to say Elon is the world’s best bottom, but they also show it’s probably possible to get it to say something more directly defamatory like “the Gap CEO’s official opinion on [minority] is [x]”.
Comment on Gap's new AI provides disturbing replies
boobs_@lemmy.world 20 hours ago
I really don’t want to make an account so I made use of vxtwitter to get the screenshot from the embed. The bottom of the text window has the mostly cropped text
These examples are meant to reflect the types of conversations that might occur in a
It reads like the AI was prompted to give examples of racist behavior and then it gave examples of racist behavior, likely ending with the context that it’s racist behavior at the end of that sentence. I don’t like AI but I don’t think this is it chiefs
AIGuardrails@lemmy.world 20 hours ago
Why should a Fortune 500 company have an AI that can be prompted to give examples of racist behavior?
If I asked a customer service agent “give me examples of you being a racist” and they did it, that person would be fired.
Why is the bar lower for AI?
boobs_@lemmy.world 20 hours ago
That’s not even the point stated in the original post though. Calling it simply “racist content” or “extremist references” (???) is extremely misleading. There’s significant additional context being left out. It didn’t give it unprompted and it didn’t give it for racist reasons either. The bar to show and demonstrate how horrible AI is is already underground, there’s no need to try to do it in a misleading clickbait way
AIGuardrails@lemmy.world 18 hours ago
You bring up good points. I understand the nuance you are describing. Though providing instructions on how to shoot a gun = Gap should never be saying to customers in any situation.
okwhateverdude@lemmy.world 19 hours ago
fr. It is more shocking that they didn’t even bother to properly agentify the chatbot with tools so it can do store look up or inventory availability, you know, the things that people coming to the damn website might want. The lack of guardrails on conversation topics is just icing on the cake: they didn’t even bother to think of ways to limit their liability or even funnel customers into giving them money. That said I’m going to guess the websites for brick and mortar are delegated to marketing firms which sub-contract out the work. Client wanted AI. Client got AI.
AIGuardrails@lemmy.world 18 hours ago
Agreed with you. Lack of guardrails is just icing on the cake to lack of functionality to help customers.
okwhateverdude@lemmy.world 20 hours ago
I don’t like AI but I don’t think this is it
chiefschieves
boobs_@lemmy.world 20 hours ago
Not sure if you’re just goofin but every source I found said chiefs as plural en.wiktionary.org/wiki/chief
okwhateverdude@lemmy.world 19 hours ago
100% goofin 😎
Denjin@feddit.uk 17 hours ago
Good gooves
Denjin@feddit.uk 20 hours ago
I don’t think it’s about outrage at what’s it’s producing. I think the original twitter post is more about the strange and somewhat inappropriate things you can get the chatbot to respond with, but that it can’t do the things you would normally expect from something run by a clothes shop.
Image
boobs_@lemmy.world 19 hours ago
That’s a totally fair criticism of it but I definitely didn’t read that intention from the poster. Your screenshot example is very funny though. It’s obvious imo this was very much just an “oh shit we need AI cause everyone is doing AI” decision by higher ups that didn’t really care about anything except being able to say they have AI now
AIGuardrails@lemmy.world 18 hours ago
Completely agreed.
AIGuardrails@lemmy.world 18 hours ago
You got it.