No. What you see is everything.
Comment on ChatGPT advises women to ask for lower salaries, study finds
mrslt@lemmy.world 8 months ago
How is “threat” being defined in this context? What does the AI interpret as a “threat”?
napkin2020@sh.itjust.works 8 months ago
mrslt@lemmy.world 8 months ago
I figured. I’m just wondering what’s going on under the hood of the LLM when it tries to decide what a “threat” is, absent any additional context.
pinball_wizard@lemmy.zip 8 months ago
Haha. Trained in racism is going on under the hood.
zlatko@programming.dev 8 months ago
Also, there was a comment about “arbitrary scoring for demo purposes,” but the output is still biased, because it’s based on a biased dataset.
I guess this is just a bait prompt anyway. If you asked most politicians running your government, they’d probably also fail. Only people like a national statistics office might come close, and if they’re any good, I’m sure they’d say the algorithm is based on “limited, and possibly not representative, data” or something like that.
napkin2020@sh.itjust.works 8 months ago
I also like the touch that only the race part gets the apologetic comment.