Comment on Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation
LazaroFilm@lemmy.world 11 months ago
Can’t they have a layer screening prompts before sending it to their model?
Yeah, but it turns into a Scunthorpe problem.
There’s always some new way to break it.
Well that’s an easy problem to solve by not being a useless programmer.
You’d think so, but it’s just not. Pretend “Gamer” is a slur. I can type it “G A M E R”, I can type it “GAm3r”, I can type it “GMR”, and I can mix and match. It’s a never-ending battle.
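For illustration, here’s roughly what a naive blocklist looks like and why the obfuscated spellings walk right past it (a toy Python sketch; “gamer” stands in for the banned word and the regex is made up for the example):

```python
import re

# Toy filter: block the stand-in "slur" with a simple word-boundary regex.
BANNED = re.compile(r"\bgamer\b", re.IGNORECASE)

def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked."""
    return bool(BANNED.search(text))

# The obvious spelling gets caught...
print(naive_filter("what a gamer"))      # True
# ...but trivial obfuscations all slip through.
print(naive_filter("what a G A M E R"))  # False
print(naive_filter("what a GAm3r"))      # False
print(naive_filter("what a GMR"))        # False
```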
That’s because regular expressions are a terrible way to try to solve the problem. You don’t do exact string matching; you do probabilistic pattern matching. If the probability of a match exceeds a preset threshold, you block it, and you adjust that threshold based on how often the pattern turns up in your data set. Then it’s just a matter of massaging your probability values.
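Roughly what that looks like (a loose sketch: difflib’s similarity ratio stands in for a real probabilistic model, and the threshold value is arbitrary):

```python
from difflib import SequenceMatcher

BANNED_TERMS = ["gamer"]   # stand-in banned word from the example above
THRESHOLD = 0.7            # arbitrary; would be tuned against your own data set

def similarity(a: str, b: str) -> float:
    """Cheap stand-in similarity score between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def probabilistic_filter(text: str) -> bool:
    """Block if any token is 'close enough' to a banned term."""
    # Strip punctuation so "g.a.m.e.r"-style noise doesn't help, keep letters/digits/spaces.
    normalized = "".join(ch for ch in text if ch.isalnum() or ch.isspace())
    for token in normalized.split():
        for term in BANNED_TERMS:
            if similarity(token, term) >= THRESHOLD:
                return True
    return False

print(probabilistic_filter("what a GAm3r"))      # True: close match despite the digit
print(probabilistic_filter("what a G A M E R"))  # False: spacing still defeats tokenization
```

Even then, the spaced-out spelling still slips past the tokenizer, which is the never-ending-battle part.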
A useless comment by a useless person who’s never touched code in their life.
They’ll need another AI to screen what you tell the original AI. And at some point they will need another AI that protects the guardian AI from malicious input.
It’s AI all the way down
EmergMemeHologram@startrek.website 11 months ago
Yes, and that’s how this gets flagged as a TOS violation now.