Comment on Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries

Signtist@lemm.ee 6 months ago

Oh, I’m sure they’ll patch anything that gets exposed, absolutely. But that’s just it: there are already several examples of people using AI to do non-brand-friendly stuff, yet all the developers have to do is say “whoops, patched” and everyone’s fine. They have no incentive to pay people to catch these issues early; they can just wait until a PR problem happens, patch whatever caused it, and move on.
