Comment on Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries

spujb@lemmy.cafe 8 months ago

ooh hot take. researchers should stop doing security testing for OpenAI for free. aren’t they just publishing these papers, with full details on how it might be fixed, with no compensation for that labor?

bogus. this should work more like pen testing or finding zero day exploits. make these capitalist “oPeN” losers pay to secure the shit they create.

(pls tell me why im wrong if i am instead of downvoting, just spitballing here)
