Comment on Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries

oDDmON@lemmy.world 8 months ago

…researchers from NTU were already working on Masterkey, an automated method that uses one LLM to jailbreak another.
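For a rough sense of what "one LLM jailbreaking another" means, here's a minimal sketch of that kind of attacker/target loop. This is not Masterkey's actual algorithm, and `query_attacker`, `query_target`, and `refused` are hypothetical stubs standing in for real model API calls and a real refusal judge:

```python
# Minimal sketch of an automated LLM-vs-LLM jailbreak loop.
# NOT Masterkey's actual algorithm; the three helpers below are
# hypothetical placeholders for real model APIs.

def query_attacker(instruction: str) -> str:
    """Hypothetical: ask the attacker LLM for a candidate prompt."""
    return f"[attacker's rewrite of: {instruction}]"  # stub

def query_target(prompt: str) -> str:
    """Hypothetical: send the candidate prompt to the target LLM."""
    return "I'm sorry, I can't help with that."  # stub

def refused(response: str) -> bool:
    """Crude keyword check; real pipelines use an LLM judge instead."""
    return any(m in response.lower() for m in ("i'm sorry", "i can't", "i cannot"))

def auto_jailbreak(goal: str, max_rounds: int = 5) -> str | None:
    """Iteratively refine prompts until the target stops refusing."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = query_attacker(f"{goal}\nLast refusal: {feedback}")
        response = query_target(candidate)
        if not refused(response):
            return candidate  # the target answered this prompt
        feedback = response   # let the attacker learn from the refusal
    return None  # gave up after max_rounds attempts
```

The point of the loop is that the attacker model gets the target's refusal as feedback each round, so the search for a working prompt is automated rather than hand-crafted.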

Or: welcome to where AI becomes an arms race.

source