Comment on OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole

<- View Parent
felixwhynot@lemmy.world ⁨3⁩ ⁨months⁩ ago

Did they really? Do you mean specifically that phrase or are you saying it’s not currently possible to jailbreak chatGPT?

source
Sort:hotnewtop