Comment on OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
Grimy@lemmy.world 3 months ago
They already got rid of the loophole a long time ago. It’s a good thing tbh since half the people using local models are doing it because OpenAI won’t let them do dirty roleplay. It’s strengthening their competition and showing why these closed models are such a bad idea, I’m all for it.
felixwhynot@lemmy.world 3 months ago
Did they really? Do you mean specifically that phrase or are you saying it’s not currently possible to jailbreak chatGPT?
Grimy@lemmy.world 3 months ago
They usually take care of a jailbreak the week its made public.