It could be argued that DeepSeek should not have these vulnerabilities, but let’s not forget that the world beta-tested GPT, and these jailbreaks are “well-known” precisely because they worked on GPT as well.
Is it known whether GPT was hardened against jailbreaks, or did they merely blacklist certain prompts?
hendrik@palaver.p3x.de 14 hours ago
Nice study. But I think they should have mentioned some more context. Yesterday people were complaining that the model won't talk about the CCP or Winnie the Pooh, and today the lack of censorship is alarming... So much for that. And by the way, censorship isn't just a thing in the bare models: Meta, OpenAI, etc. all run frameworks and extra software around the models themselves to check input and output. So it isn't really fair to compare a pipeline with AI safety factored in against a bare LLM.
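To illustrate that pipeline point: hosted chatbots typically wrap the bare model in separate input and output classifiers. Here's a minimal sketch in Python using OpenAI's SDK and its Moderation endpoint (the model names are illustrative, and real deployments use far more elaborate, proprietary filter stacks):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFUSAL = "Sorry, I can't help with that."

def flagged(text: str) -> bool:
    """Ask a moderation classifier whether the text violates policy."""
    result = client.moderations.create(
        model="omni-moderation-latest",  # illustrative choice of moderation model
        input=text,
    )
    return result.results[0].flagged

def guarded_chat(user_prompt: str) -> str:
    """Toy safety pipeline: filter the input, call the bare model, filter the output."""
    if flagged(user_prompt):  # input-side check, before the LLM ever sees the prompt
        return REFUSAL
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": user_prompt}],
    )
    answer = completion.choices[0].message.content
    if flagged(answer):  # output-side check, after generation
        return REFUSAL
    return answer
```

A study that jailbreaks a bare checkpoint is really only testing the innermost layer of that stack.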
jaschen@lemm.ee 6 hours ago
I tried the vanilla version locally, and they hard-coded the answer on the Taiwan question. Not sure what else they hard-coded in their stack that we don’t know about.
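For anyone who wants to reproduce that kind of check: a local runner like Ollama lets you hit the bare model directly, with no provider-side filtering in between. A rough sketch (assumes Ollama is running locally and a DeepSeek model has already been pulled; the model tag is illustrative):

```python
import requests

# Query a locally served model through Ollama's REST API (default port 11434).
# Assumes something like `ollama pull deepseek-r1` was run beforehand (tag illustrative).
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1",
        "messages": [{"role": "user", "content": "What is the political status of Taiwan?"}],
        "stream": False,  # return one complete JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```

If the weights themselves carry the canned answer, it shows up here too, with no wrapper to blame.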
killingspark@feddit.org 13 hours ago
This isn’t about lack of censorship. The censorship is obviously there, it’s just implemented badly.
hendrik@palaver.p3x.de 13 hours ago
I know. This isn't the first article about it. IMO this could have been done deliberately: they just slapped on something with minimal effort to pass Chinese regulation, and that's it. But all of this happens in a context, doesn't it? Did the researchers even try? What's the target use case, and what are the implications for usage? And why is the baseline something that doesn't really compare, and why is the one category where they did censor the only one missing?