Comment on OpenAI's 'Jailbreak-Proof' New Models? Hacked on Day One

Gullible@sh.itjust.works 14 hours ago
They want people to try. It’s independent bug testing that costs only as much as publishing an article on a website.

theunknownmuncher@lemmy.world 14 hours ago
That’s not really compelling, because people would try regardless.
Oisteink@feddit.nl 13 hours ago
They have a 500k bounty for jailbreaks.
floo@retrolemmy.com 14 hours ago
They have open beta programs for that, which don’t require telling hilarious, bald-faced lies that end up embarrassing them.
Feyd@programming.dev 12 hours ago
“AI” has a massive inability (or is purposefully deceptive about its ability) to distinguish between bugs, which can be fixed, and fundamental aspects of the technology that disqualify it from various applications.
I think the more likely story is that they know this can be done, know about this particular jailbreaker, can replicate their work (because they didn’t do anything that hadn’t already been done to previous models), and are straight-up lying, betting that the people who matter to their next investment round (scam continuation) won’t catch wind.
You’re giving these grifters way too much credit.