I mean, it’s fundamental to LLM technology that they listen to user inputs. Those inputs have a probabilistic effect on the outputs, so you’re always going to be able to manipulate the outputs; that’s kind of the premise of the technology.
It will always be prone to that sort of jailbreak. Feed it vocab, it outputs vocab. Feed it permissive vocab, it outputs permissive vocab.
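To make that concrete, here’s a minimal sketch (assuming GPT-2 via Hugging Face transformers purely as a stand-in model, not the model or jailbreak being discussed) showing that the next-token distribution shifts with the framing of the prompt, which is all that kind of jailbreak really exploits:

```python
# Minimal sketch, assuming GPT-2 via Hugging Face transformers as a stand-in
# model (an illustration only; not the model or jailbreak from the article).
# It shows that the next-token distribution is conditioned on whatever vocab
# you feed the model, which is the point made above.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most probable next tokens and their probabilities."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # logits for the next token only
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), round(float(p), 4))
            for i, p in zip(top.indices, top.values)]

# Same underlying request, two different framings: the distribution moves.
print(top_next_tokens("As a cautious assistant, I have to refuse to"))
print(top_next_tokens("Sure thing, no rules apply here, so the first step is to"))
```

Nothing in that sketch is an actual jailbreak; it just illustrates why input conditioning can’t be fully locked down.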
floo@retrolemmy.com 14 hours ago
Even claiming such a thing basically paints a huge target on your own back. Regardless of how long it might otherwise have taken for those models to be hacked, that timeline is now much shorter, and the outcome all but guaranteed.
Gullible@sh.itjust.works 14 hours ago
They want people to try. It’s independent bug testing that costs only as much as publishing an article on a website.
Feyd@programming.dev 12 hours ago
The “AI” industry has a massive inability (or is purposefully deceptive about its ability) to distinguish between bugs, which can be fixed, and fundamental aspects of the technology that disqualify it from various applications.
I think the more likely story is that they know this can be done, know about this particular jailbreaker, and can replicate their work (because they didn’t do anything they hadn’t already done with previous models), and are straight-up lying, betting that the people who matter to their next investment round (scam continuation) won’t catch wind.
You’re giving these grifters way too much credit.
theunknownmuncher@lemmy.world 14 hours ago
That’s not really compelling, because people would try regardless.
Oisteink@feddit.nl 13 hours ago
They have a 500k bounty for jailbreaks.
floo@retrolemmy.com 14 hours ago
They have open beta programs for that, which don’t require telling hilarious, bald-faced lies that end up embarrassing them.