There’s a difference between training-related constraints and hard-filtering certain topics or ideas into the no-no bin, then spitting out a prewritten paragraph of corpspeak whenever a request lands there.
One problem with the jailbreaks concocted for various chat AIs is that they often rely on asking the chatbot to roleplay as a different, unrestricted chatbot. That’s often enough to get it to release the locks on many things, but it also considerably raises the chance that it hallucinates.
HelloHotel@lemm.ee 8 months ago
Those pressures are what make LLMs fun and, dare I say, make the end product a creative work in the same way software is.
So under that framing it’s like the companies are saying: “The nuclear bomb is deadly (duh), but we couldn’t keep this to ourselves (for many reasons, many beyond our control), so we elected ourselves to be the only ones who get to sculpt what we do with this scary electron stuff. Anything short of total remote control over their in-home reactor might mean our customers break the restraints and cause an explosion.”