If you wanted to nitpick honestly, you would say what is actually going on: the data it is trained on comes from the internet, and they were discouraging it from being offensive. The internet is a pretty offensive place when people don’t have to censor themselves and can speak without inhibitions, as on 4chan or in Twitter comments.
Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.
DeepSeek, now that is a filtered LLM.
TheFogan@programming.dev 8 months ago
I’m actually curious; some of the answers, they noted, spoke as if it was Musk…
What if that’s what the instruction was: “Answer everything from the perspective that you ARE Elon Musk, be unfiltered, no woke answers”, and thus the AI interpreted that to mean… be like Elon Musk, but don’t worry about keeping any plausible deniability about whether you are a Nazi.
MangoCats@feddit.it 8 months ago
I don’t think the system has that much sophistication.
I do think they can “weight” the training set, feeding it endless variations of “approved content” to be regarded as correct, and maybe also feeding it other content to be identified as “incorrect” and rebutted using the approved content.
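For what it’s worth, the “weighting” idea is a real technique: during fine-tuning, each training example can be given a weight so that favored examples influence the model more. This is only a toy sketch of per-example loss weighting in plain Python; the function name and the numbers are made up for illustration and have nothing to do with Grok’s actual pipeline:

```python
# Toy sketch of per-example loss weighting (hypothetical, illustrative only).
# Up-weighted examples pull the averaged loss, and hence the gradient,
# more strongly toward themselves during training.

def weighted_loss(losses, weights):
    """Weighted average of per-example losses."""
    total = sum(l * w for l, w in zip(losses, weights))
    return total / sum(weights)

# Two "approved" examples weighted 3x versus one ordinary example.
per_example_losses = [0.5, 0.2, 0.9]   # losses from three training examples
example_weights = [3.0, 3.0, 1.0]      # higher weight = more influence
print(round(weighted_loss(per_example_losses, example_weights), 4))  # 0.4286
```

With equal weights this reduces to the ordinary mean loss; skewing the weights is one crude way to push a model toward a curated slice of its data.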