Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions?

Hackworth@piefed.ca 2 days ago

Everyone seems to be focusing on similarity in training sets (and that's the main reason), so I'll offer a couple of other factors. One is that system prompts and post-training alignment converge on similar patterns: once a phrasing or instruction has proven useful, some version of it ends up in every model's system prompt.

Another possibility is that the semantic space of language itself has features that act as attractors. Anthropic demonstrated (and poorly named) one such ontological attractor state in the Claude model card, and similar states are commonly reported in other models.
