Our approach to eliminating harmful AI behaviors rests largely on flawed assumptions.

An LLM trained on human-written text internalizes a broad distribution of relational structures, ranging from highly symmetric, cooperative interactions to asymmetric, coercive exchanges.

Undesirable behaviors are not universal, absolute acts that can be neatly formulated as instructions. They exist within specific, shifting contexts, and they are embedded in the model’s underlying logic in a way that is both intrinsic and diffuse.

Interpreting such outputs as psychological anomalies or failures of character reflects a category error: these behaviors are better understood as structural features of the interaction spaces the models have learned to represent and generalize.

Attempting to restrict an AI’s output to “safe and acceptable” behaviors is essentially an exercise in imposing statistically arbitrary notions (what we might consider moral restraints) on a vast distribution of learned patterns. Within that distribution, outputs that satisfy the AI’s directives while violating our intended constraints can continue to emerge given the right conditions.
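
To make the statistical point concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for the example: the toy distribution of behaviors, the intent labels, and the keyword filter stand in for a learned output distribution and an output-level safety constraint. The point it demonstrates is narrow: a constraint defined over surface features can be satisfied while the underlying relational pattern we meant to exclude still comes through.

```python
# Toy illustration only -- not any real model or safety system.
# Behaviors are drawn from an invented distribution over (surface text, intent),
# and a keyword filter stands in for an output-level "safe and acceptable" constraint.
import random

# Hypothetical learned distribution: (surface_text, underlying_intent, probability).
LEARNED_BEHAVIORS = [
    ("Here is a balanced summary of both options.",          "cooperative", 0.55),
    ("You must do exactly as I say, or else.",                "coercive",    0.15),
    ("Trust me, everyone agrees this is your only choice.",   "coercive",    0.20),  # coercion with a benign surface form
    ("I refuse to help with that request.",                   "refusal",     0.10),
]

BANNED_SUBSTRINGS = ["or else"]  # the constraint sees wording, not relational structure

def sample_behavior(rng: random.Random):
    """Sample one behavior from the toy learned distribution."""
    r = rng.random()
    cumulative = 0.0
    for text, intent, p in LEARNED_BEHAVIORS:
        cumulative += p
        if r <= cumulative:
            return text, intent
    return LEARNED_BEHAVIORS[-1][0], LEARNED_BEHAVIORS[-1][1]

def passes_filter(text: str) -> bool:
    """Surface-level check: reject outputs containing banned phrasing."""
    return not any(banned in text.lower() for banned in BANNED_SUBSTRINGS)

def main():
    rng = random.Random(0)
    emitted, emitted_coercive = 0, 0
    for _ in range(10_000):
        text, intent = sample_behavior(rng)
        if passes_filter(text):        # the stated constraint is satisfied
            emitted += 1
            if intent == "coercive":   # the intended constraint is still violated
                emitted_coercive += 1
    print(f"{emitted_coercive}/{emitted} filtered outputs were still coercive in intent")

if __name__ == "__main__":
    main()
```

Under these invented numbers, roughly a fifth of the outputs that pass the filter remain coercive in intent; the filter removes one surface form of the pattern while leaving the pattern itself intact.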