Comment on Grok’s “white genocide” obsession came from “unauthorized” prompt edit, xAI says

knightly@pawb.social ⁨3⁩ ⁨weeks⁩ ago

“Unintentionally” is the wrong word, because it attributes intent to the model rather than to the people who designed it.

Hallucinations are not an accidental side effect; they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, etc. Obviously, a system designed to map out a human-sounding path from a given system prompt to a particular query is going to take the same shortcuts that people took in its training data.
