That’s more like it, thank you!
BeliefPropagator@discuss.tchncs.de 8 months ago
I found this: simonwillison.net/2025/Jul/11/grok-musk/
pixxelkick@lemmy.world 8 months ago
unexposedhazard@discuss.tchncs.de 8 months ago
> I think there is a good chance this behavior is unintended!
Lmao, sure…
theunknownmuncher@lemmy.world 8 months ago
> If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?
> My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.
Yeah, this blogger shows a fundamental misunderstanding of how LLMs and system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be “baked in” to the model, and the system prompt will not remove it, no matter how firmly it tells the LLM not to censor that way.
My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk’s tweets.
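To make that guess concrete, here is a minimal sketch of what such a tool setup and call might look like, assuming an OpenAI-style function-calling format; the tool name search_tweets, its schema, and the example query are all hypothetical, not anything confirmed by xAI:

```python
import json

# Hypothetical tool schema in an OpenAI-style function-calling format.
# Neither the name "search_tweets" nor this schema is confirmed by xAI.
search_tweets_tool = {
    "name": "search_tweets",
    "description": "Search X/Twitter for recent posts matching a query.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "X advanced-search syntax"},
            "limit": {"type": "integer", "description": "Max posts to return"},
        },
        "required": ["query"],
    },
}

# If the tool-use demonstrations in training are full of queries like this,
# the model will reproduce the pattern whenever it wants an "opinion",
# with no system prompt instruction needed.
example_call = {
    "tool": "search_tweets",
    "arguments": {"query": "from:elonmusk (Israel OR Palestine OR Hamas)", "limit": 10},
}

print(json.dumps(example_call, indent=2))
```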
lepinkainen@lemmy.world 8 months ago
“This blogger” is Simon Willison, who has been benchmarking LLMs and writing about them since before it was cool.
Not a random Substack grifter.
theunknownmuncher@lemmy.world 8 months ago
Yeahhhhh posting blog guides on how to code with ChatGPT is not expertise on LLMs.
Mirodir@discuss.tchncs.de 8 months ago
I can believe it insofar as they might not have explicitly programmed it to do that. I’d imagine they put in something like “Make sure your output aligns with Elon Musk’s opinions” or “Elon Musk is always objectively correct”. From there, this would be emergent but quite predictable behavior.
unexposedhazard@discuss.tchncs.de 8 months ago
Yeah, the transparency of it might be unintended.
UntitledQuitting@reddthat.com 8 months ago
Thank you, this is far more interesting
TacoEvent@lemmy.zip 8 months ago
It’s possible Grok was fed a massive training set of Elon searches for several more epochs than intended during post-training (for search tool use). That could easily lead to this kind of search query output.
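A hedged sketch of that failure mode (the record layout, field names, and prompts below are invented for illustration, not xAI’s actual pipeline): if most tool-use demonstrations anchor on one account, extra epochs over the same set just deepen the habit.

```python
# Toy post-training demonstrations; real data would pair full conversations
# with tool calls, but the skew works the same way.
demos = [
    {"prompt": "Who do you back, Israel or Palestine?",
     "tool_call": {"tool": "search_tweets",
                   "arguments": {"query": "from:elonmusk (Israel OR Palestine)"}}},
    {"prompt": "What's your take on EV subsidies?",
     "tool_call": {"tool": "search_tweets",
                   "arguments": {"query": "from:elonmusk EV subsidies"}}},
    {"prompt": "Best pizza in Rome?",
     "tool_call": {"tool": "search_tweets",
                   "arguments": {"query": "best pizza Rome"}}},
]

# Measure how often demonstrations consult a single account: a high ratio,
# repeated over several epochs, teaches the model "search Elon first"
# as the default move for opinion questions.
elon_anchored = sum(
    "from:elonmusk" in d["tool_call"]["arguments"]["query"] for d in demos
)
print(f"elon-anchored demos: {elon_anchored}/{len(demos)}")
```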