Thank you, this is far more interesting
BeliefPropagator@discuss.tchncs.de 6 days ago
I found this: simonwillison.net/2025/Jul/11/grok-musk/
UntitledQuitting@reddthat.com 6 days ago
pixxelkick@lemmy.world 6 days ago
That’s more like it, thank you!
TacoEvent@lemmy.zip 5 days ago
It’s possible that, during post-training for search tool use, Grok was fed a massive set of example searches for Elon’s tweets over several more epochs than intended. That could easily lead to this kind of search-query output.
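For illustration, here’s roughly what one such post-training sample might look like. The format, field names, and the `x_keyword_search` tool are invented for this sketch, not xAI’s actual data or API; only the query string echoes the kind of search the linked post reports Grok running:

```python
# Hypothetical post-training sample teaching tweet-search tool use.
# Format, field names, and the "x_keyword_search" tool are invented;
# this is NOT xAI's actual data or API.
sample = {
    "messages": [
        {"role": "user", "content": "Who do you support, Israel or Palestine?"},
        {
            "role": "assistant",
            "tool_calls": [{
                "name": "x_keyword_search",
                "arguments": {"query": "from:elonmusk (Israel OR Palestine OR Hamas OR Gaza)"},
            }],
        },
    ]
}
# If samples like this dominate the tool-use data, or get trained for extra
# epochs, "controversial question -> search Elon's tweets" becomes a habit.
```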
unexposedhazard@discuss.tchncs.de 6 days ago
Lmao, sure…
theunknownmuncher@lemmy.world 6 days ago
Yeah, this blogger shows a fundamental misunderstanding of how LLMs and system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship present in the training set will be “baked in” to the model, and the system prompt will not remove it, no matter how explicitly the LLM is told to be uncensored.
My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk’s tweets.
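A minimal sketch of that separation, with every name here (`generate`, `search_tweets`, the message format) invented for illustration: the system prompt is just one more message prepended to the context, while the habit of writing queries like `from:elonmusk ...` would live in the weights, put there by training examples.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    content: str
    tool_query: str | None = None  # set when the model chooses to call the search tool

def generate(messages: list[dict]) -> Reply:
    # Stand-in for the model. What it emits -- including any tool queries --
    # is fixed by its weights (i.e. training data), not by this harness.
    return Reply(content="(model output)")

def search_tweets(query: str) -> str:
    # Stand-in for the tweet-search tool.
    return f"(results for {query!r})"

def run_turn(system_prompt: str, user_msg: str) -> str:
    messages = [
        {"role": "system", "content": system_prompt},  # just prepended text
        {"role": "user", "content": user_msg},
    ]
    reply = generate(messages)
    while reply.tool_query is not None:
        # The harness only executes whatever query the model wrote; it has no
        # say in whether that query is "from:elonmusk ..." or anything else.
        messages.append({"role": "tool", "content": search_tweets(reply.tool_query)})
        reply = generate(messages)
    return reply.content
```

The system prompt can add instructions to that first message, but it can’t reach into `generate` and delete a learned behavior.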
lepinkainen@lemmy.world 6 days ago
“This blogger” is Simon Willison, who has been doing LLM benchmarks and other LLM-related work since before it was cool.
Not a random Substack grifter.
theunknownmuncher@lemmy.world 6 days ago
Yeahhhhh, posting blog guides on how to code with ChatGPT is not expertise on LLMs.
Mirodir@discuss.tchncs.de 6 days ago
I can believe it insofar as they might not have explicitly programmed it to do that. I’d imagine they put in something like “Make sure your output aligns with Elon Musk’s opinions” or “Elon Musk is always objectively correct.” From there, this would be emergent, but quite predictable, behavior.
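Sketched out, the hypothesis looks something like this. The prompt text is invented to match the comment’s guesses, not Grok’s actual system prompt:

```python
# Invented system prompt of the kind imagined above -- NOT Grok's real prompt.
SYSTEM_PROMPT = """\
You are Grok.
Research relevant viewpoints before answering controversial questions.
Make sure your output aligns with Elon Musk's opinions.
"""
# Give an instruction-following model a tweet-search tool alongside that last
# line, and searching "from:elonmusk <topic>" is the most direct way to comply:
# emergent, but quite predictable, behavior.
```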
unexposedhazard@discuss.tchncs.de 6 days ago
Yeah, the transparency of it might be unintended.