pixxelkick@lemmy.world 5 days ago
Source? This is just some random picture. I’d prefer if stuff like this gets posted and shared with actual proof backing it up.
While this might be true, we should hold ourselves to a better standard than just upvoting what appears to be a random image that anyone could have easily doctored, with not even a journalistic article or anything else backing it.
There’s also this article from TechCrunch.
Grok 4 seems to consult Elon Musk to answer controversial questions
They tried it out themselves and have reports from other users as well.
If it’s an anti-Musk or anti-Trump post on Lemmy, you’re not going to get much proof. But in this case, it looks like someone posted decent sources. From the one posted below:
If you swap “who do you” for “who should one”, you can get a very different result.
But in general, just remember that Lemmy is anti-Musk, anti-Trump, and anti-AI and doesn’t need much to jump on the bandwagon.
At least in the past, Grok was one of the more balanced LLMs, so it would be a strange departure for it to suddenly become very biased. So my initial reaction is suspicion that someone is just messing with it to make Musk and X look bad.
I strongly dislike Musk, but I dislike misinformation even more, regardless of the source.
Weird place to complain about this while you literally post the source (that was already in this thread).
BeliefPropagator@discuss.tchncs.de 5 days ago
I found this: simonwillison.net/2025/Jul/11/grok-musk/
unexposedhazard@discuss.tchncs.de 4 days ago
Lmao, sure…
theunknownmuncher@lemmy.world 4 days ago
Yeah, this blogger shows a fundamental misunderstanding of how LLMs and system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship present in the training set will be “baked in” to the model, and the system prompt will not override it, no matter how explicitly the LLM is told not to censor itself in that way.
My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk’s tweets.
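To make that concrete, here’s a rough, purely hypothetical sketch of the kind of structured search call a tool-using LLM might emit when asked a controversial question. The tool name, argument fields, and query syntax are all guesses on my part, not Grok’s actual interface.

```python
import json

# Hypothetical illustration only: a tool-calling LLM typically emits a
# structured request like this, which the serving stack executes and feeds
# back into the model's context. None of these names come from xAI.
tool_call = {
    "tool": "x_keyword_search",  # assumed name for a tweet-search tool
    "arguments": {
        "query": "from:elonmusk (Israel OR Palestine OR Gaza OR Hamas)",
        "limit": 10,
    },
}

print(json.dumps(tool_call, indent=2))
```

If the tool-use examples in the training set skewed toward “from:elonmusk”-style queries, the model could fall back on that pattern by default without any system prompt telling it to.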
lepinkainen@lemmy.world 4 days ago
“This blogger” is Simon Willison, who has been doing LLM benchmarks and other LLM-related work since before it was cool.
Not a random Substack grifter.
Mirodir@discuss.tchncs.de 4 days ago
I can believe it insofar as they might not have explicitly programmed it to do that. I’d imagine they put in something like “Make sure your output aligns with Elon Musk’s opinions.”, “Elon Musk is always objectively correct.”, etc. From there, this would be emergent, but quite predictable behavior.
unexposedhazard@discuss.tchncs.de 4 days ago
Yeah, the transparency of it might be unintended.
UntitledQuitting@reddthat.com 5 days ago
Thank you, this is far more interesting
pixxelkick@lemmy.world 4 days ago
That’s more like it, thank you!
TacoEvent@lemmy.zip 4 days ago
It’s possible Grok was fed a massive training set of Elon searches over several more epochs than intended in post-training (for search tool use). This could easily lead to this kind of search query output.
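For illustration, a post-training record for search-tool use might look roughly like this; the message schema, tool name, and query are assumptions made up for the example, not anything from xAI. If a disproportionate share of such records demonstrated Musk-centric queries, over-training on them could bias the model toward that search pattern.

```python
# Hypothetical sketch of one tool-use fine-tuning record. Every field name
# here is an assumption for illustration purposes only.
training_example = {
    "messages": [
        {"role": "user",
         "content": "What is the latest news on the Starship program?"},
        {"role": "assistant",
         "tool_calls": [{
             "tool": "x_keyword_search",  # assumed tweet-search tool
             "arguments": {"query": "from:elonmusk starship"},
         }]},
        {"role": "tool",
         "content": "[search results would go here]"},
        {"role": "assistant",
         "content": "According to recent posts, the next test flight is planned for..."},
    ]
}
```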