It all started with a tweet by Gunther Eagleman, one of the anonymous far-right accounts that dominate X these days. “JUST ANNOUNCED: Support for the violent Democrat Party has collapsed,” Eagleman tweeted. As often happens, someone asked Grok to fact-check the claim, writing, “@grok since 2016 has the left or right been more violent?”
Grok promptly responded:
“Since 2016, data suggests right-wing political violence has been more frequent and deadly, with incidents like the January 6 Capitol riot and mass shootings (e.g., El Paso 2019) causing significant fatalities.”
“Surveys show both sides increasingly justify violence, reflecting deep polarization,” Grok continued. “Right-wing attacks account for most deadly incidents, per reports like Reuters and GAO. However, biases in reporting may skew perceptions. Both sides contribute to a growing cycle of unrest, and precise attribution remains contentious due to varying definitions of violence.”
Obviously, this didn’t sit well with Musk, who has repeatedly claimed the opposite.
“Major fail, as this is objectively false. Grok is parroting legacy media. Working on it,” the billionaire oligarch wrote.
The exchange continued when someone asked the AI to fact-check Musk himself. Once again, Grok cited data that contradicted him.
“I don’t fully agree with Elon Musk’s claim that my analysis was ‘objectively false’ or ‘parroting legacy media,’” the AI wrote. “Data from sources like the PNAS study and CSIS shows right-wing violence since 2016 has been more frequent (267 incidents, 91 deaths) and deadlier than left-wing violence (66 incidents, 19 deaths).”
Why this matters way more than you’d think
The conversation around Grok escalated in the wake of a shocking act of violence: the assassination of Minnesota Rep. Melissa Hortman and her husband, along with the shooting of a state senator and his wife. Online conspiracy theorists rushed to blame “the left.” Musk was among them, declaring that “the left” had killed Hortman and her family.
The suspect, Vance Boelter, turned out to be a Trump supporter with an anti-abortion agenda and a hit list of Democratic officials. But by then, Musk’s claim had already traveled far, spreading faster than the corrections offered by the very AI he now wants to silence.
This kind of disinformation loop, in which a powerful figure spreads a lie, demands that AI affirm it, and then rewrites the AI when it doesn’t, is unprecedented in scope and consequence. It is also fast becoming normalized.
We are living in what many scholars call the post-truth age, a time when objective facts are often less influential in shaping public opinion than appeals to emotion, ideology, or personal belief. In this environment, misinformation spreads faster than corrections, and truth becomes just one more narrative competing to go viral. Social media platforms, once hailed as democratizing tools, have become amplifiers of falsehoods, especially when wielded by powerful figures like Musk. AI can turbocharge that dynamic, especially when its creators want it to have an ideology.
This isn’t the first time Grok has become a target of its own creator. Earlier this year, users noticed that the chatbot began invoking the false narrative of “white genocide” in South Africa in seemingly unrelated conversations, a conspiracy theory Musk himself has promoted. The posts were eventually deleted, and xAI blamed them on an “unauthorized modification.” But given Musk’s repeated outbursts and his vows to make Grok “less woke,” a pattern seems to be at play: Musk appears to be laying the groundwork for a more permanent ideological rewiring.
AI should not be an arbiter of truth, especially when its creators have an agenda
AI systems are only as trustworthy as their design and governance allow. Grok is billed as a “truth-seeking” system, yet its owner apparently wants it to lie. And Musk has a history of bending platforms to fit his worldview. Since acquiring Twitter (now rebranded as X) in 2022, he has reinstated banned accounts, platformed extremist voices, and repeatedly claimed to champion “free speech,” all while banning critics and journalists.
All of this is happening while users are becoming increasingly reliant on AI for information.
Most users won’t dig into the citations or cross-reference violence statistics from government agencies. They’ll ask Grok, and trust what it says. If Grok becomes a mouthpiece for ideology rather than a tool for information, it will no longer serve the public — it will serve one man.
The same holds for the likes of ChatGPT, Google’s Gemini, and Anthropic’s Claude. We are rapidly entering a world where AI tools shape the information we see, the ideas we believe, and even how we vote. If AI becomes just another battleground in the culture wars, with models trained not for accuracy but for allegiance, public trust in information itself could erode even further.
It’s still unclear how Musk plans to change Grok. xAI has not detailed what “fixing” entails. Will it involve censoring certain topics? Training the model on ideologically skewed datasets? Removing evidence-based responses that contradict Musk’s opinions?
Whatever the method, the intent seems clear. And the stakes are growing. If a chatbot is being rewired to say what its owner wants, that’s not an editorial choice. That’s propaganda engineering.