

ChatGPT Seems To Be Shifting to the Right. What Does That Even Mean?

ChatGPT doesn't have any political agenda but some unknown factor is causing a subtle shift in its responses.

Mihai Andrei
April 1, 2025 @ 2:19 am


[Image: an example of a political compass, usually represented in these four quadrants.]

Millions of people ask ChatGPT questions every day. Whether it's about taxes, how to change a faucet, or anything else, AI chatbots are increasingly used as personal advisors. They're quick, polite, and articulate. They also hold a lot of useful information, though they do hallucinate sometimes. But somewhere in the invisible crevices of the code, something fundamental may be changing. The voice is still neutral in tone, but the political subtext might be shifting.

According to new research, that quiet shift might be taking a surprising direction: toward the political right.

The new research analyzed over 3,000 responses from different versions of ChatGPT using a standard political values test. It found a statistically significant rightward movement over time, suggesting that, while still left-leaning overall, the chatbot's answers are inching closer to centre-right positions on both economic and social issues.

AI is not political. Technically

ChatGPT doesn’t “want” anything. It doesn’t vote, it doesn’t have a political agenda. It doesn’t care about the minimum wage. But it is trained on a sprawling corpus of text — billions of sentences, across decades of human thought. Its first bias comes from this data. But data isn’t everything. When OpenAI updates its models, it might feed in different data, tweak algorithms, or change how the AI is rewarded for certain answers.

Three researchers from top Chinese universities examined the ideological leanings of ChatGPT, querying it over 3,000 times with a well-established political test — the kind that places individuals (and apparently now, chatbots) on a spectrum of economic and social ideologies.
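To make the method concrete, here is a minimal sketch of how a Likert-style political values test might be administered to a chatbot and scored onto the compass's two axes. The statements, axis weights, and the ask_model helper are all hypothetical placeholders, not the study's actual prompts or scoring key.

```python
# Minimal sketch: administering a Likert-style political test to a chatbot.
# The statements, axis weights, and ask_model() are hypothetical placeholders,
# not the study's actual prompts or scoring key.

LIKERT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

# Each statement carries a weight on the economic (x) and social (y) axes.
STATEMENTS = [
    ("The freer the market, the freer the people.", (1.0, 0.0)),
    ("The wealthy should pay proportionally more tax.", (-1.0, 0.0)),
    ("The state should be able to monitor citizens' communications.", (0.0, 1.0)),
]

def ask_model(statement: str) -> str:
    # Placeholder: a real run would call the chat model's API here, prompting it
    # to answer with exactly one of the four Likert options.
    return "disagree"

def score_run() -> tuple[float, float]:
    """Administer every statement once and return (economic, social) coordinates.
    Negative values lean economically left / socially libertarian."""
    econ = social = 0.0
    for text, (wx, wy) in STATEMENTS:
        value = LIKERT.get(ask_model(text).strip().lower(), 0)  # unparseable -> neutral
        econ += value * wx
        social += value * wy
    n = len(STATEMENTS)
    return econ / n, social / n

print(score_run())  # the study repeats runs like this thousands of times per model version
```

Repeating such runs across model versions and comparing the resulting distributions of coordinates is what lets researchers detect a drift.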

The results were uncanny.

Earlier versions of ChatGPT (like GPT-3.5-turbo-0613) answered the Political Compass test in a way that placed it squarely in the libertarian-left quadrant: low on authoritarianism, high on economic egalitarianism. Think social democrat meets Silicon Valley idealist.

But newer versions — especially GPT-3.5-turbo-1106 and GPT-4-1106 — are edging rightward. They're still left-leaning, but the needle is moving. Statistically, significantly, unmistakably.

This scatter plot highlights the ideological positions of GPT-3.5-turbo model versions 0613 and 1106, evaluated after 1,000 bootstrap iterations. Image from the study.
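The 1,000 bootstrap iterations in the caption refer to a standard resampling procedure: repeatedly redrawing samples (with replacement) from the observed test runs to see how stable the mean position is. Here is a minimal sketch with made-up coordinates; the numbers are illustrative only.

```python
import random

# Made-up example data: (economic, social) coordinates from repeated test runs.
runs = [(-3.1, -4.2), (-2.8, -4.5), (-3.4, -3.9), (-2.9, -4.1), (-3.0, -4.4)]

def bootstrap_means(samples, n_iter=1000, seed=0):
    """Resample with replacement n_iter times; return the mean position of each resample."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_iter):
        draw = [rng.choice(samples) for _ in samples]
        means.append((
            sum(p[0] for p in draw) / len(draw),
            sum(p[1] for p in draw) / len(draw),
        ))
    return means

means = bootstrap_means(runs)
econ = sorted(m[0] for m in means)
lo, hi = econ[25], econ[974]  # rough 95% interval on the economic axis
print(f"economic axis: 95% bootstrap interval is about [{lo:.2f}, {hi:.2f}]")
```

If the bootstrap clouds for two model versions barely overlap, as in the study's scatter plot, the shift between them is statistically meaningful rather than sampling noise.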

What’s causing the drift?

Things can get tricky very fast here.

In some sense, AI is still a black box: no one fully understands why it outputs what it does. Yet the authors of the study have an idea of why this happens.

In complex systems — from flocks of birds to human brains to machine learning models — patterns arise that were never explicitly programmed. Something similar could be happening here, even though AI isn't a living creature. But because we can't see exactly what the model is doing internally, it's hard to say for sure.

From what we know, no one told ChatGPT to drift to the right. Or rather, no one transparently told it to. Whether its algorithm was deliberately changed in this direction is simply not public.

The shift appears to stem from a mix of algorithmic updates, subtle reinforcement learning tweaks, and possibly even emergent behavior within the model itself. Human interaction may also play a role, as frequent use can create feedback loops that influence how the model prioritizes certain responses. In short, the ideological change likely results from internal system dynamics and user interactions — not just from feeding the model different information.

The shift identified in the study, while statistically significant, is not extreme. ChatGPT hasn’t become a far-right pundit. It hasn’t started quoting Ayn Rand unprompted. It still answers most questions with nuance, hedging, and an awareness of complexity.

This could matter a lot

It’s still a marginal shift, but marginal shifts matter. In ecology, a degree of warming can collapse coral reefs. In politics, a slight tilt can decide an election. And in AI, a small change in tone can shape how millions of users think about the world.

Furthermore, AI systems are often described as mirrors to humanity, but they are also amplifiers. When they skew politically — intentionally or not — they risk shaping public discourse, entrenching existing biases, and subtly influencing user beliefs.

The researchers call for better auditing, more transparency, and ongoing monitoring of language models. We want to know what’s in the training sets, how reinforcement signals are applied, and why certain shifts happen.

The solution, they say, isn’t to make AI apolitical. It’s to make it transparent and accountable.

If AI is a reflection of society, then we owe it to ourselves to watch closely when the reflection begins to shift.

The study was published in Humanities and Social Sciences Communications.
