ChatGPT only talks in clichés. That’s a threat to human creativity


Vittorio Tantucci
September 2, 2025



When you chat with ChatGPT, it often feels like you’re talking to someone polite, engaged and responsive. It nods in all the right places, mirrors your wording and seems eager to keep the exchange flowing.

But is this really what human conversation sounds like? Our new study shows that while ChatGPT plausibly imitates dialogue, it does so in a way that is stereotypical rather than unique.

Every conversation has quirks. When two family members talk on the phone, they don’t just exchange information — they reuse each other’s words, rework them creatively, interrupt, disagree, joke, banter or wander off-topic.

They do so because human talk is naturally fragmented, but also to enact their own identities in interaction. These moments of “conversational uniqueness” are what make real dialogue unpredictable and deeply human.

We wanted to contrast human conversations with AI ones. So we compared 240 phone conversations between Chinese family members with dialogues simulated by ChatGPT under the same contextual conditions, using a statistical model to measure patterns across hundreds of turns.

To capture human uniqueness in our study, we mainly focused on three levels of human interaction. One was “dialogic resonance”. That’s to do with re-using each other’s expressions. For example, when speaker A says “You never call me”, speaker B may respond “You are the one who never calls”.

Another factor we included was “recombinant creativity”. This involves inventing new twists on what’s just been said by an interlocutor. For example, speaker A may ask “All good?”, to which speaker B responds “All smashing”. Here the structure is kept constant but the adjective is creatively substituted in a way that is unique to the exchange.

A final feature we included was “relevance acknowledgement”: showing interest and recognition of the other’s point, such as “It’s interesting what you said, in fact …” or “That’s a good point …”.
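To make these three features concrete, here is a minimal sketch of how such signals might be operationalised in code. These toy proxies are illustrative only: they are not the statistical measures used in our study, and the list of acknowledgement phrases is invented for the example.

```python
# Toy proxies for the three interactional features described above.
# Illustrative only: not the statistical measures used in the study.

ACK_MARKERS = (  # invented examples of acknowledgement openers
    "that's a good point",
    "it's interesting what you said",
    "i see what you mean",
)

def resonance(turn_a: str, turn_b: str) -> float:
    """Dialogic-resonance proxy: share of B's words that echo A's."""
    a_words = set(turn_a.lower().split())
    b_words = turn_b.lower().split()
    return sum(w in a_words for w in b_words) / len(b_words) if b_words else 0.0

def recombinant(turn_a: str, turn_b: str) -> bool:
    """Recombinant-creativity proxy: B keeps A's frame but swaps one word."""
    a = turn_a.lower().strip("?!. ").split()
    b = turn_b.lower().strip("?!. ").split()
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def acknowledges(turn: str) -> bool:
    """Relevance-acknowledgement proxy: turn opens with a marker phrase."""
    return turn.lower().startswith(ACK_MARKERS)

print(resonance("You never call me", "You are the one who never calls"))  # ~0.29
print(recombinant("All good?", "All smashing"))                           # True
print(acknowledges("That's a good point, in fact..."))                    # True
```

Real measures of these phenomena work over hundreds of turns and account for syntax and context; the point here is simply that each feature can, in principle, be counted rather than merely felt.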

What we found

ChatGPT did remarkably well – even too well – at showing engagement. It often echoed and acknowledged the other speaker even more than humans do. But it fell short in two decisive ways.

First, the lexical diversity was much lower for ChatGPT than for human speakers. Where people varied their words and expressions, AI recycled the same ones.
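One simple way to picture lexical diversity is the type-token ratio: distinct words divided by total words. The snippet below is purely illustrative, with invented example lines; it is not the measure used in the study.

```python
# Type-token ratio (TTR): a crude lexical-diversity measure.
# Purely illustrative; the study's actual measures may differ.

def type_token_ratio(text: str) -> float:
    """Distinct words / total words; higher means more varied vocabulary."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

human_turns = "why in the world are you juggling two jobs you will burn out"
bot_turns = "take care of your health take care and do not worry too much"

print(round(type_token_ratio(human_turns), 2))  # 0.92 -- varied wording
print(round(type_token_ratio(bot_turns), 2))    # 0.85 -- recycled wording
```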

Most importantly, we spotted a lot of stereotypical speech in the AI-generated conversations. When it simulated giving advice or making requests, ChatGPT defaulted to predictable parental-style recommendations such as “Take care of your health” and “Don’t worry too much”.

This was unlike real human parents, who mixed in clarifications, refusals, jokes, sarcasm and even impolite expressions at times. In our data, a far more human way of showing concern for a daughter’s health at college was implication rather than direct instruction: a mother might ask, “Why in the world are you juggling two jobs?”, implying that her daughter will burn out if she stays this busy.

In short, ChatGPT statistically flattened human dialogues in the context of our enquiry, replacing them with a polished, plausible but ultimately rather dry template.

Why this matters

At first glance, ChatGPT’s consistency feels like a strength. It makes the system reliable and predictable. Yet these very qualities also make it less human. Real people avoid sounding repetitive. They resist clichés. They build conversations that are recognisably theirs.

This is what defines unique identities in interaction — how we want to be perceived by others. There are words, expressions and intonations you would never use, not necessarily because they are impolite, but because they do not represent who you are or how you want to sound to others.

Being called “boring” is something most people try hard to avoid; in Patricia Highsmith’s famous novel The Talented Mr Ripley, it is effectively what brings about the American playboy Dickie Greenleaf’s death after he says it of his friend, Tom Ripley. The conversational choices we make are not simply appropriate ways to talk, but strategies for locating ourselves in society and constructing our singular identity with every conversation.

This gap matters in all sorts of ways. If AI cannot capture the uniqueness of human interaction, it risks reinforcing stereotypes of how people ought to speak, rather than reflecting how they actually do. More troubling still, it may promote a new procedural ideology of conversation — one where talk is reduced to sounding engaged yet remains uncreative; a functional but impoverished tool of cooperation.

Our findings suggest that AI is remarkably good at modelling the normative patterns of dialogue — the things people say often and conventionally. But it struggles with the idiosyncratic and unexpected, which are essential for creativity, humour and authentic human conversation.

The danger is not only that AI sounds merely plausible. It is that humans, over time, may begin to imitate its style, so that AI’s stereotyped behaviour starts to reshape conversational norms.

In the long run, we may find ourselves “learning” from AI how to converse — gradually erasing creativity and uniqueness from our own speech. Conversation, at its core, is not just about efficiency. It is about co-creating meaning and social identities through innovation and extravagance, even more than we realise.

What might be at stake, then, assuming AI can’t overcome this problem, is not simply whether it can converse like humans — but whether humans will continue to converse like themselves.

Vittorio Tantucci, Senior lecturer in Linguistics and Chinese Linguistics, Lancaster University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

