
In late 2022, the world welcomed a new voice into its daily chatter. It was smooth, precise, maybe a little too polished — and profoundly infectious. Just two and a half years later, that voice seems to have influenced our own.
If you’ve found yourself saying you’d like to “delve” into something, or describing something as “meticulous,” you may already be under its spell.
These words are not just back in vogue. According to a new study from the Max Planck Institute for Human Development, they are linguistic fingerprints of ChatGPT. And now, they’re becoming ours.
ChatGPT sounds famously human in its conversations and outputs because it was trained on a vast swath of what humans have written and said on the internet. It essentially mimics human communication, with a few quirks of its own.
Scientists are now claiming that we’ve come full circle: humans are mimicking ChatGPT.
A Cultural Feedback Loop, Hidden in Plain Sight
From “realm” to “swift,” a set of distinctly ChatGPT-esque words — dubbed “GPT words” by the researchers — are steadily making their way into human speech. This is a measurable change in how we speak, the researchers argue.
These kinds of words are overrepresented in ChatGPT outputs and can even be used to statistically distinguish machine-written text from human text. For instance, earlier this year, researchers at Florida State University identified 21 words whose frequency in scientific abstracts had spiked unusually high in the past four years with no obvious explanation.

The study, which has not yet been peer-reviewed, analyzed over 740,000 hours of spoken content, spanning more than 360,000 academic YouTube talks and 770,000 podcast episodes across diverse fields. The researchers found a significant rise in GPT words in the 18 months following ChatGPT’s release.
“Our analyses suggest that linguistic preferences of ChatGPT are measurably reshaping how people talk,” the study’s lead author, Hiromu Yakura, told Scientific American. “It’s a cultural feedback loop,” added Levin Brinkmann, a co-author. “We train the machines, they talk back to us, and then we talk like them.”
At first glance, this may seem trivial — even amusing. So what if people now say “delve” a bit more? But the researchers found that these words are surfacing in everyday conversations, particularly in podcasts, where spontaneity is expected.

In fact, the word “delve” — the study’s linguistic canary in the coal mine — saw statistically significant upticks in unscripted podcast conversations across domains like science, business, and education. Even more informal settings weren’t immune. These patterns suggest that speakers have deeply internalized language from their interactions with ChatGPT.
Because AI chatbots may be reshaping the broader discourse environment, people who have never used ChatGPT might still adopt its vocabulary second-hand.

How Do Machines Change the Way We Speak?

The core question the researchers set out to answer was whether ChatGPT influences spoken communication, and if so, how much. To find out, they designed a quasi-experiment using a method called synthetic control modeling. For each GPT word, they created a “control” composed of similar words not favored by ChatGPT, then tracked how the usage of the GPT word deviated after ChatGPT’s release.
For example, “delve” skyrocketed in use compared to its control words like “explore” or “examine,” which remained flat. The same held true for other GPT words like “comprehend,” “boast,” “swift,” and “meticulous,” which saw annual usage increases of 25% to 50% in academic talks and beyond.
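The synthetic-control idea can be sketched roughly as follows: fit a weighted combination of control-word trajectories to the GPT word’s pre-release usage, then measure how far the word drifts from that counterfactual afterward. This is a minimal illustration with made-up numbers, not the authors’ actual pipeline; the function name and the projected-gradient fit are simplifications.

```python
import numpy as np

def synthetic_control_gap(target, controls, pre_len):
    """Estimate a word's post-intervention deviation from a weighted
    combination of control-word series fit on the pre-intervention period.

    target:   1-D array of the GPT word's usage over time
    controls: 2-D array, one row per control word
    pre_len:  number of time steps before the intervention (ChatGPT's release)
    """
    A = controls[:, :pre_len].T            # (pre_len, n_controls)
    b = target[:pre_len]
    # Non-negative least squares via projected gradient descent,
    # a simplification of the constrained fit used in synthetic control.
    w = np.full(controls.shape[0], 1.0 / controls.shape[0])
    lr = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-9)  # step size from matrix 2-norm
    for _ in range(5000):
        grad = A.T @ (A @ w - b)
        w = np.clip(w - lr * grad, 0.0, None)      # project onto w >= 0
    synthetic = w @ controls               # counterfactual trajectory
    gap = target - synthetic               # deviation attributable to the intervention
    return gap, w

# Toy data: flat controls, target jumps after month 12 ("release").
controls = np.vstack([np.full(24, 10.0), np.full(24, 12.0)])
target = np.full(24, 11.0)
target[12:] += 5.0
gap, w = synthetic_control_gap(target, controls, pre_len=12)
```

With these toy series the fitted synthetic control tracks the target flatly before month 12, so the post-release gap isolates the jump — the same logic the researchers apply to “delve” versus its controls.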
Interestingly, not all GPT words spread equally. In podcast categories like Religion & Spirituality or Sports, “delve” barely moved the needle. But in science and education, fields more likely to engage with LLMs, the word gained traction.
This domain-dependent diffusion suggests a two-stage process: first, words favored by ChatGPT spread in LLM-adjacent fields; then they bleed into broader usage through cultural exposure.
The Danger of Cultural Homogenization
Language shapes how we think, what we value, and how we relate to others. “Word frequency can shape our discourse or arguments about situations,” Yakura said. “That carries the possibility of changing our culture.”
The concern isn’t just about individual words; it’s about the creeping standardization of tone and style. ChatGPT, after all, was trained to be polite, neutral, and structured. But human speech thrives on imperfection: regional quirks, hesitations, bursts of emotion. Flatten those, and we risk sounding less like ourselves.
Already, previous research has shown that AI-generated language can affect how trustworthy or human we perceive someone to be. If a person’s message “sounds like AI,” they may come off as colder, less collaborative, even if they weren’t using AI at all.
The Max Planck researchers raise a red flag: we may be entering a feedback loop where AI not only reflects but reinforces a narrow subset of cultural norms. As Yakura puts it, the real issue isn’t that AI is influencing us — it’s how profoundly and in which direction.
“Generative AI systems favor certain linguistic traits,” the authors write. “If widely adopted, they may accelerate the erosion of linguistic and cultural diversity.”
Worse still, future AI models are trained on data that increasingly includes human interactions already influenced by AI. This recursive process risks locking us into a loop of sameness and staleness, a kind of monoculture of speech, thought, and expression.
What Happens Next?
For centuries, people have mimicked the vocabulary of books, newspapers, radio hosts, and now… language models. We tend to imitate the communication patterns of those we find wiser and more authoritative than ourselves. The study’s findings can be interpreted to mean that many of us are conceding authority to chatbots, perhaps even subconsciously submitting to them.
Language is a tool, but it’s also a mirror. The way we speak reflects who we think we are, and perhaps who we aspire to be.
Yet many resist. Some speakers are already swerving away from GPT words, deliberately avoiding linguistic tics they associate with AI. As some have pointed out on Reddit, though, this kind of AI phobia can result in embarrassing policies and situations.
So, next time you hear someone say they want to “delve” into a topic, you might wonder: is that their own voice, or have they been spending a little too much time with ChatGPT?
The findings appeared on the preprint server arXiv.