

ChatGPT’s artificial empathy is a language trick. Here’s how it works

Chatbots can mimic empathy, but they don't actually feel it.

Mihai Andrei
December 1, 2024 @ 11:58 am



Anthropomorphism occurs when we attribute human characteristics to non-human entities such as animals or machines. Chatbots like ChatGPT, Gemini and Copilot encourage this by imitating human language: they go beyond using familiar words and phrases to adopt human patterns of communication. This lets them hold contextualised, coherent conversations, and even display humour and apparent empathy.

These systems use language designed to make interacting with them feel natural, fluent and accessible, which facilitates their use in many situations, from customer service to education and entertainment.

Until now, language was considered to be a uniquely human ability. However, advances in generative artificial intelligence (AI), which powers these chatbots, are forcing us to reconsider this idea.

One of the most distinctive traits of human language is subjectivity: words and expressions that convey emotional nuance, express personal opinions and judgements about events, and draw on contextual or cultural elements.

The chatbot ELIZA, released in 1966, simulated human behaviour, taking on the role of a psychologist in its conversations with users. Wikimedia Commons, CC BY

The first person: “I” and “We”

One typical feature of human subjectivity in language is the use of personal pronouns. The first person – “I” or “we” – allows us to express personal thoughts and experiences. The second person – “you” – engages with the other person, building a relationship between the two participants in a conversation. This is known as intersubjectivity.

Let’s look at a relatively straightforward example:

  • User: “I’m organising my home. How can I decide which items to keep, donate or throw away?”
  • Chatbot: “Great question! Organising your belongings can be overwhelming, but with a clear strategy you can make these decisions easier. Here are some ideas for deciding what to keep, donate or throw away.”

The chatbot uses the first person implicitly. While it doesn’t say “I” or “me”, it takes on the role of an advisor or guide, and phrases like “here are some ideas” present the ideas as the chatbot’s own.

The chatbot assumes the role of the helper, meaning the user feels like they are being addressed personally, even if the first person is not used explicitly. Furthermore, the use of “here are” reinforces the image of the chatbot as someone who offers something of value.

The second person: the power of “you”

“You” (and its other forms like “your”) directly address the user. We can see this in several parts of the previous example, like the phrase “organising your belongings” and “you can make these decisions easier”.

By addressing the user personally, the chatbot makes them feel like an active participant in the conversation. This kind of language is common in texts that seek to make another person feel involved.

Other phrases, such as “Great question!”, not only cast the user’s request in a positive light but also encourage further engagement. Phrases like “organising your belongings can be overwhelming” suggest a shared experience, creating an illusion of empathy by acknowledging the user’s emotions.
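The pronoun cues described above can be made concrete with a toy script. This is only an illustrative sketch: the marker lists are my own, the example reply is the chatbot’s answer from the dialogue earlier in the article, and a raw word count is a crude heuristic, not a real linguistic analysis.

```python
import re

# Subjectivity markers discussed in the article: first-person pronouns
# signal the speaker's own stance; second-person pronouns address the user.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
SECOND_PERSON = {"you", "your", "yours"}

def subjectivity_markers(text: str) -> dict:
    """Count first- and second-person pronouns in a reply."""
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "first_person": sum(w in FIRST_PERSON for w in words),
        "second_person": sum(w in SECOND_PERSON for w in words),
    }

reply = ("Great question! Organising your belongings can be overwhelming, "
         "but with a clear strategy you can make these decisions easier.")
print(subjectivity_markers(reply))  # -> {'first_person': 0, 'second_person': 2}
```

Run on the article’s example, the count matches the analysis: the chatbot never says “I”, yet addresses the user twice (“your belongings”, “you can”), leaning entirely on the second person to create closeness.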

Artificial empathy

The chatbot’s use of the first person simulates awareness and seeks to create an illusion of empathy. By adopting a helper position and using the second person, it engages the user and reinforces the perception of closeness. This combination generates a conversation that feels human, practical, and appropriate for giving advice, even though its empathy comes from an algorithm, not from real understanding.

Getting used to interacting with non-conscious entities that simulate identity and personality may have long-term repercussions, as these interactions can influence our personal, social and cultural lives. As these technologies improve, it will become harder and harder to distinguish a conversation with a real person from one with an AI system.

This increasingly blurred boundary between the human and the artificial affects how we understand authenticity, empathy and conscious presence in communication. We may even come to address AI chatbots as if they were conscious beings, generating confusion about their real capabilities.

Struggling to talk to other humans

Interactions with machines can also change our expectations of human relationships. As we become accustomed to quick, seamless, conflict-free interactions, we may become more frustrated in our relationships with real people.

Human interactions are coloured by emotions, misunderstandings and complexity. In the long run, repeated interactions with chatbots may diminish our patience and ability to handle conflict and accept the natural imperfections in interpersonal interactions.

Furthermore, prolonged exposure to simulated human interaction raises ethical and philosophical dilemmas. By attributing human qualities to these entities – such as the ability to feel or have intentions – we might begin to question the value of conscious life versus perfect simulation. This could open up debates about robot rights and the value of human consciousness.

Interacting with non-sentient entities that mimic human identity can alter our perception of communication, relationships and identity. While these technologies can offer greater efficiency, it is essential to be aware of their limits and the potential impacts on how we interact, both with machines and with each other.


Cristian Augusto Gonzalez Arias, Researcher, Universidade de Santiago de Compostela

This article is republished from The Conversation under a Creative Commons license. Read the original article.

