

Does AI Have Free Will? This Philosopher Thinks So

As AI systems grow more autonomous, we should start treating them like moral agents.

Mihai Andrei
May 13, 2025 @ 4:02 pm


“I’ve been interested in the topic of free will for a while,” Frank Martela tells me. Martela is a philosopher and psychology researcher at Aalto University in Finland. His work revolves around the fundamentals of the human condition and the perpetual philosophical question: what makes a good life? But his work on humans recently took a detour into artificial intelligence (AI).

“I was following stories about the latest developments in large language models, [when] it suddenly came to my mind that they actually fulfill the three conditions for free will.”

Associate Professor Frank Martela from Aalto University. Image credits: Nita Vera / Aalto University.

Martela’s latest study draws on the concept of functional free will.

Functional free will is a term that attempts to reconcile the age-old debate between determinism and free agency. It does this not by answering whether we are “truly free” in an absolute sense, but by reframing the question around how free will works in practice, especially in biological and psychological systems.

“It means that if we can’t explain somebody’s behavior without assuming that they have free will, then that somebody has free will. In other words, if we observe something (a human, an animal, a machine) ‘from the outside’ and must assume that it makes free choices to be able to understand its behavior, then that something has free will.”

Does AI have free will?

Martela argues that functional free will is the best lens for the question, because we can’t ever really observe anything “from the inside.” He builds on the work of philosopher Christian List, who frames free will as a three-part capacity involving:

  • intentional agency: the agent’s actions stem from deliberate intentions rather than being reflexive or accidental;
  • alternative possibilities: the agent has access to more than one course of action in meaningful situations. This doesn’t require escaping causality, only internal mechanisms (like deliberation and foresight) that allow for multiple real options;
  • causal control: the agent’s actions are not random or externally coerced, but are caused by its own states or intentions.

“If something meets all three conditions, then we can’t but conclude that it has free will,” Martela tells ZME Science.

Does AI have free will? AI-generated image.

The new study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional killer drones with the cognitive function of today’s unmanned aerial vehicles.

“Both seem to meet all three conditions of free will — for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,” says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs.

Why does this matter?

Defining free will is far from a settled question. Philosophers have argued about it for centuries and will likely continue to do so for centuries. But this study has very practical significance.

“It makes it more possible to blame AI for what it has done, and teach it to correct its behavior. But it does not free the developer from responsibility. Similarly, if a dog attacks a child, we blame the dog for bad behavior and try to teach it to not attack people. However, this does not free the dog-owner from responsibility. They must either teach the dog to behave or make sure it does not end up in situations where it can misbehave. The same applies for AI drones. We can blame the drone but the developer still carries the main responsibility.”

The “dog” in this case (the AI) is becoming more and more powerful. We’re using it to make medical diagnoses, screen job applicants, guide autonomous vehicles, determine creditworthiness, and even assist in military targeting decisions — tasks that carry significant ethical weight and demand accountability.

Martela believes we should give AI a moral compass. It takes children years to learn how to behave, and it doesn’t always work. “It isn’t any easier to teach AI and thus it takes considerable effort to teach them all the relevant moral principles so they would behave in the right way,” the researcher adds.

AI has no moral compass unless it is programmed to have one. But the more freedom you give it, the more you need to be sure it has moral values.

Companies are already imparting moral values to AI

Companies are already working on this in some ways. They teach models which responses are not allowed (e.g. harmful or racist ones) and what knowledge they should not share (e.g. how to make a bomb). They also tune how friendly and responsive the models should be. A recent version of ChatGPT was rolled back because it had sycophantic tendencies. It was too eager to please; something in its moral compass was off.

“So, they are already programming a lot of behavioral guidelines and rules into their LLM models that guide them to behave in certain ways. What the developers need to understand is that what they are in effect doing is [teaching] moral rules to the AI, and [they must] take full responsibility for the kind of rules they teach them.”

By instructing AI how to behave, developers are imparting their own companies’ moral values to it. This risks embedding narrow, biased, or culturally specific moral frameworks into technologies that will operate across diverse societies and affect millions of lives. When developers — often a small, homogeneous group — teach AI how to “behave,” they are not just writing code; they are encoding ethical judgments that may go unquestioned once embedded. In effect, we are letting tech companies impart their own values to tools that will shape society.

Without a deep understanding of moral philosophy and pluralistic ethics, there’s a real danger that AI systems will perpetuate one group’s values while ignoring or marginalizing others. That’s why it’s important to give AI a proper moral compass of its own.

Journal Reference: https://doi.org/10.1007/s43681-025-00740-6

