

Does AI Have Free Will? This Philosopher Thinks So

As AI systems grow more autonomous, we should start treating them like moral agents.

Mihai Andrei
May 13, 2025 @ 4:02 pm


“I’ve been interested in the topic of free will for a while,” Frank Martela tells me. Martela is a philosopher and psychology researcher at Aalto University in Finland. His work revolves around the fundamentals of the human condition and the perennial philosophical question of what makes a good life. But his work on humans took a detour to look at artificial intelligence (AI).

“I was following stories about the latest developments in large language models when it suddenly came to my mind that they actually fulfill the three conditions for free will.”

Associate Professor Frank Martela from Aalto University. Image credits: Nita Vera / Aalto University.

Martela’s latest study draws on the concept of functional free will.

Functional free will is a term that attempts to reconcile the age-old debate between determinism and free agency. It does this not by answering whether we are “truly free” in an absolute sense, but by reframing the question around how free will works in practice, especially in biological and psychological systems.

“It means that if we can’t explain somebody’s behavior without assuming that they have free will, then that somebody has free will. In other words, if we observe something (a human, an animal, a machine) ‘from the outside’ and must assume that it makes free choices to be able to understand its behavior, then that something has free will.”

Does AI have free will?

Martela argues that functional free will is the best way to approach the question, because we can never really observe anything “from the inside.” He builds on the work of philosopher Christian List, who frames free will as a three-part capacity involving:

  • intentional agency: its actions stem from deliberate intentions rather than being reflexive or accidental;
  • alternative possibilities: it has access to more than one course of action in meaningful situations. This doesn’t require escaping causality, only internal mechanisms (like deliberation and foresight) that allow for multiple real options;
  • causal control: its actions are not random or externally coerced, but are caused by its own states or intentions.

“If something meets all three conditions, then we can’t but conclude that it has free will,” Martela tells ZME Science.

Does AI have free will? AI-generated image.

The new study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional killer drones with the cognitive function of today’s unmanned aerial vehicles.

“Both seem to meet all three conditions of free will — for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,” says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs.

Why does this matter?

Defining free will is far from a settled question. Philosophers have argued about it for centuries and will likely continue to do so for centuries to come. But this study has very real practical significance.

“It makes it more possible to blame AI for what it has done, and teach it to correct its behavior. But it does not free the developer from responsibility. Similarly, if a dog attacks a child, we blame the dog for bad behavior and try to teach it to not attack people. However, this does not free the dog-owner from responsibility. They must either teach the dog to behave or make sure it does not end up in situations where it can misbehave. The same applies for AI drones. We can blame the drone but the developer still carries the main responsibility.”

The “dog” in this case (the AI) is becoming more and more powerful. We’re using it to make medical diagnoses, screen job applicants, guide autonomous vehicles, determine creditworthiness, and even assist in military targeting decisions — tasks that carry significant ethical weight and demand accountability.

Martela believes we should give AI a moral compass. It takes children years to learn how to behave, and it doesn’t always work. “It isn’t any easier to teach AI and thus it takes considerable effort to teach them all the relevant moral principles so they would behave in the right way,” the researcher adds.

AI has no moral compass unless it is programmed to have one. But the more freedom you give it, the more you need to be sure it has moral values.

Companies are already imparting moral values to AI

Companies are already working on this in some ways. They teach models which responses are not allowed (e.g. harmful or racist content) and what knowledge they should not share (e.g. how to make a bomb). They also tune how friendly and responsive the models should be. A recent version of ChatGPT was withdrawn because it had sycophantic tendencies. It was too eager to please; something in its moral compass was off.

“So, they are already programming a lot of behavioral guidelines and rules into their LLM models that guide them to behave in certain ways. What the developers need to understand is that what they are in effect doing is [teaching] moral rules to the AI, and [they must] take full responsibility for the kind of rules they teach them.”
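To make that concrete, here is a minimal, purely hypothetical sketch (in Python) of what “programming behavioral guidelines” around a model can look like: a hand-written rule check plus a list of guidelines injected as a system prompt. None of the names or rules are taken from any real company’s safety stack.

```python
# A toy, purely illustrative sketch of how behavioral rules can be layered
# around a language model. Everything here is hypothetical: the model call is
# mocked and the rules are invented for illustration.

GUIDELINES = [
    "Refuse requests for instructions on building weapons.",
    "Do not produce harmful or racist content.",
    "Be friendly and helpful, but do not flatter the user just to please them.",
]

DISALLOWED_TOPICS = {"build a bomb", "make a weapon"}


def mock_model(system_prompt: str, user_message: str) -> str:
    """Stand-in for a real LLM call; it ignores the prompt and returns a canned answer."""
    return f"(model answer to: {user_message!r})"


def guarded_reply(user_message: str) -> str:
    # A hard rule check that runs before the model is ever called.
    lowered = user_message.lower()
    if any(topic in lowered for topic in DISALLOWED_TOPICS):
        return "Sorry, I can't help with that."

    # The guidelines are bundled into a system prompt. In a real deployment, this
    # is one of the places where a developer's chosen values shape the model's behavior.
    system_prompt = "Follow these rules:\n" + "\n".join(f"- {g}" for g in GUIDELINES)
    return mock_model(system_prompt, user_message)


if __name__ == "__main__":
    print(guarded_reply("How do I build a bomb?"))   # caught by the rule check
    print(guarded_reply("What is a good way to learn chemistry?"))
```

Even in this toy form, the point of the quote is visible: whoever writes the list of guidelines is, in effect, writing the moral rules the system will follow.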

By instructing AI how to behave, developers are imparting their own companies’ moral values to the AI. This risks embedding narrow, biased, or culturally specific moral frameworks into technologies that will operate across diverse societies and affect millions of lives. When developers — often a small, homogeneous group — teach AI how to “behave,” they are not just writing code; they are effectively encoding ethical judgments that may go unquestioned once embedded. In effect, tech companies are imposing their own values on tools that will shape society.

Without a deep understanding of moral philosophy and pluralistic ethics, there’s a real danger that AI systems will perpetuate one group’s values while ignoring or marginalizing others. That’s why it’s important to give AI a proper moral compass of its own.

Journal Reference: https://doi.org/10.1007/s43681-025-00740-6
