
Does AI Have Free Will? This Philosopher Thinks So

As AI systems grow more autonomous, we should start treating them like moral agents.

by Mihai Andrei
May 13, 2025
in Future, News, Technology
Edited and reviewed by Zoe Gordon

“I’ve been interested in the topic of free will for a while,” Frank Martela tells me. Martela is a philosopher and psychology researcher at Aalto University in Finland. His work revolves around the fundamentals of the human condition and the perpetual philosophical question: what makes a good life? But his work on humans recently took a detour into artificial intelligence (AI).

“I was following stories about the latest developments in large language models when it suddenly came to my mind that they actually fulfill the three conditions for free will.”

Associate Professor Frank Martela from Aalto University. Image credits: Nita Vera / Aalto University.

Martela’s latest study draws on the concept of functional free will.

Functional free will is a term that attempts to reconcile the age-old debate between determinism and free agency. It does this not by answering whether we are “truly free” in an absolute sense, but by reframing the question around how free will works in practice, especially in biological and psychological systems.

“It means that if we can’t explain somebody’s behavior without assuming that they have free will, then that somebody has free will. In other words, if we observe something (a human, an animal, a machine) ‘from the outside’ and must assume that it makes free choices to be able to understand its behavior, then that something has free will.”

Does AI have free will?

Martela argues that functional free will is the most useful lens here, because we can never really observe anything “from the inside.” He builds on the work of philosopher Christian List, who frames free will as a three-part capacity involving:

  • intentional agency: its actions stem from deliberate intentions rather than being reflexive or accidental;
  • alternative possibilities: it has access to more than one course of action in meaningful situations. This doesn’t require escaping causality, only internal mechanisms (like deliberation and foresight) that allow for multiple real options;
  • and causal control: its actions are not random or externally coerced, but are caused by its own states or intentions.

“If something meets all three conditions, then we can’t but conclude that it has free will,” Martela tells ZME Science.

Does AI have free will? AI-generated image.

The new study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional killer drones with the cognitive function of today’s unmanned aerial vehicles.

“Both seem to meet all three conditions of free will — for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,” says Martela. He adds that these case studies are broadly applicable to currently available generative agents using LLMs.
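To make the setup concrete, below is a minimal, hypothetical sketch of the kind of LLM-driven agent loop the study analyzes. It is not Voyager’s actual code: the prompts, the query_llm() helper, and the environment object are stand-ins for whatever model API and game interface a real agent would use. The comments map the loop onto List’s three conditions: the agent pursues an explicit goal, generates several candidate actions, and selects one based on its own internal state.

# Hypothetical sketch of an LLM-driven agent loop (not Voyager's actual code).
# query_llm() and the environment object are placeholders.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def choose_action(goal: str, observation: str, memory: list) -> str:
    # Intentional agency: the agent acts toward an explicit goal.
    # Alternative possibilities: it first generates several candidate actions.
    candidates = query_llm(
        f"Goal: {goal}\nObservation: {observation}\nPast actions: {memory}\n"
        "List three different actions the agent could take next."
    )
    # Causal control: the final choice is driven by the agent's own internal
    # state (goal, observation, memory), not injected at random from outside.
    return query_llm(
        f"Goal: {goal}\nCandidate actions:\n{candidates}\n"
        "Pick the single action most likely to achieve the goal."
    )

def agent_loop(goal: str, environment, steps: int = 10) -> None:
    memory = []
    for _ in range(steps):
        observation = environment.observe()    # hypothetical interface
        action = choose_action(goal, observation, memory)
        environment.execute(action)            # hypothetical interface
        memory.append(action)

Seen from the outside, predicting what such a loop will do means treating the agent as something that weighs options and pursues goals, which is exactly the “functional” sense of free will Martela has in mind.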

Why does this matter?

Defining free will is far from a settled question. Philosophers have argued about it for centuries and will likely keep arguing for centuries more. But the study has very practical significance.

“It makes it more possible to blame AI for what it has done, and teach it to correct its behavior. But it does not free the developer from responsibility. Similarly, if a dog attacks a child, we blame the dog for bad behavior and try to teach it to not attack people. However, this does not free the dog-owner from responsibility. They must either teach the dog to behave or make sure it does not end up in situations where it can misbehave. The same applies for AI drones. We can blame the drone but the developer still carries the main responsibility.”

The “dog” in this case (the AI) is becoming more and more powerful. We’re using it to make medical diagnoses, screen job applicants, guide autonomous vehicles, determine creditworthiness, and even assist in military targeting decisions — tasks that carry significant ethical weight and demand accountability.

Martela believes we should give AI a moral compass. It takes children years to learn how to behave, and it doesn’t always work. “It isn’t any easier to teach AI and thus it takes considerable effort to teach them all the relevant moral principles so they would behave in the right way,” the researcher adds.

AI has no moral compass unless it is programmed to have one. And the more freedom you give it, the more certain you need to be that those moral values are in place.

Companies are already imparting moral values to AI

Companies are already working on this in some ways. They teach models which responses are not allowed (e.g. harmful or racist content) and which knowledge they should not share (e.g. how to make a bomb). They also tune how friendly and responsive the models should be. A recent ChatGPT update was rolled back because the model had become sycophantic: it was too eager to please, and something in its moral compass was off.

“So, they are already programming a lot of behavioral guidelines and rules into their LLM models that guide them to behave in certain ways. What the developers need to understand is that what they are in effect doing is [teaching] moral rules to the AI, and [they must] take full responsibility for the kind of rules they teach them.”
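As a rough illustration of what “programming behavioral guidelines into an LLM” can look like, here is a toy sketch built around a system prompt and a simple refusal check. The rule text and the moderate() helper are invented for illustration; real products rely on much richer policies, fine-tuning (such as reinforcement learning from human feedback), and dedicated safety classifiers.

# Toy illustration of encoding behavioral rules into an LLM assistant.
# The rules and moderate() are invented; real systems use far richer
# policies, fine-tuning (e.g. RLHF), and dedicated safety models.

SYSTEM_PROMPT = """You are a helpful assistant.
Rules:
- Refuse to give instructions that could cause physical harm.
- Do not produce hateful or harassing content.
- Be friendly, but do not flatter the user or agree just to please them."""

def moderate(text: str) -> bool:
    """Hypothetical safety check: True if the request violates policy."""
    banned = ("make a bomb", "build a weapon")
    return any(phrase in text.lower() for phrase in banned)

def respond(user_message: str, llm_call) -> str:
    # Every behavioral judgment encoded above travels with the request,
    # whether or not the developers think of it as a moral judgment.
    if moderate(user_message):
        return "I can't help with that."
    return llm_call(system=SYSTEM_PROMPT, user=user_message)

The point is not the code itself but who writes those rules: each line in the prompt is an ethical judgment that ships inside the product.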

By instructing AI how to behave, developers are imparting their own companies’ moral values to it. This risks embedding narrow, biased, or culturally specific moral frameworks into technologies that will operate across diverse societies and affect millions of lives. When developers, often a small and homogeneous group, teach AI how to “behave,” they are not just writing code; they are encoding ethical judgments that may go unquestioned once embedded.

Without a deep understanding of moral philosophy and pluralistic ethics, there’s a real danger that AI systems will perpetuate one group’s values while ignoring or marginalizing others. That’s why it’s important to give AI its own, proper, moral compass.

Journal Reference: Frank Martela, AI and Ethics (2025). DOI: 10.1007/s43681-025-00740-6

Tags: AI behavior, AI development, AI ethics, AI responsibility, artificial intelligence, free will, functional free will, LLMs, moral philosophy, tech ethics

Mihai Andrei

Dr. Andrei Mihai is a geophysicist and founder of ZME Science. He has a Ph.D. in geophysics and archaeology and has completed courses from prestigious universities (with programs ranging from climate and astronomy to chemistry and geology). He is passionate about making research more accessible to everyone and communicating news and features to a broad audience.
