Imagine that a soldier has a tiny computer device injected into their bloodstream that can be guided with a magnet to specific regions of their brain. With training, the soldier could then control weapon systems thousands of miles away using their thoughts alone. Embedding a similar type of computer in a soldier’s brain could suppress their fear and anxiety, allowing them to carry out combat missions more efficiently. Going one step further, a device equipped with an artificial intelligence system could directly control a soldier’s behavior by predicting what options they would choose in their current situation.
While these examples may sound like science fiction, the science to develop neurotechnologies like these is already in development. Brain-computer interfaces, or BCIs, are technologies that decode and transmit brain signals to an external device to carry out a desired action. Basically, a user would only need to think about what they want to do, and a computer would do it for them.
BCIs are currently being tested in people with severe neuromuscular disorders to help them recover everyday functions like communication and mobility. For example, patients can turn on a light switch by visualizing the action and having a BCI decode their brain signals and transmit the command to the switch. Likewise, patients can focus on specific letters, words or phrases on a computer screen, and a BCI will move a cursor to select them.
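For readers curious about what "decoding" involves, here is a minimal sketch of the kind of loop a light-switch BCI might run. It leans on a common cue in motor-imagery BCIs, suppressed power in the mu band (8-12 Hz) when a movement is imagined, but the sampling rate, threshold and toggle_light_switch() actuator call are hypothetical stand-ins, not any real system's API.

```python
# A minimal, illustrative sketch of a decode-and-act BCI loop.
# The feature, threshold and actuator call are hypothetical.
import numpy as np

SAMPLE_RATE_HZ = 250          # a typical EEG sampling rate
WINDOW_SECONDS = 1.0          # decode one-second windows of signal

def band_power(window: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Average spectral power of the window within a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / SAMPLE_RATE_HZ)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[mask].mean())

def decode_intent(window: np.ndarray, threshold: float = 0.5) -> bool:
    """Guess whether the user imagined the action (e.g., flipping a switch).

    Imagined movement suppresses mu-band (8-12 Hz) power over motor
    cortex, so a drop relative to a broadband baseline is read as intent.
    """
    mu = band_power(window, 8.0, 12.0)
    baseline = band_power(window, 1.0, 30.0)
    return mu / (baseline + 1e-9) < threshold

def toggle_light_switch() -> None:
    print("Switch toggled.")  # stand-in for a real actuator command

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = rng.standard_normal(int(SAMPLE_RATE_HZ * WINDOW_SECONDS))
    if decode_intent(window):
        toggle_light_switch()
```

Real systems train per-user classifiers on many labeled trials rather than using a fixed threshold, but the shape of the pipeline (sample, extract features, classify, act) is the same.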
However, ethical considerations have not kept pace with the science. While ethicists have pressed for more ethical inquiry into neural modification in general, many practical questions around brain-computer interfaces have not been fully considered. For example, do the benefits of BCI outweigh the substantial risks of brain hacking, information theft and behavior control? Should BCI be used to curb or enhance specific emotions? What effect would BCIs have on the moral agency, personal identity and mental health of their users?
These questions are of great interest to us, a philosopher and neurosurgeon who study the ethics and science of current and future BCI applications. Considering the ethics of this technology before it is deployed could help prevent harm. We argue that responsible use of BCI requires safeguarding people’s ability to function in a range of ways that are considered central to being human.
Expanding BCI beyond the clinic
Researchers are exploring nonmedical brain-computer interface applications in many fields, including gaming, virtual reality, artistic performance, warfare and air traffic control.
For example, Neuralink, a company co-founded by Elon Musk, is developing a brain implant for healthy people to potentially communicate wirelessly with anyone with a similar implant and computer setup.
In 2018, the U.S. military’s Defense Advanced Research Projects Agency launched its Next-Generation Nonsurgical Neurotechnology (N3) program to develop “a safe, portable neural interface system capable of reading from and writing to multiple points in the brain at once.” Its aim is to produce nonsurgical BCIs for able-bodied service members for national security applications by 2050. For example, a soldier in a special forces unit could use BCI to send and receive thoughts with a fellow soldier and unit commander, a form of direct three-way communication that would enable real-time updates and more rapid response to threats.
To our knowledge, these projects have not opened a public discussion about the ethics of these technologies. While the U.S. military acknowledges that “negative public and social perceptions will need to be overcome” to successfully implement BCI, practical ethical guidelines are needed to better evaluate proposed neurotechnologies before deploying them.
Utilitarianism

One approach to tackling the ethical questions BCI raises is utilitarian. Utilitarianism is an ethical theory that strives to maximize the happiness or well-being of everyone affected by an action or policy.
Enhancing soldiers might create the greatest good by improving a nation’s warfighting abilities, protecting military assets by keeping soldiers remote, and maintaining military readiness. Utilitarian defenders of neuroenhancement argue that emergent technologies like BCI are morally equivalent to other widely accepted forms of brain enhancement. For example, stimulants like caffeine can improve the brain’s processing speed and may improve memory.
However, some worry that utilitarian approaches to BCI have moral blind spots. In contrast to medical applications designed to help patients, military applications are designed to help a nation win wars. In the process, BCI may ride roughshod over individual rights, such as the right to be mentally and emotionally healthy.
For example, soldiers operating drone weaponry in remote warfare today report higher levels of emotional distress, post-traumatic stress disorder and broken marriages compared to soldiers on the ground. Of course, soldiers routinely elect to sacrifice for the greater good. But if neuroenhancement becomes a job requirement, it could raise unique concerns about coercion.
Neurorights

Another approach to the ethics of BCI, neurorights, prioritizes certain ethical values even if doing so does not maximize overall well-being.
Proponents of neurorights champion individuals’ rights to cognitive liberty, mental privacy, mental integrity and psychological continuity. A right to cognitive liberty might bar unreasonable interference with a person’s mental state. A right to mental privacy might require ensuring a protected mental space, while a right to mental integrity would prohibit specific harms to a person’s mental states. Lastly, a right to psychological continuity might protect a person’s ability to maintain a coherent sense of themselves over time.
BCIs could interfere with neurorights in a variety of ways. For example, if a BCI tampers with how the world seems to a user, they might not be able to distinguish their own thoughts and emotions from versions altered by the device. This could violate neurorights like mental privacy or mental integrity.
Yet soldiers already forfeit similar rights. For example, the U.S. military is allowed to restrict soldiers’ free speech and free exercise of religion in ways that are not typically applied to the general public. Would infringing neurorights be any different?
Human capabilities

A human capability approach insists that safeguarding certain human capabilities is crucial to protecting human dignity. While neurorights home in on an individual’s capacity to think, a capability view considers a broader range of what people can do and be, such as the ability to be emotionally and physically healthy, move freely from place to place, relate with others and nature, exercise the senses and imagination, feel and express emotions, play and recreate, and regulate the immediate environment.
We find a capability approach compelling because it gives a more robust picture of humanness and respect for human dignity. Drawing on this view, we have argued that proposed BCI applications must reasonably protect all of a user’s central capabilities at a minimal threshold. BCI designed to enhance capabilities beyond average human capacities would need to be deployed in ways that realize the user’s goals, not just other people’s.
For example, a bidirectional BCI that not only extracts and processes brain signals but delivers somatosensory feedback, such as sensations of pressure or temperature, back to the user would pose unreasonable risks if it disrupts a user’s ability to trust their own senses. Likewise, any technology, including BCIs, that controls a user’s movements would infringe on their dignity if it does not allow the user some ability to override it.
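To make the override condition concrete, here is a minimal sketch of a gate that sits between a BCI's decoder and an actuator and lets the user's veto always win. The class and method names are hypothetical; the point is the design principle, not any real product's interface.

```python
# An illustrative user-override gate for BCI-issued movement commands.
# All names here are hypothetical stand-ins for a real control stack.
from dataclasses import dataclass

@dataclass
class MotorCommand:
    joint: str
    angle_degrees: float

def send_to_actuator(command: MotorCommand) -> None:
    print(f"Moving {command.joint} to {command.angle_degrees} degrees")

class OverrideGate:
    """Forwards decoded commands to the actuator only while the user
    has not asserted a veto; the veto always outranks the decoder."""

    def __init__(self) -> None:
        self.user_veto = False

    def assert_veto(self) -> None:
        self.user_veto = True    # e.g., triggered by a reserved gesture

    def release_veto(self) -> None:
        self.user_veto = False

    def dispatch(self, command: MotorCommand) -> bool:
        if self.user_veto:
            return False         # command dropped; the user keeps control
        send_to_actuator(command)
        return True

if __name__ == "__main__":
    gate = OverrideGate()
    gate.dispatch(MotorCommand("elbow", 45.0))   # executed
    gate.assert_veto()
    gate.dispatch(MotorCommand("elbow", 90.0))   # blocked by the user's veto
```

On a capability view, the essential feature is that the veto path is always available to the user, so the device augments rather than supplants their control over their own body.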
A limitation of a capability view is that it can be difficult to define what counts as a threshold capability. The view does not describe which new capabilities are worth pursuing. Yet, neuroenhancement could alter what is considered a standard threshold, and could eventually introduce entirely new human capabilities. Addressing this requires supplementing a capability approach with a fuller ethical analysis designed to answer these questions.
Nancy S. Jecker, Professor of Bioethics and Humanities, School of Medicine, University of Washington and Andrew Ko, Assistant Professor of Neurological Surgery, University of Washington
This article is republished from The Conversation under a Creative Commons license. Read the original article.