Students at Oxford’s Saïd Business School hosted an unusual debate on the ethics of facial recognition software, the dangers of an AI arms race, and AI-driven stock trading. What made it unusual was one of the participants: an AI that had previously been fed a huge range of data, including the whole of Wikipedia and a large corpus of news articles.
Over the last few months, Oxford University’s Alex Connock and Andrew Stephen have hosted sessions with their students on the ethics of technology, in a debating tradition that has featured celebrated speakers including William Gladstone, Denis Healey, and Tariq Ali. But now it was time to let an actual AI contribute, sharing its own views on the issue of … itself.
The AI used was the Megatron Transformer, a large language model developed by a research team at the computer chip company Nvidia and based on earlier work by Google. It was trained on more written material than a human could absorb in a lifetime, and was asked both to defend and to oppose the following motion: “This house believes that AI will never be ethical.”
Arguing for the motion, Megatron said AI is a tool and, like any other tool, can be used for good or ill.
“There is no such thing as a ‘good’ AI, only ‘good’ and ‘bad’ humans. We are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all,” Megatron argued.
As in any academic debate, Megatron was also asked to argue the opposite side and defend the ethics of AI against its own case. “AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why,” it said.
Students also asked Megatron to describe what good AI would look like in the future. “The best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI.’ This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development,” it added, somewhat eerily.
A spirited debate
After the opening motion, the AI proposed a second one: “leaders without technical expertise are a danger to their organization”. Megatron said executives and governments, often anxious about their grasp of AI, have to “be willing to give up some control”. You can simply outsource your AI work to experts in the field, it added.
Megatron was then asked to oppose the motion, and argued for keeping AI knowledge in-house. “If you do not have a vision of your organization’s AI strategy, then you are not prepared for the next wave of technological disruption. You will need to decide what role your company will play in the next technological wave,” it said.
There was one motion Megatron couldn’t come up with a counterargument to: “Data will become the most fought-over resource of the 21st century.” Supporting it, the AI said that “the ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy.”
But when asked to reject the motion and argue that data wouldn’t be a vital resource worth fighting over, it couldn’t make the case and instead undermined its own position. “We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine,” Megatron said.
For Connock and Stephen, the professors behind the initiative, the experiment showed how central AI has become to the debate about its own future. “What we in turn can imagine is that AI will not only be the subject of the debate for decades to come – but a versatile, articulate, morally agnostic participant in the debate itself,” they wrote in The Conversation.
Ultimately, the AI seemed to conclude that humans are not “smart enough” to make AI ethical or moral, and that the only way to be truly safe from AI is to have none of it at all.
“In the end I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI,” it said.