Debating, held in high regard since the time of the Ancient Greeks (and even before that), has a new participant. It’s not quite as eloquent and sharp as the likes of Socrates or Cicero, but it can hold its own against some debaters — hinting at a future where AI can understand and formulate complex arguments with ease.
An autonomous debating system
In 2019, an unusual debate was held in San Francisco. The topic of the debate was “We should subsidize preschool”, and it featured Harish Natarajan, a 2016 World Debating Championships Grand Finalist and 2012 European Debate Champion. His opponent was Project Debater, an autonomous debating system.
The structure of the debate was simple. Noam Slonim, an IBM researcher in Israel, explains how it worked: a four-minute opening statement, a four-minute rebuttal, and a two-minute summary.
“The speech by Harish was captured via Watson’s Speech to Text in real-time, which was then ingested by our algorithms in the Cloud to build the rebuttal, which took under a minute,” Slonim explains.
Both contestants had about 15 minutes to prepare, which for Project Debater meant scouring its database for relevant arguments, although the topic of this debate was never included in the training data of the system, Slonim emphasizes.
“We polled our live audience of around 800 attendees before and after the debate and then calculated the difference to see how many were persuaded to the other side,” he notes.
The AI, it turns out, can't yet stand up to the world's best debaters, but it can defeat less prepared opponents and hold its own against even some experienced ones. Its progress is also impressive: from nothing to its current performance in just a few years.
“As the system matured it was very similar to watching a junior level debater grow up in front of your eyes,” Slonim tells me in an email, his satisfaction betrayed by a smiling emoji. “In 2016, during the first live debates we had with the system, and after nearly 4 years of research, it was still performing at the level of a toddler and was not making a lot of sense. Only three years later, it seems fair to say that the system achieved the performance of a decent university-level debater. So, from kindergarten to university in only three years, which was interesting to observe.”
Slonim and collaborators went on to host several live debates which confirmed the AI’s capability, showing that non-human debaters are ready to enter the stage. But the impact of their work goes way beyond that.
AI stops playing games
Artificial Intelligence algorithms can already do a lot of things, but debating (and analyzing complex arguments more broadly) has long been considered one of the fields that is AI-proof.
“The study of arguments has an academic pedigree stretching back to the ancient Greeks, and spans disciplines from theoretical philosophy to computational engineering. Developing computer systems that can recognize arguments in natural human language is one of the most demanding challenges in the field of artificial intelligence (AI),” writes Chris Reed in a News and Views article that accompanied the study.
Since the 1950s, AI research has progressed enormously, with algorithms learning to compete against humans in a number of games. First came chess, and more recently Go, a game long thought to be beyond the reach of machines.
But in the new paper, Slonim and colleagues argue that all these games lie within the ‘comfort zone’ of AI, based on several simple observations. Debates are a whole new ballgame.
“First, in games there is a clear definition of a winner, facilitating the use of reinforcement learning techniques. Second, in games, individual game moves are clearly defined, and the value of such moves can often be quantified objectively, enabling the use of game-solving techniques. Third, while playing a game an AI system may come up with any tactic to ensure winning, even if the associated moves could not be easily interpreted by humans. Finally, for many AI grand challenges, massive amounts of relevant data – e.g., in the form of complete games played by humans – was available for the development of the system.”
“All these four characteristics do not hold for competitive debates. Thus, the challenge taken by Project Debater seems to reside outside the AI comfort zone, in a territory where humans still prevail, and new paradigms are needed to make substantial progress.”
To overcome these challenges, Project Debater scans an archive of some 400 million newspaper articles and Wikipedia pages to form its opening statements and counter-arguments. It can debate a wide range of topics, scoring particularly high on its opening statements.
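The retrieval step can be pictured with a toy sketch. This is our illustration, not IBM's actual pipeline: it simply scores each sentence in a small corpus by keyword overlap with the debate topic and keeps the best candidates, where the real system uses a far more sophisticated ranking over hundreds of millions of documents.

```python
import re

def tokenize(text):
    """Lowercase a string and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def score(sentence, topic_terms):
    """Count how many distinct topic terms appear in the sentence."""
    sentence_words = set(tokenize(sentence))
    return sum(1 for term in topic_terms if term in sentence_words)

def retrieve_arguments(corpus, topic, k=2):
    """Return the k sentences most relevant to the debate topic."""
    # Drop a few stopwords so only substantive topic terms count.
    topic_terms = set(tokenize(topic)) - {"we", "should", "the", "a"}
    ranked = sorted(corpus, key=lambda s: score(s, topic_terms), reverse=True)
    return ranked[:k]

corpus = [
    "Preschool attendance improves later school performance.",
    "Governments that subsidize preschool reduce inequality.",
    "The weather was pleasant in San Francisco.",
]
top = retrieve_arguments(corpus, "We should subsidize preschool")
print(top[0])  # → "Governments that subsidize preschool reduce inequality."
```

The sentence mentioning both "subsidize" and "preschool" ranks first; the off-topic sentence is filtered out entirely.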
While the authors conclude that debating humans is still out of the AI comfort zone, it’s an important proof of concept — and once again, AIs are ready to rise up to new challenges.
These new challenges, researchers say, could be quite important.
Taking things beyond debates
The broad goal of this new AI was to help people make unbiased, informed decisions. As is often the case with pioneering AI algorithms, though, the scope reaches beyond what has already been accomplished. In this case, an AI that can present arguments and counter-arguments is very useful as an adviser.
“Whether you are a politician or a CEO you are likely to make decisions based on instinct and experience, which may be vulnerable to blind spots or a bias. So the question is, what if AI could help you see data to eliminate or reduce the bias? You still may ultimately make the same decision, but at least you are better informed about other opinions. This also addresses the echo chamber or social media bubble challenge that we see currently, particularly around the COVID vaccine and whether people should get it or not,” Slonim says.
Already, the technology is being put to work. The Project Debater API was made freely available for academic use, including two modules called Narrative Generation and Key Point Analysis.
“When given a set of arguments, Narrative Generation constructs a well-structured speech that supports or contests a given topic, according to the specified polarity. And Key Point Analysis is a new and promising approach for summarization, with an important quantitative angle. This service summarizes a collection of comments on a given topic as a small set of key points, and the prominence of each key point is given by the number of its matching sentences in the given data,” he explains.
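The Key Point Analysis idea Slonim describes can be illustrated with a minimal sketch. This is our simplification, not the actual service: each comment is matched to the key point it shares the most words with, and a key point's prominence is the number of comments matched to it.

```python
import re
from collections import Counter

def words(text):
    """Lowercase a string and return its set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_match(sentence, key_points, min_overlap=1):
    """Return the key point sharing the most words with the sentence, or None."""
    overlaps = {kp: len(words(sentence) & words(kp)) for kp in key_points}
    kp = max(overlaps, key=overlaps.get)
    return kp if overlaps[kp] >= min_overlap else None

def key_point_analysis(comments, key_points):
    """Rank key points by prominence: the number of comments matched to each."""
    counts = Counter()
    for comment in comments:
        kp = best_match(comment, key_points)
        if kp:
            counts[kp] += 1
    return counts.most_common()

key_points = ["Preschool improves learning outcomes", "Subsidies are too expensive"]
comments = [
    "My kids' learning improved a lot thanks to preschool.",
    "Early preschool boosts learning outcomes for everyone.",
    "These subsidies are expensive and wasteful.",
]
print(key_point_analysis(comments, key_points))
# → [('Preschool improves learning outcomes', 2), ('Subsidies are too expensive', 1)]
```

The quantitative angle Slonim mentions is the count attached to each key point: two of the three comments support the first key point, so a decision-maker can see at a glance which opinions dominate.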
Many ideas can build on this existing model. Technologies developed by the Project Debater team were recently used on the TV show “That’s Debatable”, and this week they are being used during the Grammy Awards to let fans debate pop culture topics, Slonim tells me.
The ability to comb through thousands of arguments made by other people and compile and summarize them can be very useful in a number of scenarios. The approach can also eliminate bias, or at least reduce it to the bias present in the voice of the crowd.
“Think of a company that would like to collect feedback about a service or a product from thousands of clients; about an employer who would like to learn the opinions of thousands of employees; or a government, who would like to hear the voice of the citizens about a policy being examined. In all these cases, by analyzing people’s opinions, our technology can establish a unique and effective communication channel between the decision-maker, and the people that might be impacted by the decision.”
Project Debater is a crucial step in the development of argument technology, and given the deluge of misinformation we’re faced with on a daily basis, it couldn’t come soon enough.
“Project Debater tackles a grand challenge that acts mainly as a rallying cry for research; it also represents an advance towards AI that can contribute to human reasoning,” Reed concludes in the News & Views article.