Between 2,500 and 4,000 so-called ‘extremists’ have been killed by drone strikes and kill squads in Pakistan since 2004. Since as early as 2007, the NSA has targeted terrorists based on metadata supplied by a machine learning program named Skynet. I have no idea who thought naming a machine designed to list people for assassination Skynet was a bright idea, but that’s beside the point. The real point is that the inner workings of this software, as revealed in part by Edward Snowden’s leaks, suggest that the program might be targeting innocent people.

MQ-9 Reaper taxiing. Image: Wikimedia Commons


Ars Technica talked to Patrick Ball, a data scientist and the executive director of the Human Rights Data Analysis Group. Judging from how Skynet works, Ball says the program appears scientifically unsound in how it chooses which people end up on the blacklist.


In a nutshell, Skynet works like most Big Data corporate machine learning algorithms: it mines the cellular network metadata of 55 million people and assigns each one a score, with the highest scores pointing to terrorist activity. So, based on whom you call, how long your calls last, how frequently you dial a number, and where you are and where you move, Skynet can tell whether you’re a terrorist or not. Swapping SIM cards or handsets is judged as behaviour suspiciously linked to terrorist activity. In all, more than 80 different properties are used by the NSA to build its blacklist.
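The scoring idea can be sketched with a toy example. To be clear, everything below is invented for illustration: the feature names, the weights, and the simple linear score all stand in for the real system’s 80+ features and its random forest classifier, which we only know about from the leaked slides.

```python
# Illustrative sketch only: a toy "suspicion score" built from
# call-metadata features. Feature names and weights are invented;
# the real SKYNET reportedly uses 80+ properties and a random forest.

def suspicion_score(profile):
    """Combine metadata features (each pre-normalised to [0, 1])
    into a single score between 0 and 1."""
    weights = {
        "sim_swaps_per_month": 0.30,      # frequent SIM changes
        "distinct_towers_per_day": 0.25,  # constant movement
        "overnight_travel_trips": 0.25,   # travel into conflict zones
        "calls_to_flagged_numbers": 0.20, # contact with watched numbers
    }
    return sum(weights[k] * profile.get(k, 0.0) for k in weights)

# A roving reporter's metadata can look exactly like a courier's:
journalist = {
    "sim_swaps_per_month": 0.8,
    "distinct_towers_per_day": 0.9,
    "overnight_travel_trips": 0.9,
    "calls_to_flagged_numbers": 0.7,
}
print(round(suspicion_score(journalist), 2))  # → 0.83
```

The point of the sketch is that behaviour-only features cannot distinguish *why* someone moves around and talks to flagged numbers, which is exactly the failure mode discussed below.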

Leaked slide from the NSA’s SKYNET presentation.

So, judging from behaviour alone, Skynet is able to build a list of potential terrorists. But does the algorithm return false positives? In one of the NSA’s leaked slides presenting Skynet, engineers from the intelligence agency boasted about how well the algorithm works by including the highest-rated person on the list, Ahmad Zaidan. The thing is, Zaidan isn’t a terrorist but Al-Jazeera’s long-time bureau chief in Islamabad. As part of the job, Zaidan often meets with terrorists to conduct interviews and moves across conflict zones to report. You can see from the slide that Skynet identified Zaidan as a “MEMBER OF AL-QA’IDA.” Of course, no kill squad was sent after Zaidan because he is a known journalist, but one can only wonder about the fate of less well-known figures who had the misfortune to fit “known terrorist” patterns.

According to Ball, the NSA is doing ‘bad science’ by training its algorithm ineffectively. Skynet’s training data is a subset of 100,000 randomly selected people, defined by their phone activity, plus a group of seven known terrorists. The NSA scientists fed the algorithm the behaviour of six of the terrorists, then asked Skynet to find the seventh in the pool of 100,000.

“First, there are very few ‘known terrorists’ to use to train and test the model,” Ball said. “If they are using the same records to train the model as they are using to test the model, their assessment of the fit is completely bullshit. The usual practice is to hold some of the data out of the training process so that the test includes records the model has never seen before. Without this step, their classification fit assessment is ridiculously optimistic.”
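Ball’s objection about reusing training records can be demonstrated with a deliberately naive sketch. The data here is synthetic and the classifier (1-nearest-neighbour) is chosen precisely because it memorises its training set; it is not the NSA’s actual model, just an illustration of why a held-out test set is essential.

```python
# Why testing on training data inflates fit: a 1-nearest-neighbour
# "classifier" memorises its training records, so when evaluated on
# those same records it is always right. Only held-out data gives an
# honest estimate. All data below is synthetic.
import random

random.seed(0)
# (feature, label) pairs; label 1 = "terrorist" in this toy setup.
# Labels are assigned at random, so there is no real signal to learn.
data = [(random.random(), random.randint(0, 1)) for _ in range(200)]

def predict(train, x):
    # 1-NN: return the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)

train, held_out = data[:150], data[150:]
print(accuracy(train, train))  # → 1.0 (memorisation, not skill)
print(accuracy(train, held_out))  # near chance: there was no signal
```

The gap between the two numbers is the “ridiculously optimistic” fit assessment Ball is warning about.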

Leaked slide from the NSA’s SKYNET presentation.

According to the leaked slides, Skynet has a false positive rate of between 0.008% and 0.18%, which sounds impressive but is actually enough to put thousands of people on a blacklist. Nobody knows whether the NSA applies manual triage (it probably does), but the risk of ordering hits on innocent people is definitely on the table.

“We know that the ‘true terrorist’ proportion of the full population is very small,” Ball pointed out. “As Cory [Doctorow] says, if this were not true, we would all be dead already. Therefore a small false positive rate will lead to misidentification of lots of people as terrorists.”
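The arithmetic behind Ball’s base-rate point is easy to check: apply the false positive rates from the leaked slides to the 55 million people scanned. The figures come straight from the article; the rest is just multiplication.

```python
# Back-of-the-envelope base-rate arithmetic using the article's
# figures: 55 million people scanned, false positive rates between
# 0.008% and 0.18%.
population = 55_000_000

for fpr in (0.00008, 0.0018):  # 0.008% and 0.18%
    false_positives = population * fpr
    print(f"{fpr:.3%} -> {false_positives:,.0f} innocents flagged")
```

Even at the most flattering rate, that is roughly 4,400 innocent people misidentified; at the other end, about 99,000. Since true terrorists are a tiny fraction of the population, the flagged list is dominated by false positives.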

“The larger point,” Ball added, “is that the model will totally overlook ‘true terrorists’ who are statistically different from the ‘true terrorists’ used to train the model.”

“Government uses of big data are inherently different from corporate uses,” Bruce Schneier, a security guru, told Ars Technica. “The accuracy requirements mean that the same technology doesn’t work. If Google makes a mistake, people see an ad for a car they don’t want to buy. If the government makes a mistake, they kill innocents.”

“On whether the use of SKYNET is a war crime, I defer to lawyers,” Ball said. “It’s bad science, that’s for damn sure, because classification is inherently probabilistic. If you’re going to condemn someone to death, usually we have a ‘beyond a reasonable doubt’ standard, which is not at all the case when you’re talking about people with ‘probable terrorist’ scores anywhere near the threshold. And that’s assuming that the classifier works in the first place, which I doubt because there simply aren’t enough positive cases of known terrorists for the random forest to get a good model of them.”

