Power outages



Every summer, when air conditioning pushes power grids to their limits, there are millions – possibly billions – of ways the system can fail. A single failure can cause massive power outages across entire neighborhoods – or even cities.

In truth, a single failure rarely causes this kind of blackout on its own. More often, two or three seemingly small failures occurring simultaneously ripple through a power system – such was the case on Aug. 14, 2003, when 50 million customers lost power in the northeastern United States and Ontario (the largest power outage in North American history). The same thing happened in India in 2012, when no fewer than 700 million people (roughly 10% of the entire global population) were left without power as a result of an initial tripped line and a relay problem.

Modelling millions of possibilities



To prevent similarly small failures from having massive effects, researchers at MIT have devised an algorithm that identifies the most dangerous pairs of failures among the millions possible in a power grid. The algorithm essentially "prunes" all the possible combinations down to the pairs most likely to fail together and cause widespread damage.
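The article doesn't describe the algorithm's internals, but the pruning idea can be illustrated with a toy sketch: score each component by its single-failure impact, then discard any pair whose combined score can't plausibly push the system past a critical threshold, so only a small fraction of pairs need expensive, detailed analysis. All names, scores, and the threshold below are hypothetical, and the additive bound is a stand-in for whatever screening criterion the MIT team actually uses.

```python
import itertools

# Hypothetical single-failure impact scores (e.g., worst resulting line
# overload, in percent). In a real grid these would come from a power-flow
# contingency analysis, not from made-up numbers like these.
single_impact = {"line_a": 40, "line_b": 5, "line_c": 35, "line_d": 2, "line_e": 30}

CRITICAL = 60  # hypothetical threshold above which a cascade is deemed likely


def prune_pairs(impacts, threshold):
    """Keep only the pairs whose combined upper-bound impact could exceed
    the threshold. Assumes a pair's joint impact is at most the sum of its
    single-failure impacts -- a cheap screening bound, not an exact cascade
    simulation; surviving pairs would then be analyzed in detail."""
    survivors = []
    for a, b in itertools.combinations(impacts, 2):
        if impacts[a] + impacts[b] >= threshold:
            survivors.append((a, b))
    return survivors


dangerous = prune_pairs(single_impact, CRITICAL)
print(dangerous)  # only 3 of the 10 possible pairs survive the screen
```

The point of the sketch is the shape of the computation: a cheap bound eliminates the vast majority of pairs up front, which is what lets the real algorithm cut 10 million candidates down to the dangerous 1 percent in minutes.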

They tested the algorithm on data from a mid-sized power grid model consisting of 3,000 components – which yields over 10 million potential pairs of failures. In less than 10 minutes, it ruled out 99 percent of the pairs, isolating the 1 percent most likely to cascade into mass failures.

“We have this very significant acceleration in the computing time of the process,” Turitsyn says. “This algorithm can be used to update what are the events — in real time — that are the most dangerous.”

Working on the weakest links

The researchers tested their work on Poland's power grid – the largest power system whose data is publicly available (kudos to the Polish operators for sharing it). But what do you do after you've isolated the dangers? Well, you strengthen the weak points.

The information can be used to design sensors and communication technologies to improve system reliability and security where the algorithm has shown the system to be most vulnerable. Operators can also temporarily reduce the use of air conditioners, to provide some relief to the system and prevent a cascade of failures where the danger is critical.

“This algorithm, if massively deployed, could be used to anticipate events like the 2003 blackout by systematically discovering weaknesses in the power grid,” Bienstock says. “This is something that the power industry tries to do, but it is important to deploy truly agnostic algorithms that have strong fundamentals.”

But understanding the vulnerabilities in a power grid is a very complicated and delicate issue.

“The number of ways in which a grid can fail is really enormous,” Turitsyn says. “To understand the risks in the system, you have to have some understanding of what happens during a huge amount of different scenarios.”