Feature Story | 24-Jun-2021

Disaster response and mitigation in an AI world

PNNL combines AI and cloud computing with a damage assessment tool to predict the path of wildfires and evaluate the impact of natural disasters

DOE/Pacific Northwest National Laboratory

After the destructive California wildfires of 2019, the U.S. government convened a White House executive forum to develop better ways of protecting the nation and key infrastructure, such as the power grid, from wildfires and other disasters. In 2020 alone, more than 10.3 million acres burned across the United States, three times the 1990-2000 annual average. Between suppression efforts and direct and indirect damages, the 2020 wildfires cost the United States upwards of $170 billion. Add in floods, hurricanes, and other natural disasters, and the toll on the livelihoods of Americans is astronomical.

Andre Coleman and his team of researchers at Pacific Northwest National Laboratory (PNNL) are part of the First Five Consortium, a group of government, industry, and academia experts committed to lessening the impact of natural disasters using technology. Coleman and team are expanding PNNL's operational Rapid Analytics for Disaster Response (RADR) image analytics and modeling suite to mitigate damage to key energy infrastructure. Using a combination of image capturing technology (satellite, airborne, and drone images), artificial intelligence (AI), and cloud computing, Coleman and the team work to not only assess damage but predict it as well.

Accurately forecasting the movement of natural disasters--wildfires, floods, hurricanes, windstorms, tornados, and earthquakes--gives first responders a head start, allowing them to take measures to reduce damage, conduct advance resource planning, and speed infrastructure restoration. For example, should a fire reach an electrical substation or other grid infrastructure, an entire community--homes, businesses, and schools--would experience a power outage that could take days to restore.

"This is an exciting and timely effort to apply artificial intelligence to reduce the impact of wildfires, protect energy infrastructure, and ultimately save lives," said Pamela Isom, acting director of the U.S. Department of Energy (DOE) Artificial Intelligence and Technology Office. "The work has the potential to make a difference in what we expect will be a very challenging wildfire season. This has been a very productive collaboration among several partners, including our colleagues at the Department of Defense's Joint Artificial Intelligence Center, Department of Homeland Security, and at PNNL."

Coleman and his team have been working with these technologies since 2014. The project started with the creation of a change-detection algorithm, which analyzes different types of satellite imagery and determines what changed in the landscape after a storm. Authorities use the tool to rapidly assess physical damage from natural disasters, often before ground teams can get in. The first iteration was used during the 2016 hurricane season to evaluate hurricane damage and determine whether energy infrastructure--electric grid, petroleum, and gas facilities--was damaged or at risk.
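For readers curious about the mechanics, the short Python sketch below illustrates the general idea behind this kind of change detection: compare co-registered before-and-after images and flag pixels whose spectral signature shifts sharply. The band names, the use of a vegetation index, and the threshold are illustrative assumptions for the example only, not details of the RADR algorithm.

# Minimal sketch of pixel-wise change detection between two co-registered
# satellite images. Band names, the vegetation index, and the threshold are
# illustrative assumptions, not details of the RADR implementation.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, a common burn-sensitive index."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def change_mask(before: dict, after: dict, threshold: float = 0.2) -> np.ndarray:
    """Flag pixels whose vegetation index dropped sharply after the event."""
    delta = ndvi(before["nir"], before["red"]) - ndvi(after["nir"], after["red"])
    return delta > threshold  # True where the landscape likely burned or was damaged

# Toy example with synthetic 100x100 reflectance rasters
rng = np.random.default_rng(0)
before = {"nir": rng.uniform(0.4, 0.6, (100, 100)), "red": rng.uniform(0.05, 0.15, (100, 100))}
after = {k: v.copy() for k, v in before.items()}
after["nir"][40:60, 40:60] = 0.1   # simulate a burn scar: vegetation signal collapses
after["red"][40:60, 40:60] = 0.3

mask = change_mask(before, after)
print(f"changed pixels: {mask.sum()} of {mask.size}")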

RADR's analytical products have already proven valuable, but Coleman and the team see opportunities to expand their functionality and are working to improve RADR's response time, damage assessment, visibility, prediction capability, and data accessibility.

To improve timeliness and ground-level assessments, the team incorporated new and different image sources. RADR can pull in images from a variety of satellites with different sensing capabilities, including domestic and international government satellites offered as open data as well as commercial satellites available through the International Disasters Charter. Having multiple sources of overhead imagery cuts response time to just a few hours; the key limitation is the latency of overhead imagery, the time between when images are collected and when they become available for analysis. Once imagery is received, the RADR software can generate an analysis in just over 10 minutes.

To peer through wildfire smoke and cloud cover, the team added infrared imagery to RADR. The new capability provides a previously unavailable, clearer view of the landscape, giving responders information--such as damage to key infrastructure or a safe place to stage relief efforts--that they might not otherwise have had.

The team is also integrating publicly available and crowdsourced images from social media. Often in a disaster, social media networks like Twitter, Flickr, and Instagram offer a wealth of real-time data as users post pictures of what is going on around them. By pairing overhead imagery with on-the-ground images, the team can provide a more complete assessment. Satellite images, for example, may show damage to a generation resource, power lines, or the electric grid; ground images may indicate otherwise. The tool takes all these images, removes the redundant ones, and stitches the rest together to provide a more accurate view of changing conditions.
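As a rough illustration of how redundant crowdsourced photos might be filtered before stitching, the sketch below uses a simple perceptual (average) hash: near-identical images produce nearly identical hashes and can be collapsed to one. The hash size and the Hamming-distance cutoff are assumptions made for the example, not parameters of the RADR pipeline.

# Rough sketch of filtering near-duplicate crowdsourced photos with an
# average hash; hash size and distance cutoff are illustrative assumptions.
import numpy as np

def average_hash(gray: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Downsample a grayscale image to hash_size x hash_size blocks and threshold at the mean."""
    h, w = gray.shape
    cropped = gray[: h - h % hash_size, : w - w % hash_size]
    blocks = cropped.reshape(hash_size, h // hash_size, hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def is_duplicate(hash_a: np.ndarray, hash_b: np.ndarray, max_distance: int = 5) -> bool:
    """Treat two images as redundant if their hashes differ in only a few bits."""
    return int(np.count_nonzero(hash_a != hash_b)) <= max_distance

def deduplicate(images: list[np.ndarray]) -> list[np.ndarray]:
    """Keep only the first image of each near-duplicate group."""
    kept, hashes = [], []
    for img in images:
        h = average_hash(img)
        if not any(is_duplicate(h, other) for other in hashes):
            kept.append(img)
            hashes.append(h)
    return kept

# Toy demo: a photo and a slightly brightened copy collapse to one entry
rng = np.random.default_rng(1)
photo = rng.uniform(0, 1, (64, 64))
near_copy = np.clip(photo + 0.02, 0, 1)
print(len(deduplicate([photo, near_copy])))  # expected: 1, the near-copy is dropped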

As with any computational model, RADR is only as good as its data. The added imagery sources give RADR more data to interpret, improving accuracy. To predict possible outcomes of a wildfire, the team is combining the imagery analysis with weather, fuel, and forecast data. Wind, vegetation, and anything else a fire can consume all factor into how large a fire grows and the direction it takes. By marrying imagery with fuel data and wildfire models, the team hopes to accurately predict the path a fire will take.
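The toy model below gives a flavor of how fuel and wind can drive a spread forecast: a cellular automaton in which burning cells ignite their neighbors with a probability scaled by local fuel and boosted downwind. It is a deliberately simplified stand-in; the grid, probabilities, and wind handling are assumptions for illustration and bear no relation to the actual wildfire models RADR draws on.

# Heavily simplified cellular-automaton sketch of fire spread driven by fuel
# and wind; all parameters are illustrative assumptions.
import numpy as np

def step(burning: np.ndarray, fuel: np.ndarray, wind: tuple[int, int],
         rng: np.random.Generator, base_prob: float = 0.3, wind_boost: float = 0.4) -> np.ndarray:
    """Advance the fire one time step: burning cells may ignite fueled neighbors."""
    new = burning.copy()
    rows, cols = burning.shape
    for r, c in zip(*np.nonzero(burning)):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                # Downwind neighbors ignite more readily; fuel load scales the chance.
                p = fuel[nr, nc] * (base_prob + (wind_boost if (dr, dc) == wind else 0.0))
                if rng.random() < p:
                    new[nr, nc] = True
    return new

# Toy forecast: dense fuel everywhere, wind blowing toward increasing columns
rng = np.random.default_rng(2)
fuel = rng.uniform(0.5, 1.0, (50, 50))      # stand-in for vegetation / fuel load
burning = np.zeros((50, 50), dtype=bool)
burning[25, 5] = True                        # ignition point
for _ in range(20):
    burning = step(burning, fuel, wind=(0, 1), rng=rng)
print(f"cells burning after 20 steps: {burning.sum()}")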

Of course, the assessments need to get into the right hands. Coordinating a response requires local, regional, and national resources, each in different locations but all needing the data as quickly as possible in a format that can be readily accessed and interpreted, particularly in communication-constrained environments. A cloud-based system provides an end-to-end pipeline for retrieving available imagery, processing the analytics, and disseminating data to be used directly in a user's own software, through desktop web browsers, or via mobile applications. Added visual analytics produce images and datasets that are easily interpretable by a wide audience of responders.
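Conceptually, that end-to-end flow can be pictured as three stages chained together, as in the schematic sketch below: fetch the newest imagery for a region, run the analytics, and publish products where responders can reach them. Every function name, storage path, and data field in the sketch is hypothetical, not the RADR cloud implementation.

# Schematic sketch of an end-to-end pipeline: fetch imagery, run analytics,
# publish results. All names and paths are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Product:
    region: str
    change_mask_uri: str   # where the processed damage/change raster would be stored
    summary: str           # short human-readable assessment for responders

def fetch_imagery(region: str) -> list[str]:
    """Pull the newest available scenes for a region (satellite, airborne, drone)."""
    return [f"s3://imagery/{region}/scene_{i}.tif" for i in range(3)]  # placeholder URIs

def run_analytics(scenes: list[str], region: str) -> Product:
    """Run change detection / damage assessment over the retrieved scenes."""
    return Product(region, f"s3://products/{region}/change.tif",
                   summary=f"{len(scenes)} scenes analyzed for {region}")

def disseminate(product: Product) -> None:
    """Publish to endpoints consumed by desktop GIS, browsers, and mobile apps."""
    print(f"[publish] {product.region}: {product.summary} -> {product.change_mask_uri}")

for region in ["substation_12", "transmission_corridor_7"]:
    disseminate(run_analytics(fetch_imagery(region), region))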

Recent years have brought an increase in the frequency and severity of wildfires, floods, and other extreme weather events. Coleman and team hope that at the very least the added capabilities of RADR will give responders information that can be used to make informed decisions, reduce or plan for damage to key energy infrastructure, plan relief efforts, and save lives.

###

By Greg Kunkel

This work was sponsored by the U.S. Department of Homeland Security, DOE's Office of Cybersecurity, Energy Security, and Emergency Response, the U.S. Department of Defense's Joint Artificial Intelligence Center (JAIC), DOE's Artificial Intelligence and Technology Office, and by PNNL.
