Early detection of wildfires is critical to mounting a successful response, and a growing need given trends in both urbanization and climate change. As recently as a few months ago, large wildfires that started in inaccessible, unmonitored areas threatened large urban areas for weeks (witness the Station Fire in the Angeles National Forest). Manned observation towers, the method of choice in decades past, have become unsustainable as the boundaries of urban sprawl grow and the budgets of local governments come under strain.

This project tackles the problem head-on by developing algorithmic and engineering tools for remote detection of incipient fires using networked remote optical sensors in the visible and infrared spectra. While previous efforts proposed blanketing the target area with networked temperature and smoke sensors, that approach does not scale well because it requires sensors to be close to the source in order to trigger an alarm. Remote sensors can detect events at a distance and are limited only by the topography of the environment. Thus one strategically placed camera can monitor an entire valley and would ultimately be suitable for co-deployment with other infrastructure such as cell towers. However, processing these video streams is not trivial, since events of interest, such as the inception of a fire, can manifest themselves in a large number of ways depending on the time of day, season, weather, distance from the sensor, fuel type, etc. The challenge is to tease apart these "nuisance effects" and detect only events of interest. The team will focus on the algorithmic challenge of inferring spatio-temporal events in video streams, and on the systems trade-offs between computation, communication, and energy resources.

Project Report

This project was aimed at developing methods and systems to automatically detect wildfires in video. The purpose was to aid the monitoring of remote wildfires, once done by trained human operators manning remote monitoring stations, by automatically detecting the onset of a remote fire with the smallest possible latency. The challenge in this problem is that the normal-mode variability spans multiple time scales (images change with the time of day, the season, and the weather), and the event of interest (the onset of a wildfire) can manifest itself in a large variety of ways (at different distances and positions, with different colors, speeds, degrees of partial occlusion, etc.) for which no sufficient training set is available. The project produced a general methodology to determine what variability can be eliminated from the data at the outset in pre-processing, what variability can be learned with extended observation, and what variability can be modeled and inferred directly. The result is that, by modeling known photometric variability, for instance, it is not necessary to wait for long periods of time (years) to learn all the seasonal and weather variation. Instead, photometric and geometric variability is modeled explicitly, and only weather variability is learned. The methodology has produced algorithms for detecting wildfires that have been benchmarked against trained and untrained humans and have outperformed both. It has also shown how the same tools can be used for other anomaly detection tasks such as traffic and crowd monitoring. Finally, it has produced a system that is currently online, monitoring fire risk from the live feed of monitoring stations in the San Diego National Forest.
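As a rough illustration of this modeling-versus-learning decomposition, the sketch below shows one way such a pipeline could be structured. It is a minimal sketch, not the project's published algorithm: the OpenCV-based photometric normalization, the running-average background model, all parameter values, and the file name tower_feed.mp4 are illustrative assumptions.

    import cv2
    import numpy as np

    def normalize_photometry(frame):
        # Model away global photometric variability (exposure, time of day)
        # by converting to grayscale and equalizing the intensity histogram.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.equalizeHist(gray)

    class SmokeAnomalyDetector:
        # Learn the slowly varying background (weather, seasonal drift) online
        # with a running average; flag frames whose residual is anomalous.

        def __init__(self, learn_rate=0.01, z_thresh=4.0, burn_in=30):
            self.learn_rate = learn_rate  # how fast the background adapts
            self.z_thresh = z_thresh      # alarm threshold, in std. deviations
            self.burn_in = burn_in        # frames used to estimate "normal"
            self.background = None
            self.res_mean, self.res_var, self.n = 0.0, 0.0, 0

        def step(self, frame):
            obs = normalize_photometry(frame).astype(np.float32)
            if self.background is None:
                self.background = obs.copy()
                return False
            residual = float(np.mean(np.abs(obs - self.background)))
            # Welford-style running statistics of the residual define what
            # counts as normal frame-to-frame variability.
            self.n += 1
            delta = residual - self.res_mean
            self.res_mean += delta / self.n
            self.res_var += (delta * (residual - self.res_mean) - self.res_var) / self.n
            z = delta / (self.res_var ** 0.5 + 1e-6) if self.n > self.burn_in else 0.0
            # Update the background only on normal frames, so an incipient
            # fire is not absorbed into the learned model.
            if z < self.z_thresh:
                self.background += self.learn_rate * (obs - self.background)
            return z >= self.z_thresh

    if __name__ == "__main__":
        cap = cv2.VideoCapture("tower_feed.mp4")  # hypothetical recorded feed
        detector = SmokeAnomalyDetector()
        ok, frame = cap.read()
        while ok:
            if detector.step(frame):
                print("anomaly: possible smoke event")
            ok, frame = cap.read()
        cap.release()

The structure mirrors the report's argument: photometric variability is removed by a fixed model in pre-processing, the slowly varying "normal" appearance is learned with extended observation, and only residuals that deviate from the learned statistics, such as a new smoke plume, raise an alarm.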

Budget Start: 2010-04-15
Budget End: 2013-03-31
Fiscal Year: 2009
Total Cost: $300,000
Institution: University of California Los Angeles
City: Los Angeles
State: CA
Country: United States
Zip Code: 90095