This project anticipates significantly speeding up the prediction of dust storms. The PIs have completed successful preliminary work in this area in collaboration with relevant domain experts at federal agencies such as NOAA and FEMA.
The major technical challenges in this project are geospatial interoperability and format heterogeneity across data sources. The PIs aim to speed up three tasks: collecting high-volume continuous sensor data, running dust-storm prediction models, and disseminating detailed predictions to users such as emergency managers. The first task is accelerated by reducing data volume through data differencing. Choosing a differencing scheme raises several challenges: client-server load balancing, the diversity of geospatial data types (e.g., raster and vector), and the possibility of errors or missing data in some snapshots, which differencing can magnify into dust-forecast errors. The prediction models are accelerated by tailoring caching and computation-job scheduling to dust prediction; for example, a location's dust-storm potential is predicted from the properties of nearby places, weighted by wind direction. To speed up result dissemination, the project studies geospatial data access patterns and develops custom pre-fetching techniques so that a large number of concurrent users can be served.
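As an illustration of the differencing idea, the sketch below delta-encodes successive raster snapshots and flags missing cells explicitly, so that gaps in one snapshot are not silently propagated (and magnified) through later reconstructions. This is a minimal, hypothetical example, not the project's actual scheme; the function names and the NaN-based missing-data convention are assumptions for illustration.

```python
import numpy as np

def diff_snapshot(prev, curr):
    """Delta-encode a raster snapshot against the previous one.
    Cells missing (NaN) in the current snapshot are flagged in a mask,
    so the gap is carried forward explicitly rather than corrupting
    every later snapshot rebuilt from the chain of deltas."""
    delta = curr - prev                 # NaN where curr is missing
    missing_mask = np.isnan(curr)       # record the missing cells
    return delta, missing_mask

def reconstruct(prev, delta, missing_mask, fill=np.nan):
    """Rebuild the full snapshot from the previous one plus the delta,
    re-inserting the missing-data flag instead of a garbage value."""
    curr = prev + delta
    curr[missing_mask] = fill
    return curr

# Example: two 2x2 raster snapshots of dust concentration (units arbitrary);
# one cell in the second snapshot is missing.
prev = np.array([[1.0, 2.0], [3.0, 4.0]])
curr = np.array([[1.5, np.nan], [3.0, 4.5]])

delta, mask = diff_snapshot(prev, curr)
rebuilt = reconstruct(prev, delta, mask)
```

In a real system, only `delta` and `mask` (often highly compressible, since most cells change little between snapshots) would travel from sensor clients to the server, which is where the client-server load-balancing question arises.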
The results of this project will benefit society by improving the forecasting of dust storms. The custom techniques for data differencing, caching, job scheduling, and pre-fetching may also benefit other societal applications such as weather prediction. The project will additionally support curriculum development and the training of graduate and undergraduate students.