This project develops a theory for characterizing the performance of parallel data structures and of the parallel algorithms that use them. Standard metrics for parallel algorithms, such as "work" (the total amount of computation) and "span" (the critical-path length), do not naturally generalize in the presence of contention on shared data. Moreover, standard techniques for analyzing sequential data structures, such as amortization, do not appear to generalize when the data structures are accessed in parallel, in part because performance then depends on the behavior of the underlying parallel task scheduler.
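For reference, the work and span metrics mentioned above support the standard greedy-scheduler guarantee, stated here only as an illustrative baseline rather than quoted from the abstract. Writing W for work, S for span, P for the number of processors, and T_P for the running time on P processors under a greedy scheduler:

\[
  \max\!\left(\frac{W}{P},\, S\right) \;\le\; T_P \;\le\; \frac{W}{P} + S
\]

Contention for a shared data structure introduces serialization delays that are charged to neither W nor S, which is why bounds of this form can stop predicting performance once many processors compete for the same object.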

The specific research goals are as follows: (1) investigate a methodology for designing and analyzing parallel algorithms that use data structures, especially amortized data structures; (2) design parallel schedulers that ameliorate contention on parallel data structures; and (3) design parallel data structures that perform provably well with these schedulers.
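As a concrete illustration of the contention problem that goals (2) and (3) address, the following minimal C++ sketch (an illustrative example with assumed thread and operation counts, not code from the project) has many threads repeatedly incrementing one shared atomic counter; the single counter is the deliberately chosen hot spot.

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kThreads = 8;             // illustrative thread count
    constexpr int kOpsPerThread = 1000000;  // illustrative operation count
    std::atomic<long> counter{0};           // single shared hot spot

    std::vector<std::thread> workers;
    for (int t = 0; t < kThreads; ++t) {
        workers.emplace_back([&counter] {
            for (int i = 0; i < kOpsPerThread; ++i) {
                // Every thread updates the same memory location, so the
                // hardware serializes these operations; adding threads does
                // not speed up the loop the way a work/span count alone
                // would suggest.
                counter.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto& w : workers) w.join();

    std::printf("counter = %ld\n", counter.load());
    return 0;
}

Even though each increment is constant work and the critical path is short, throughput degrades as threads are added; this gap between standard metrics and observed performance is what the proposed theory aims to characterize.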

Today parallel computing is ubiquitous. Modern computing platforms, from smartphones and network routers to personal computers, large clusters, and clouds, all contain multiple processors. Writing parallel code that provably scales well is challenging, and techniques for analyzing sequential algorithms and data structures generally do not apply in the parallel setting. This project will develop a theoretical foundation for characterizing the scalability of parallel programs that contend for access to shared data.

Budget Start: 2012-08-01
Budget End: 2016-07-31
Fiscal Year: 2012
Total Cost: $193,946
Name: Washington University
City: Saint Louis
State: MO
Country: United States
Zip Code: 63130