This project develops a theory for characterizing the performance of parallel data structures and of parallel algorithms that use them. Standard metrics for parallel algorithms, such as "work" (the total amount of computation) and "span" (the critical-path length), do not naturally generalize in the presence of contention on shared data. Moreover, standard techniques for analyzing sequential data structures, such as amortization, do not seem to carry over when the data structures are parallel, in part because performance then depends on the behavior of the underlying parallel task scheduler.
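For context, recall the standard greedy-scheduling bound (a textbook fact, not a contribution of this project): a computation with work $W$ and span $S$ runs on $p$ processors in time

    $T_p \le W/p + S$.

This bound charges unit cost to every memory operation. If $k$ operations contend for the same shared object and must serialize, the critical path can stretch by an additive term on the order of $k$, which the work-span model does not capture; that gap is what the project targets.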

The specific research goals are as follows: (1) investigate a methodology for designing and analyzing parallel algorithms that use data structures, especially amortized ones; (2) design parallel schedulers that ameliorate contention on parallel data structures; and (3) design parallel data structures that perform provably well with these schedulers.
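As a minimal sketch of the contention problem these goals target (hypothetical Python for illustration, not code from the project): when several threads share one lock-protected counter, their increments execute one at a time, so a nominally parallel program pays a serial cost proportional to the number of contending operations.

    import threading

    class ContendedCounter:
        """A lock-protected shared counter: correct, but every increment serializes."""

        def __init__(self):
            self._value = 0
            self._lock = threading.Lock()

        def increment(self):
            # Under contention, p concurrent calls run one at a time, so the
            # critical path grows by Theta(p) even if the surrounding
            # algorithm has small nominal span.
            with self._lock:
                self._value += 1

        @property
        def value(self):
            return self._value

    def worker(counter, n):
        for _ in range(n):
            counter.increment()

    if __name__ == "__main__":
        counter = ContendedCounter()
        threads = [threading.Thread(target=worker, args=(counter, 10_000))
                   for _ in range(8)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counter.value)  # 80000: the result is correct, but all
                              # 80,000 increments executed serially.

Goals (2) and (3) ask, in effect, how a scheduler and a data structure can cooperate so that this serialization cost is provably bounded rather than left implicit.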

Parallel computing is now ubiquitous. Modern computing platforms, from smartphones and network routers to personal computers, large clusters, and clouds, all contain multiple processors. Writing parallel code that provably scales well is challenging, and techniques for analyzing sequential algorithms and data structures generally do not apply to parallel code. This project will develop a theoretical foundation for characterizing the scalability of parallel programs that contend for access to shared data.

Agency: National Science Foundation (NSF)
Institute: Division of Computing and Communication Foundations (CCF)
Type: Standard Grant
Application #: 1218188
Program Officer: Tracy J. Kimbrel
Budget Start: 2012-08-01
Budget End: 2015-07-31
Fiscal Year: 2012
Total Cost: $138,994
Institution: Georgetown University
City: Washington
State: DC
Country: United States
Zip Code: 20057