The objective of this research is to substantially improve the productivity of programmers writing applications for petaflop-scale systems by using programmer-defined, lightweight transactions as the single abstraction for expressing parallelism, delineating communication, reasoning about memory consistency, providing failure recovery, and enabling performance optimization. Using transactions as the central abstraction for designing and programming parallel systems leads to a shared-memory programming and memory-coherence model called Transactional Coherence and Consistency (TCC). Transactions simplify parallel programming by providing a way to write correct shared-memory programs without threads, locks, or semaphores. TCC systems provide high-performance communication and synchronization, with hardware mechanisms that keep memory coherent and consistent based on programmer-defined transactions.

To achieve this objective, the research program will focus on five activities. First, the researchers will develop new abstractions that use transactions to provide a shared-memory programming model that makes it much easier to analyze and optimize application performance. Second, the researchers will develop performance-monitoring systems that use transactions to detect performance bottlenecks and provide intuitive feedback to programmers. Third, the researchers will use the transaction-based programming model to implement compiler-based static and dynamic feedback-directed optimizations that automatically detect and eliminate performance bottlenecks, and will extend the scalability of transaction coherence to 10^5 processors. Fourth, the researchers will use transactions to optimize the performance of parallel storage I/O. Finally, the researchers will develop simulation and emulation technology that will enable experimentation with petaflop-scale systems that support lightweight transactions before such systems are available.
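The programming model described above can be illustrated with a small sketch. The `atomic` context manager below is purely hypothetical — a software stand-in, built on a single global lock, for the hardware-supported transactions TCC provides — but it shows the key idea: the programmer marks which regions must execute atomically, rather than managing locks and semaphores directly.

```python
import threading

# Hypothetical stand-in for a programmer-defined transaction. In a TCC
# system the hardware would buffer the transaction's writes and commit
# them atomically; here a single global lock merely emulates the
# "appears to execute in isolation" guarantee for illustration.
_commit_lock = threading.Lock()

class atomic:
    """Marks a block as a transaction (illustrative, not the TCC API)."""
    def __enter__(self):
        _commit_lock.acquire()
    def __exit__(self, *exc):
        _commit_lock.release()
        return False  # do not suppress exceptions

counter = 0  # shared state updated by several threads

def worker(iterations):
    global counter
    for _ in range(iterations):
        with atomic():       # transaction instead of explicit locking
            counter += 1     # read-modify-write is isolated

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates are lost
```

The programmer's obligation is only to delimit transactions correctly; the system — hardware in TCC's case — is responsible for coherence, consistency, and conflict handling.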
Broader Impacts

The broad impact of this research is to use transaction-based parallel programming to educate and enable a new class of parallel software developers who can implement parallel software with the same facility with which sequential software is written today. Enabling parallel software development will be critical to advancing computing performance, from desktop applications to large-scale scientific and commercial applications. While parallel processing has long been essential for large-scale machines, recent announcements by Intel, AMD, and IBM demonstrate that it will soon be critical for desktop applications as well. To educate students, other researchers, and industry about the benefits of transaction-based parallel programming, the researchers will incorporate transactional programming concepts into the parallel programming curriculum and make transaction-based applications available to the wider scientific community. The researchers expect that releasing a suite of optimized transaction-based applications, along with simulation technology, will be instrumental in encouraging other researchers to experiment with and explore the benefits of transactions. To further promote the use of transaction-based parallel programming, the researchers will organize a tutorial or workshop at a major scientific computing conference covering the principles of, and experience with, programming with transactions.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Type: Standard Grant (Standard)
Application #: 0444470
Program Officer: Almadena Y. Chtchelkanova
Project Start:
Project End:
Budget Start: 2004-11-01
Budget End: 2007-10-31
Support Year:
Fiscal Year: 2004
Total Cost: $750,000
Indirect Cost:
Name: Stanford University
Department:
Type:
DUNS #:
City: Palo Alto
State: CA
Country: United States
Zip Code: 94304