Within a few years, every laptop, desktop, and server processor will be a multi-core machine. For an application to perform well, it must be efficiently parallelized to execute on all of the cores on the machine. Chip manufacturers must therefore provide architectures that make it convenient for programmers to partition an application into multiple parallel threads. The Transactional Memory (TM) programming model is widely acknowledged as a leading model for concurrency: it eliminates deadlocks, provides high performance in the common case, and greatly simplifies programming. It is receiving considerable attention in research conferences and is being incorporated into commercial processors. One of the largest overheads in such a system is the communication required between cores to implement transactional semantics; this overhead significantly impacts the performance and power consumption of future processors.
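
To make the programming model concrete (this example is not part of the original abstract), the sketch below shows how a TM atomic block replaces explicit locking. It assumes GCC's software transactional memory extension (compiled with -fgnu-tm); the account-transfer scenario and all identifiers are illustrative assumptions, not the project's implementation.

    /* transfer.c : compile with  gcc -fgnu-tm transfer.c */
    #include <stdio.h>

    long accounts[2] = {100, 100};

    /* The atomic block executes as a transaction: the TM system detects
       conflicting accesses from other threads and re-executes the block,
       so no lock ordering is needed and deadlock cannot arise. */
    void transfer(int from, int to, long amount)
    {
        __transaction_atomic {
            accounts[from] -= amount;
            accounts[to]   += amount;
        }
    }

    int main(void)
    {
        transfer(0, 1, 25);
        printf("%ld %ld\n", accounts[0], accounts[1]);
        return 0;
    }

In a lock-based version, the programmer would have to acquire per-account locks in a consistent order to avoid deadlock; the transactional version instead leaves conflict detection to the TM system, and that conflict detection is precisely the inter-core communication whose cost this project targets.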

The project explores both algorithms that reduce the amount of communication required and mechanisms that reduce the overhead of that communication. The key insight behind the proposed work is that an optimal on-chip network and transactional memory implementation will emerge from closely studying the interaction between the two. The insight developed during this work will lead to better methodologies for deriving an optimal on-chip network. The simulators and tools developed during the research effort will also support projects in graduate and undergraduate courses at the University of Utah.

Project Start:
Project End:
Budget Start: 2008-07-01
Budget End: 2012-06-30
Support Year:
Fiscal Year: 2008
Total Cost: $275,000
Indirect Cost:
Name: University of Utah
Department:
Type:
DUNS #:
City: Salt Lake City
State: UT
Country: United States
Zip Code: 84112