Transactional Memory (TM) is the union of two transformative ideas: first, that parallel programming will be easier if programmers can simply specify which operations in their code should be atomic, without specifying how to make them atomic; second, that this simplicity can be supported -- and performance often improved -- by a speculative implementation that executes atomic blocks in parallel, and backs out and retries when -- and only when -- those blocks conflict with one another. After many years of research, TM is now entering widespread use. Hardware support is commercially available from both IBM and Intel; software support is standard in Haskell and under consideration in several other programming languages -- notably C++. The sponsored research extends the state of the art in transactional memory by focusing on (1) software acceleration of fast hardware transactions and (2) hardware acceleration of rich software transactions.
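To make the programming model concrete, the following minimal Haskell sketch (using the standard Control.Concurrent.STM library mentioned above) marks an account transfer as atomic and leaves conflict detection and retry entirely to the run-time system; the transfer function and account variables are illustrative assumptions, not artifacts of the proposed work.

    import Control.Concurrent.STM

    -- Two shared accounts, each a transactional variable.
    -- The programmer states *what* must be atomic; the run-time
    -- system speculates, detects conflicts, and retries as needed.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      modifyTVar' from (subtract amount)
      modifyTVar' to   (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)   -- the atomic block
      readTVarIO a >>= print         -- 60
      readTVarIO b >>= print         -- 40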
The intellectual merits in focus area 1 comprise compiler-based techniques to increase speculation success rates, by safely and automatically moving commonly conflicting operations out of transactions, and by "pipelining" execution to serialize the remaining causes of conflict. The intellectual merits in focus area 2 comprise enhancements to the STM run-time system for the Haskell programming language, where hardware support can be used to accelerate transactions whose semantics are too complex to implement directly with commercial hardware. The broader impacts begin with the easier construction of correct, efficient parallel code by programmers of all skill levels. Moreover, the work will impact computer science and allied fields by smoothing the transition to ubiquitous multithreading, thereby helping to sustain performance improvements into the next generation of computing. In summary, the work stands to advance almost any domain, across academia and industry, that is driven by parallel computing.
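To illustrate the rich semantics at issue in focus area 2, the sketch below uses Haskell STM's retry and orElse, composable blocking operations with no direct counterpart in current commercial hardware TM; the withdraw and payFrom functions and the account names are hypothetical examples, not drawn from the proposed run-time system.

    import Control.Concurrent.STM

    -- Withdraw from an account, blocking (via retry) until funds suffice.
    withdraw :: TVar Int -> Int -> STM ()
    withdraw acct amount = do
      balance <- readTVar acct
      if balance < amount
        then retry                        -- abort and wait for acct to change
        else writeTVar acct (balance - amount)

    -- Try the checking account first; fall back to savings.  orElse composes
    -- alternatives, something plain hardware speculation cannot express.
    payFrom :: TVar Int -> TVar Int -> Int -> STM ()
    payFrom checking savings amount =
      withdraw checking amount `orElse` withdraw savings amount

    main :: IO ()
    main = do
      checking <- newTVarIO 10
      savings  <- newTVarIO 90
      atomically (payFrom checking savings 25)   -- falls back to savings
      readTVarIO savings >>= print               -- 65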