Most modern computer systems have multiple computational cores, which applications must exploit in parallel to achieve good performance. Unfortunately, developing reliable and efficient parallel applications is notoriously difficult, in large part due to nondeterministic scheduling, which can context-switch between parallel threads at any program point. The resulting unpredictability makes parallel software extremely difficult to test, debug, and get right.
This project proposes to investigate and develop a better model for parallel programming by extending simple cooperative concurrency to deterministic cooperative parallelism. Cooperative concurrency is a non-preemptive model in which a task runs until it explicitly yields control; it requires minimal synchronization, and even buggy or incorrect preemptive programs often execute correctly under it, effectively reducing design complexity. Cooperative concurrency is simple, but by itself it yields no speedup; this proposal therefore leverages software transactional memory to extract parallelism from cooperative programs.

The proposal addresses three tightly coupled problems: providing simpler parallel semantics, developing novel techniques for efficient parallel execution, and integrating with existing complementary techniques such as automatic parallelization and thread-level speculation. Graduate and undergraduate researchers, some from under-represented groups, will be involved in the research. Overall, the project aims to make a fundamental advance toward parallel applications that are simpler to program, with impact not only in the computer science community but also in other fields such as mobile systems.
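To make the non-preemptive model concrete, the following minimal Python sketch (the task and scheduler names are illustrative, not from the proposal) shows cooperative tasks that run until an explicit yield point. Because a task can only be descheduled where it yields, the interleaving is deterministic and shared state needs no locks:

```python
def counter_task(results, n):
    """Increment a shared counter n times, yielding after each step."""
    for _ in range(n):
        results["count"] += 1  # safe: no other task runs until we yield
        yield                  # explicit scheduling point

def round_robin(tasks):
    """Run tasks in a fixed order until all are exhausted."""
    queue = list(tasks)
    while queue:
        task = queue.pop(0)
        try:
            next(task)          # run task to its next yield point
            queue.append(task)  # re-queue it in deterministic order
        except StopIteration:
            pass                # task finished; drop it

results = {"count": 0}
round_robin([counter_task(results, 3), counter_task(results, 3)])
print(results["count"])  # always 6: deterministic, no locks needed
```

Under preemptive threading, the unsynchronized increment above would be a data race; under cooperative scheduling it is correct by construction, which is the simplicity this proposal aims to preserve while recovering parallelism via software transactional memory.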