A large body of existing sequential applications must be migrated to multicore platforms with maximum transparency. To achieve this, techniques for automatic parallelization must be pushed to their limits, and current shared-memory programming paradigms must be reconsidered with respect to their ease of use and their ability to support a wide range of programming needs. The potential for subtle, hard-to-find bugs such as data races must be minimized, and strategies must be devised to ensure that remaining errors are caught at run time.
In this project, the investigator builds on existing compiler technology for automatic parallelization and OpenMP translation to facilitate application development for multicore systems. To this end, a novel translation strategy is developed for this hybrid sequential/directive-based programming model, one that takes the characteristics of multicore platforms into account and addresses power-conservation constraints. The approach minimizes the likelihood of erroneous code and relies on the compilation system to reduce synchronization, improve load balance, and otherwise adapt the program to the system. The investigator develops strategies to gather both coarse-grain and fine-grain performance information for selected program regions with low overhead. The project also creates techniques to exploit this information dynamically and to adapt and optimize the program at different levels of program representation. The ideas will be implemented and deployed via an infrastructure that combines a robust open-source compiler, its runtime system, and the corresponding dynamic compiler into a cohesive environment supporting the translation and execution of sequential and directive-based programs.