To exploit the power of multi-core processors effectively, programs must be structured as a collection of independent tasks, with separate tasks executing on separate cores. The complexity of modern software makes it difficult for programmers to express their algorithms within this model, due both to the amount of program analysis needed to identify regions of code that can run in parallel, and to the likelihood that different regions of code will be best served by distinct, and possibly incompatible, models of parallel computing. In particular, some codes are best parallelized through speculative techniques, while others are better served by static analysis of regular code, such as that provided by the polyhedral approach.
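As a concrete illustration of this distinction (a sketch, not code from the proposal), the fragment below contrasts a regular loop, whose dependences a polyhedral analyzer can model exactly, with an irregular loop whose dependences are only known at run time. The function names regular_kernel and irregular_kernel are hypothetical.

```c
/* Illustrative only: two loop classes that favor different parallelization models. */

void regular_kernel(double *x, const double *y, int n)
{
    /* Affine loop bounds and subscripts: a polyhedral analyzer can prove the
     * iterations independent and parallelize the loop statically. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        x[i] = 2.0 * y[i] + 1.0;
}

void irregular_kernel(double *a, const int *b, int n)
{
    /* Indirect subscript a[b[i]]: cross-iteration dependences cannot be proven
     * absent at compile time, so speculative execution (with run-time conflict
     * detection and rollback) is the more natural parallelization model. */
    for (int i = 0; i < n; i++)
        a[b[i]] += 1.0;
}
```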
The proposed research addresses fundamental issues in the creation of parallel programs through a novel combination of automatic and profile-driven techniques. The heart of the research is a robust system based on machine learning, through which a compilation tool can analyze a program, assess the suitability of a variety of parallelization techniques to that program, and then apply the most promising techniques automatically. At run-time, the program will also employ learning to adapt its behavior to its inputs and environment. Furthermore, the programmer will be given a profile-driven feedback mechanism with which to guide the tool's refinement of the parallelization and to steer the program's self-tuning behavior. In conjunction with the creation of this system, new algorithms and tools for speculative parallelization and large-scale program analysis will be developed. Prototypes and source code will be distributed as open-source software.
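The compile-time selection step might, under the assumptions sketched here, look roughly like the following: extract static features of a candidate loop and ask a trained predictor which parallelization technique is most promising. All names (loop_features_t, predict_strategy) are illustrative placeholders, and the hard-coded thresholds stand in for parameters a real system would learn from profile data.

```c
/* Hypothetical sketch of learned strategy selection; not an existing tool's API. */
#include <stdio.h>

typedef enum { SEQUENTIAL, POLYHEDRAL, SPECULATIVE } strategy_t;

typedef struct {
    int has_affine_subscripts;   /* all array subscripts affine in loop indices */
    int has_indirect_accesses;   /* accesses of the form a[b[i]] */
    double est_iteration_count;  /* from static analysis or a profile run */
} loop_features_t;

/* Stand-in for a model trained offline on profiled speedups; a real system
 * would load learned parameters rather than hard-code thresholds. */
static strategy_t predict_strategy(const loop_features_t *f)
{
    if (f->est_iteration_count < 64.0)
        return SEQUENTIAL;               /* too little work to parallelize */
    if (f->has_affine_subscripts && !f->has_indirect_accesses)
        return POLYHEDRAL;               /* dependences provable statically */
    return SPECULATIVE;                  /* fall back to run-time checking */
}

int main(void)
{
    loop_features_t loop = { .has_affine_subscripts = 0,
                             .has_indirect_accesses = 1,
                             .est_iteration_count = 1e6 };
    printf("chosen strategy: %d\n", predict_strategy(&loop));
    return 0;
}
```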