Today, two trends conspire to slow the pace of progress in science, engineering, and academic research in general. First, researchers increasingly rely on computation to process ever larger data sets and to perform ever more computationally intensive simulations. Second, individual processor speeds are no longer increasing with every computer chip generation as they once were. To compensate, processor manufacturers now include more processors, or cores, on a chip with each generation. To obtain peak performance on these multicore chips, software must be written so that it can execute in parallel and thereby use the additional cores. Unfortunately, writing efficient, explicitly parallel software with today's software-development tools requires advanced training in computer science, and even with such training the task remains extremely difficult, error-prone, and time-consuming. This project will create a new high-level programming platform, called Implicit Parallel Programming (IPP), designed to bring the performance promises of modern multicore machines to scientists and engineers without the cost of teaching these users to write explicitly parallel programs. In the short term, this research will provide direct and immediate benefit to researchers in several areas of science, as the PIs will pair computer science graduate students with non-computer-science graduate students to study, analyze, and develop high-value scientific applications. In the long term, this research has the potential to fundamentally change the way scientists obtain performance from parallel machines, improve their productivity, and accelerate the overall pace of science. This work will also have major educational impact through the development of courseware and tutorial materials, usable by all scientists and engineers, on the topics of explicit and implicit parallel computing.

IPP will operate by allowing users to write ordinary sequential programs and then to augment them with logical specifications that expand (or abstract) the set of sequential program behaviors. This capacity for abstraction will give parallelizing compilers the flexibility to optimize programs more aggressively than would otherwise be possible; indeed, it will enable effective parallelization in cases where it was previously impossible. The language design and compiler implementation will be accompanied by formal semantic analysis that will be used to judge the correctness of compiler transformations, provide a foundation for reasoning about programs, and guide the creation of static analysis and program defect detection algorithms. Moreover, since existing programs and languages can be viewed as (degenerately) implicitly parallel, decades of investment in human expertise, languages, compilers, methods, tools, and applications are preserved. In particular, it will be possible to upgrade old, slow, sequential legacy programs and libraries merely by adding a few auxiliary specifications, without overhauling the entire system architecture. Compiler technology will help guide scientists and engineers through this process, further simplifying the task. Conceptually, IPP restores an important layer of abstraction, freeing programmers to write high-level code designed to be easy to understand, rather than low-level code architected according to the specific demands of a particular parallel machine.
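To make the mechanism concrete, the following is a minimal sketch in C of what an IPP-style specification might look like. The `#pragma ipp commutative` directive and its semantics are hypothetical illustrations invented for this sketch, not part of any released tool; because standard C compilers ignore unrecognized pragmas, the program still compiles and runs sequentially as written.

```c
/* Hypothetical sketch of an IPP-style annotation. The "#pragma ipp"
 * directive below is an assumption for illustration only; a standard
 * C compiler ignores unknown pragmas, so this program builds and
 * runs with ordinary sequential semantics. */
#include <stdio.h>

#define N 1000000

int main(void) {
    double sum = 0.0;

    /* The (hypothetical) specification asserts that the order of the
     * floating-point additions does not matter to the program's
     * intent, even though reassociation can change the bit-exact
     * result. This expands the set of acceptable behaviors, giving a
     * parallelizing compiler license to split the reduction across
     * cores, which strict sequential semantics would forbid. */
    #pragma ipp commutative(sum)   /* hypothetical annotation */
    for (int i = 0; i < N; i++) {
        sum += 1.0 / (double)(i + 1);   /* harmonic series term */
    }

    printf("H(%d) = %f\n", N, sum);
    return 0;
}
```

The annotation does not change what the sequential program computes; it declares that reassociated orders of the additions are also acceptable outcomes, which is exactly the kind of expanded behavior set that would let a compiler parallelize a loop it would otherwise have to leave sequential.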

Agency: National Science Foundation (NSF)
Institute: Division of Advanced CyberInfrastructure (ACI)
Type: Standard Grant (Standard)
Application #: 1047879
Program Officer: Sol Greenspan
Project Start:
Project End:
Budget Start: 2010-10-01
Budget End: 2016-03-31
Support Year:
Fiscal Year: 2010
Total Cost: $1,740,214
Indirect Cost:
Name: Princeton University
Department:
Type:
DUNS #:
City: Princeton
State: NJ
Country: United States
Zip Code: 08544