The Message Passing Interface (MPI) continues to dominate the supercomputing landscape as the primary parallel programming model, and a large variety of scientific applications in use today are based on it. On current and next-generation High-End Computing (HEC) systems, it is essential to understand the interaction between time-critical applications and the underlying MPI implementations in order to optimize them for both scalability and performance. Current users of HEC systems develop their applications with high-performance MPI implementations but analyze and fine-tune the behavior using standalone performance tools. Essentially, each software component views the other as a black box, with little sharing of information or access to capabilities that might be useful in optimization strategies. The lack of a standardized interface that allows interaction between the profiling tool and the MPI library has been a major impediment. The newly introduced MPI_T interface in the MPI-3 standard provides a simple mechanism that allows MPI implementers to expose variables representing configuration parameters or performance measurements from within the implementation for the benefit of tools, tuning frameworks, and other support libraries. However, few performance analysis and tuning tools take advantage of the MPI_T interface, and none use it to optimize dynamically at execution time. This research and development effort aims to build a software infrastructure for MPI performance engineering using the new MPI_T interface.
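As a concrete illustration of what MPI_T looks like from a tool's perspective, the following C sketch enumerates and prints every control variable the linked MPI library exposes, using only standard MPI-3 MPI_T calls. Which variables appear, and how many, depends entirely on the implementation (e.g. MVAPICH); this is a minimal sketch, not part of the project's deliverables.

    /* Sketch: list all MPI_T control variables exposed by the MPI library. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, num_cvars;

        /* MPI_T has its own init/finalize and may be used before MPI_Init. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);

        MPI_T_cvar_get_num(&num_cvars);
        printf("This MPI library exposes %d control variables:\n", num_cvars);

        for (int i = 0; i < num_cvars; i++) {
            char name[256], desc[1024];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("  [%d] %s: %s\n", i, name, desc);
        }

        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }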

With the adoption of MPI_T in the MPI standard, it is now possible to take concrete steps toward close interaction between, and integration of, MPI libraries and performance tools. This research, undertaken by a team of computer scientists from OSU and UO representing the open-source MVAPICH and TAU projects, aims to create an open-source integrated software infrastructure built on the MPI_T interface, which defines the API for interaction and information interchange, to enable fine-grained performance optimizations for HPC applications. The challenges addressed by the project include: 1) enhancing existing support for MPI_T in MVAPICH to expose a richer set of performance and control variables; 2) redesigning TAU to take advantage of the new MPI_T variables exposed by MVAPICH; 3) extending and enhancing TAU and MVAPICH with the ability to generate recommendations and performance engineering reports; 4) proposing fundamental design changes to make MPI libraries like MVAPICH "reconfigurable" at runtime; and 5) adding support to MVAPICH and TAU for interactive performance engineering sessions. The framework will be validated on a variety of HPC benchmarks and applications, and the integrated middleware and tools will be made publicly available to the community. The research will have a significant impact by enabling optimizations of HPC applications that have previously been difficult to achieve. As a result, it will contribute to deriving "best practice" guidelines for running on next-generation multi-petaflop and exascale systems. The research directions and their solutions will be incorporated into the PIs' curricula to train undergraduate and graduate students.
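Challenges 2) and 4) above hinge on the MPI_T session workflow for performance variables (pvars) and on writing control variables (cvars) at runtime. The sketch below shows how a tool such as TAU can sample a pvar during execution; the variable name "unexpected_recvq_length" is a placeholder, since real pvar names are implementation-specific and must be discovered through MPI_T_pvar_get_info, and the sketch assumes an integer-typed pvar not bound to any MPI object.

    /* Sketch: sample one MPI_T performance variable (pvar) at runtime.
     * The pvar name below is a placeholder; real names are implementation-
     * specific and must be discovered via MPI_T_pvar_get_info. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int provided, num_pvars, target = -1, target_continuous = 0;

        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);

        /* Scan the pvar table for the (hypothetical) variable of interest. */
        MPI_T_pvar_get_num(&num_pvars);
        for (int i = 0; i < num_pvars && target < 0; i++) {
            char name[256], desc[1024];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, var_class, bind, readonly, continuous, atomic;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                                &datatype, &enumtype, desc, &desc_len, &bind,
                                &readonly, &continuous, &atomic);
            if (strcmp(name, "unexpected_recvq_length") == 0) { /* placeholder */
                target = i;
                target_continuous = continuous;
            }
        }

        if (target >= 0) {
            MPI_T_pvar_session session;
            MPI_T_pvar_handle handle;
            int count;
            unsigned long long value = 0;   /* assumes an integer-typed pvar */

            MPI_T_pvar_session_create(&session);
            /* NULL object handle: assumes the pvar is not bound to an object. */
            MPI_T_pvar_handle_alloc(session, target, NULL, &handle, &count);
            if (!target_continuous)        /* continuous pvars need no start */
                MPI_T_pvar_start(session, handle);

            /* ... application communication would run here ... */

            MPI_T_pvar_read(session, handle, &value);
            printf("pvar value: %llu\n", value);

            if (!target_continuous)
                MPI_T_pvar_stop(session, handle);
            MPI_T_pvar_handle_free(session, &handle);
            MPI_T_pvar_session_free(&session);
        }

        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }

Writing a cvar at runtime follows the same discover-then-handle pattern using MPI_T_cvar_handle_alloc and MPI_T_cvar_write, which is the standard mechanism on which the "reconfigurable at runtime" goal builds.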

Agency: National Science Foundation (NSF)
Institute: Division of Advanced CyberInfrastructure (ACI)
Type: Standard Grant (Standard)
Application #: 1450440
Program Officer: Stefan Robila
Budget Start: 2015-09-01
Budget End: 2020-08-31
Fiscal Year: 2014
Total Cost: $1,200,000
Name: Ohio State University
City: Columbus
State: OH
Country: United States
Zip Code: 43210