The Message Passing Interface (MPI) is a very widely used parallel programming model on modern High-End Computing (HEC) systems. Many performance aspects of MPI libraries, such as latency, bandwidth, scalability, memory footprint, cache pollution, and overlap of computation and communication, are highly dependent on system configuration and application requirements. Additionally, modern clusters are changing rapidly with the growth of multi-core processors and commodity networking technologies such as InfiniBand and 10GigE/iWARP. They are becoming diverse and heterogeneous, with varying numbers of processor cores, processor speeds, memory speeds, multi-generation network adapters/switches, I/O interface technologies, and accelerators (GPGPUs). Typically, an MPI library copes with this diversity in platforms and sensitivity of applications by exposing a range of runtime parameters, which are tuned by the developers at release time, by system administrators, or by end users. The resulting default settings, however, are not necessarily optimal for every system configuration and application.
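To make the notion of runtime parameters concrete, the sketch below enumerates the control variables (the library's tunable knobs, such as protocol thresholds or buffer sizes) that an MPI library exposes through the MPI_T tool information interface discussed later in this abstract. This is an illustrative sketch, not part of the proposed framework; it assumes an implementation supporting the MPI-3 tools interface, and the variable names and counts it reports are library-specific. Error checking is omitted for brevity.

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int provided, ncvar;

        /* MPI_T can be initialized independently of (and before) MPI_Init. */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        /* Ask the library how many tunable control variables it exposes. */
        MPI_T_cvar_get_num(&ncvar);
        printf("This MPI library exposes %d control variables\n", ncvar);

        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[1024];
            int name_len = sizeof name, desc_len = sizeof desc;
            int verbosity, binding, scope;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            /* name_len/desc_len are in-out: buffer sizes in, string lengths out. */
            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                                &enumtype, desc, &desc_len, &binding, &scope);
            printf("  %s : %s\n", name, desc);
        }

        MPI_T_finalize();
        return 0;
    }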

The MPI library of a typical proprietary system goes through heavy performance tuning for a range of applications. Since commodity clusters provide greater flexibility in their configurations (processor, memory, and network), it is very hard to achieve optimal tuning using the released version of an MPI library with its default settings. This leads to the following broad challenge: "Can a comprehensive performance tuning framework be designed for an MPI library so that next-generation InfiniBand, 10GigE/iWARP, and RoCE clusters and applications will be able to extract 'bare-metal' performance and maximum scalability?" The investigators, including computer scientists from The Ohio State University (OSU) and the Ohio Supercomputer Center (OSC) as well as computational scientists from the Texas Advanced Computing Center (TACC) and the San Diego Supercomputer Center (SDSC) at the University of California San Diego (UCSD), will address this challenge with innovative solutions.

The investigators will specifically address the following challenges: 1) Can a set of static tools be designed to optimize the performance of an MPI library at installation time? 2) Can a set of dynamic tools with low overhead be designed to optimize performance on a per-user and per-application basis during production runs (a sketch of such a tool appears below)? 3) How can the proposed performance tuning framework be integrated with the upcoming MPI_T tools interface? 4) How should MPI libraries be configured on a given system to deliver different optimizations to a set of driving applications? and 5) What benefits (in terms of performance, scalability, memory efficiency, and reduction in cache pollution) can the proposed tuning framework achieve? The research will be driven by a set of applications from established NSF computational science researchers running large-scale simulations on the TACC Ranger system and on other systems at OSC, SDSC, and OSU. The proposed designs will be integrated into the open-source MVAPICH2 library.
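As a rough sketch of what a low-overhead dynamic tool (challenge 2) could look like on top of MPI_T, the following program binds a performance variable and samples it around a communication phase. The variable index 0, the NULL object binding, and the unsigned long long counter type are placeholder assumptions for illustration only; a real tool would first locate the counter of interest and its datatype via MPI_T_pvar_get_info.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, count;
        MPI_T_pvar_session session;
        MPI_T_pvar_handle handle;
        unsigned long long value; /* assumed counter type; query the real one */

        MPI_Init(&argc, &argv);
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        /* Sessions isolate concurrent tools measuring the same library. */
        MPI_T_pvar_session_create(&session);

        /* Placeholder: bind performance variable 0 (not tied to any MPI
           object, hence NULL); a real tool would look up the index of a
           specific counter via MPI_T_pvar_get_info. */
        MPI_T_pvar_handle_alloc(session, 0, NULL, &handle, &count);

        /* start/stop assumes the variable is not a continuous one. */
        MPI_T_pvar_start(session, handle);
        /* ... application communication phase to be measured ... */
        MPI_T_pvar_read(session, handle, &value);
        MPI_T_pvar_stop(session, handle);

        printf("pvar 0 after phase: %llu\n", value);

        MPI_T_pvar_handle_free(session, &handle);
        MPI_T_pvar_session_free(&session);
        MPI_T_finalize();
        MPI_Finalize();
        return 0;
    }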

Agency: National Science Foundation (NSF)
Institute: Division of Advanced CyberInfrastructure (ACI)
Type: Standard Grant (Standard)
Application #: 1147926
Program Officer: Rajiv Ramnath
Budget Start: 2012-06-01
Budget End: 2016-05-31
Fiscal Year: 2011
Total Cost: $450,772
Name: University of California San Diego
City: La Jolla
State: CA
Country: United States
Zip Code: 92093