This project demonstrates the burst capability of Exascale-class High Throughput Computing (HTC) to advance multi-messenger astrophysics (MMA) with the IceCube detector. At peak, the equivalent of 1.2 Exaflops of 32-bit floating-point compute power is used, approximately 3 times the scale of the #1 system on the June 2019 Top500 supercomputer list. In one hour, roughly 125 terabytes of input data are used to produce 250 terabytes of simulated data, which is stored at the University of Wisconsin–Madison to advance IceCube science. This output amounts to about 5% of the annual simulation data produced by the IceCube collaboration in 2018.
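
A rough sanity check of these scale figures is sketched below in Python; the Summit peak-performance values and the FP32/FP64 factor for its GPUs are illustrative assumptions, not part of the award text.

```python
# Back-of-the-envelope check of the scale figures quoted above.
# Assumed (not from the award text): the June 2019 Top500 #1 system, Summit,
# has a theoretical peak of roughly 0.2 Exaflops in 64-bit precision, and its
# V100 GPUs run about twice as fast in 32-bit precision.
burst_fp32_eflops = 1.2                      # peak burst compute used (FP32)
summit_fp64_eflops = 0.2                     # approximate Summit peak (FP64)
summit_fp32_eflops = 2 * summit_fp64_eflops

print(f"Burst vs. Top500 #1: ~{burst_fp32_eflops / summit_fp32_eflops:.0f}x")  # ~3x

# One hour of the burst turns ~125 TB of input into ~250 TB of output,
# quoted as about 5% of IceCube's 2018 simulation production, implying an
# annual volume on the order of 250 / 0.05 = 5000 TB (~5 PB).
output_tb = 250
print(f"Implied 2018 simulation volume: ~{output_tb / 0.05:.0f} TB")
```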

This demonstration tests and evaluates the ability of HTC-focused applications to effectively utilize availability bursts of Exascale-class resources to produce scientifically valuable output, and it explores the first Exaflop-scale, 32-bit floating-point science application. The application is the IceCube simulation of photon propagation through the ice of its South Pole detector. IceCube is the pre-eminent experiment for the detection of cosmic neutrinos, and thus an essential part of the MMA program listed among the NSF's 10 Big Ideas.

Efficiently harnessing short-notice, Exascale-class processing capacity at leadership-class High Performance Computing systems and commercial clouds significantly enhances the simulation capacity of the IceCube collaboration. Investigating these capabilities is important both to facilitate time-critical follow-up studies in MMA and to increase the overall annual simulation capacity by exploiting opportunities for short bursts.

The demonstration is powered primarily by Amazon Web Services (AWS) and takes place in the Fall of 2019 during the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19) in Denver, Colorado. It is a collaboration between the IceCube Maintenance & Operations program and a diverse set of cyberinfrastructure projects, including the Pacific Research Platform, the Open Science Grid, and HTCondor. Through further collaboration with Internet2 and AWS, the project also explores, more generally, large high-bandwidth data flows into and out of AWS. The outcomes of this project will thus have broad applicability across a wide range of domain sciences and scales, ranging from small colleges to national- and international-scale facilities.

This project is supported by the Office of Advanced Cyberinfrastructure in the Directorate for Computer & Information Science & Engineering and the Division of Physics in the Directorate for Mathematical and Physical Sciences.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Advanced CyberInfrastructure (ACI)
Type: Standard Grant (Standard)
Application #: 1941481
Program Officer: Kevin Thompson
Budget Start: 2019-10-01
Budget End: 2021-09-30
Fiscal Year: 2019
Total Cost: $295,000
Name: University of California San Diego
City: La Jolla
State: CA
Country: United States
Zip Code: 92093