This award is for the acquisition, deployment, and operation of a high-performance computational system for use by the broad science and engineering research and education community. The system, to be known as the Sun Constellation Cluster, will be deployed at the Texas Advanced Computing Center, located at the University of Texas at Austin. The project represents a collaboration between the University of Texas at Austin, Sun Microsystems, Advanced Micro Devices, the Cornell Theory Center at Cornell University, and the Fulton High Performance Computing Institute at Arizona State University.

The Sun Constellation Cluster will greatly increase the combined capacity of the current NSF-funded, shared-use, high-performance computing facilities and provide a capability an order of magnitude larger than the largest supercomputer NSF currently supports. It will therefore advance research and education across a broad range of topical areas in science and engineering that rely on high-performance computing. With this new resource, researchers will study the properties of minerals at the extreme temperatures and pressures that occur deep within the Earth. They will use it to simulate the development of structure in the early Universe. They will probe the structure of novel phases of matter such as the quark-gluon plasma. Such computing capabilities also enable life-cycle models that capture interdependencies across diverse disciplines and multiple scales, supporting globally competitive manufacturing enterprise systems. The system will permit researchers to examine the way proteins fold and vibrate after they are synthesized inside an organism. Sophisticated numerical simulations will permit scientists and engineers to perform a wide range of in silico experiments that would otherwise be too difficult, too expensive, or impossible to perform in the laboratory.

High-performance computing of the sort that will be possible with the new system is also essential to the success of research and education conducted with sophisticated experimental tools. For example, without the waveforms produced by numerical simulations of black hole collisions and other astrophysical events, gravitational wave signals cannot be extracted from the data produced by the Laser Interferometer Gravitational-Wave Observatory; high-resolution seismic inversions of the higher-density broadband seismic observations furnished by the EarthScope project are necessary to determine shallow and deep Earth structure; simultaneous integrated computational and experimental testing is conducted on the Network for Earthquake Engineering Simulation to improve the seismic design of buildings and bridges; and advanced computing capabilities will be essential to extracting the signatures of the Higgs boson and supersymmetric particles, two of the scientific drivers of the Large Hadron Collider, from the petabytes of data produced in trillions of particle collisions.

This project presents an exciting opportunity to advance the type of research described above by: (i) greatly extending the capacity of high-performance computational resources available to the science and engineering communities, and (ii) extending the range of advanced computations that can be handled by providing a system with very large amounts of memory and processing capability. The system will use an architecture similar to that found in many academic institutions, one to which many science and engineering applications have already been ported. In addition, the system represents an important stepping-stone toward the goal of petascale computing in science and engineering research and education by the end of the decade. It will provide a platform on which researchers can experiment with techniques for overcoming one of the hurdles on the path to petascale computing: scaling applications to very large numbers of processors. This computing system will also give many graduate students and post-docs the opportunity to gain experience with high-performance computing systems.

The Texas Advanced Computing Center and its partners will broaden the impact of the computing resource by: teaching in-person and online classes for undergraduate and graduate students in high-performance computing, visualization, data analysis, and grid computing for computational research in science and engineering; partnering with faculty and students at a number of Minority Serving Institutions to provide training in the use of high-performance computing resources; and collaborating with Girlstart, a program that supports and enhances the interest of girls in math, science, and technology.

Project Report

This project funded the groundbreaking Ranger supercomputer. In February 2008, Ranger debuted as the most powerful system in the world for open science, with 62,976 processor cores and a peak performance of 579 trillion floating-point operations per second. Ranger, the winner of the original NSF Track 2 competition, was deployed at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin; it gave the open science user community access to enormous computing resources and was a game-changer in both the scale and the price-performance of systems offered to the community. Throughout its entire lifespan, Ranger remained in extremely high demand.

By all objective measures, the Ranger project was a remarkable success, a success the NSF acknowledged in 2011 with a one-year extension that stretched the production life of Ranger from February 2012 to February 2013. By the end of its lifetime, Ranger had delivered more than two and a half million successful jobs to the user community, supporting more than a thousand peer-reviewed funded research projects across all disciplines of engineering and science. Ranger exceeded virtually every operational metric projected in the original proposal. Uptime significantly exceeded the 95% threshold set by the solicitation. Over the five-year operational life of the system, more than 2.1 billion service units (one service unit being an hour of time on a single processor core) were delivered to the national community. More than five thousand scientists and engineers used the system during its lifetime. Throughout the operational period demand remained high, with the community requesting more than four times as much time on Ranger as could be provided.

In addition to the system itself, TACC staff provided user support, training, system administration, and network administration to ensure continuous availability and proper utilization of these resources. TACC provided 24x7 on-site coverage; more than 40 staff members worked on the Ranger project and responded to more than 4,000 Ranger support tickets from the community. Training, education, and outreach activities reached thousands of participants, including roughly 500 participants a year in live training and 1,500 a year viewing online training. More than 150 students enrolled in semester-long academic courses in scientific computing taught by TACC staff in each year of the project. Approximately 10,000 additional people participated in various tours and outreach activities. The technology evaluation activity paid dividends throughout the project by improving the performance of user applications, and its results have since carried over to subsequent systems, including the follow-on system Stampede.
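As a rough consistency check on the figures above (a back-of-the-envelope sketch: the 2.3 GHz clock rate and 4 floating-point operations per cycle per core are the commonly published specifications of Ranger's quad-core AMD Opteron processors, assumed here rather than taken from this report):

\[
62{,}976\ \text{cores} \times 2.3\ \text{GHz} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle}} \approx 579\ \text{TFLOPS},
\]

which matches the stated peak performance, and

\[
62{,}976\ \text{cores} \times 8{,}760\ \tfrac{\text{hours}}{\text{year}} \times 5\ \text{years} \approx 2.76 \times 10^{9}\ \text{core-hours},
\]

so the 2.1 billion service units delivered correspond to roughly three quarters of the machine's theoretical wall-clock capacity, consistent with the reported uptime and demand.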
The individual scientific impacts of Ranger are too numerous to list, but we would particularly highlight its role in computational modeling for disaster response. Ranger was used to model the 2011 Japanese earthquake, to model the impacts of the BP oil spill in the Gulf of Mexico, to design vaccines in response to the swine flu threat, to improve tornado prediction models, and, extensively, in hurricane modeling for numerous storms and users, including NOAA. This last category led to the most high-profile impact of Ranger, when the system was cited in the U.S. Senate. Senator Kay Bailey Hutchison stated: "Of course, I must also note the critical testimony we will hear from Dr. Gordon Wells of the Center for Space Research at the University of Texas. Dr. Wells will testify about his experience using the 'Ranger' – the most powerful computer in the National Science Foundation's network of academic high performance computers – to synthesize satellite imagery, GPS tracking signals, and hurricane and storm surge models to orchestrate evacuations during Hurricane Ike. Dr. Wells' use of 'Ranger' helped save thousands of lives and we need to ensure that our scientists and emergency planners and responders have the best tools possible to help protect both life and property."

As the project drew to a close, the hardware purchased through the Ranger project found new life. The original compute racks have been distributed around Texas and around the world. Some of the system will continue its science mission, supporting work at UT's Applied Research Laboratories, at Texas A&M, and at Baylor College of Medicine. Some has been repurposed strictly for education on clusters at places like Texas State Technical College. Nearly half the system has been sent abroad to support the development of new computational scientists and HPC professionals in Africa: racks have been sent to South Africa, Tanzania, and Botswana, where they are being reconditioned and divided into mini-clusters that will serve more than a dozen universities that currently have no such resources available to students. We believe that Ranger was the most successful and highest-impact computing system funded by the National Science Foundation to date, and an exemplar of what large-scale funding in HPC can accomplish.

Agency: National Science Foundation (NSF)
Institute: Division of Advanced Cyberinfrastructure (ACI)
Type: Cooperative Agreement (Coop)
Application #: 0622780
Program Officer: Barry I. Schneider
Budget Start: 2006-10-01
Budget End: 2013-09-30
Fiscal Year: 2006
Total Cost: $64,733,304
Name: University of Texas at Austin
City: Austin
State: TX
Country: United States
Zip Code: 78712