Funding is provided for the new shared memory resource EMBER at UIUC. The new machine will be part of the TeraGrid/XD "family" of machines. This resource is essential to enabling many electronic structure calculations as well as computations in fluid dynamics; in all these cases, a large shared memory system is critical. In addition, the new system will allow larger-scale computational experiments in OpenMP and will provide an excellent entree for students wishing to learn parallel programming.

Project Report

Ember was a shared memory resource offering access to 16.4 TF of large shared memory SGI UltraViolet systems with a total of 1,536 processor cores and 8 TB of memory. The system had 170 TB of storage in a CxFS filesystem with 13.5 GB/s of I/O bandwidth. Ember was configured to run applications with moderate to high levels of parallelism (16-384 processors) that require the advantages of a shared memory environment. Ember provided a critical resource that continued to support research requiring a scalable shared memory architecture otherwise unavailable to the research community. As part of NSF's portfolio of high-end computing resources, coordinated first by the TeraGrid and subsequently by the Extreme Science and Engineering Discovery Environment (XSEDE), the system provided a broad and diverse community of researchers with a unique resource.

Ember was specifically targeted at applications, especially in computational chemistry and computational solid and fluid mechanics, that require a large-scale shared memory architecture, which is extremely rare among institutional and departmental resources and, even there, available only at small scale. Ember emerged as an essential resource for science and engineering research due to its unique combination of large shared memory nodes, balanced I/O performance, and availability of third-party application codes, particularly in computational chemistry and computational solid and fluid mechanics, that was not found anywhere else in the TeraGrid and subsequent XSEDE portfolio of resources. For computational chemistry, large shared memory nodes that can reliably run long-running ab initio calculations enable material property predictions that are more efficient than the equivalent distributed memory, semi-direct algorithms. Computational solid and fluid mechanics codes use efficient in-core solvers that exploit the large memory, enabling projects such as the characterization of human bone microstructures pertinent to fracture initiation and arrest. Ember also enabled projects such as the NSF-funded Extreme OpenMP effort, which investigated application scalability with the turbulence code Gen-IDLEST as well as scalable OpenMP implementations in experimental compilers.

Ember thus provided a computing resource to support the development and execution of a number of important applications that push the boundaries of computational science and engineering and are inherently dependent on a large shared memory hardware architecture. Throughout its operational period, Ember supported significant scientific advances in a broad range of areas including planetary astronomy (www.ncsa.illinois.edu/News/Stories/disks/), hydrogen fuel cell design (www.ncsa.illinois.edu/News/Stories/ammonia/), molecular structure (www.ncsa.illinois.edu/News/Stories/recoupled/), and semiconductor materials design (www.ncsa.illinois.edu/News/Stories/graphene/).
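To illustrate the shared-memory programming model that Ember was designed to serve, the sketch below shows a minimal OpenMP parallel reduction over a large array held entirely in memory. It is a generic example, not code from Gen-IDLEST or any Ember application; the array size and compile command are illustrative assumptions only.

```c
/* Minimal OpenMP sketch of shared-memory parallelism: all threads operate
 * on one large array resident in shared memory. Illustrative only; the
 * array size is an assumption, not a figure from any Ember workload.
 * Example build: gcc -fopenmp -O2 omp_reduction.c -o omp_reduction
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000L;            /* 10^8 doubles, roughly 0.8 GB */
    double *a = malloc((size_t)n * sizeof *a);
    if (!a) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    double sum = 0.0;
    #pragma omp parallel
    {
        /* Every thread fills its share of the same array; no explicit
         * data distribution or message passing is needed. */
        #pragma omp for
        for (long i = 0; i < n; i++)
            a[i] = (double)i;

        /* Worksharing loop with a reduction combines per-thread partial
         * sums into the shared variable `sum`. */
        #pragma omp for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += a[i];
    }

    printf("max OpenMP threads: %d, sum = %.0f\n",
           omp_get_max_threads(), sum);
    free(a);
    return 0;
}
```

On a large shared-memory node of the kind Ember offered, the thread count (for example via OMP_NUM_THREADS) can be raised far beyond what a typical two-socket server permits, which is the style of scaling experiment the Extreme OpenMP project pursued.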

Agency: National Science Foundation (NSF)
Institute: Division of Advanced CyberInfrastructure (ACI)
Application #: 1012087
Program Officer: Barry I. Schneider
Project Start:
Project End:
Budget Start: 2010-03-01
Budget End: 2012-02-29
Support Year:
Fiscal Year: 2010
Total Cost: $3,232,158
Indirect Cost:
Name: University of Illinois Urbana-Champaign
Department:
Type:
DUNS #:
City: Champaign
State: IL
Country: United States
Zip Code: 61820