9318163 Bielak

The Grand Challenge Application Groups competition provides one mechanism for the support of multidisciplinary teams of scientists and engineers to meet the goals of the High Performance Computing and Communications (HPCC) Initiative in Fiscal Year 1993. The ideal proposal provided not only the opportunity to achieve significant progress on (1) a fundamental problem in science or engineering whose solution could be advanced by applying high performance computing techniques and resources, or (2) enabling technologies which facilitate those advances, but also significant interactions between scientific and computational activities, usually involving mathematical, computer, or computational scientists, that would have an impact on high performance computational activities beyond the specific scientific or engineering problem area(s) or discipline being studied.

The main objective of the proposed research is to develop and demonstrate the capability for predicting, by computer simulation, the ground motion of large basins during strong earthquakes, and to use this capability to study the seismic response of the Greater Los Angeles Basin. The proposed research seeks to:

1. Develop three-dimensional models of large-scale, heterogeneous basins that take into account earthquake source, propagation path, and site conditions;
2. Develop nonlinear models for sedimentary basins that experience sufficiently strong ground motion;
3. Develop unstructured mesh methods and associated fast parallel solvers, enabling the study of much larger basins;
4. Develop software tools for the automatic mapping of the computations associated with large unstructured mesh problems onto parallel computers (a simple partitioning sketch follows this abstract);
5. Characterize the computation and communication requirements of unstructured mesh problems, and make a set of recommendations for the design of future parallel systems.

While the proposed work is motivated by an interest in gaining a better understanding of strong seismic motion in large basins, the algorithms and software tools developed will be applicable to a wide range of applications that require unstructured meshes. This award is being supported by the Advanced Research Projects Agency as well as NSF programs in engineering, atmospheric, and computer sciences.
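
Objective 4 above concerns assigning the work attached to a large unstructured mesh to the processors of a parallel machine. The following is a minimal sketch, not the project's actual software, of one classic heuristic for that task, recursive coordinate bisection; the node coordinates and processor count used here are hypothetical inputs.

# Minimal sketch of recursive coordinate bisection: split mesh nodes into
# balanced groups along the coordinate axis with the largest spread, and
# assign each group to a processor.
import numpy as np

def bisect(coords, node_ids, n_parts):
    """Recursively split nodes into n_parts balanced groups."""
    if n_parts == 1:
        return {0: list(node_ids)}
    # Split along the axis with the greatest extent to keep parts compact.
    axis = np.argmax(coords.max(axis=0) - coords.min(axis=0))
    order = np.argsort(coords[:, axis])
    half = len(order) * (n_parts // 2) // n_parts   # balanced split point
    left, right = order[:half], order[half:]
    parts = bisect(coords[left], node_ids[left], n_parts // 2)
    offset = n_parts // 2
    for p, nodes in bisect(coords[right], node_ids[right], n_parts - offset).items():
        parts[p + offset] = nodes
    return parts

# Example: 10,000 random 3-D mesh nodes mapped onto 8 processors.
rng = np.random.default_rng(0)
pts = rng.random((10_000, 3))
partition = bisect(pts, np.arange(len(pts)), 8)
print({p: len(nodes) for p, nodes in partition.items()})

Production mesh partitioners weigh communication volume as well as load balance, but the same idea (divide nodes recursively into equal-sized, geometrically compact groups) underlies many of them.
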
9318183 Davis

The Grand Challenge Application Groups competition provides one mechanism for the support of multidisciplinary teams of scientists and engineers to meet the goals of the High Performance Computing and Communications (HPCC) Initiative in Fiscal Year 1993. The ideal proposal provided not only the opportunity to achieve significant progress on (1) a fundamental problem in science or engineering whose solution could be advanced by applying high performance computing techniques and resources, or (2) enabling technologies which facilitate those advances, but also significant interactions between scientific and computational activities, usually involving mathematical, computer, or computational scientists, that would have an impact on high performance computational activities beyond the specific scientific or engineering problem area(s) or discipline being studied.

The investigators will study the application of high performance parallel computing to a class of scientifically important and computationally demanding problems in remote sensing of land cover dynamics, including the generation of improved fine spatial resolution data for the global carbon cycle, hydrological modeling, and global ecological responses to climate change and human activity.

The research is collaborative, including scientists from the University of Maryland, Indiana University, the University of New Hampshire, and NASA's Goddard Space Flight Center. The award will combine research on:

- new analysis procedures for remotely sensed data;
- the integration of multispectral, multiresolution, and multitemporal image data sets into a unified global data structure based on hierarchical data structures (i.e., quadtrees; a minimal example follows this abstract);
- the utilization of these hierarchical, parallel data structures for the representation of spatial data (maps and products developed from image analysis), and the development of a spatial database system with a sophisticated query language that scientists can use to control the application of biophysical models to global data sets;
- run-time support for constructing scalable and parallel solutions to problems involving the manipulation of irregular data structures such as quadtrees;
- parallel I/O, especially novel methods for mapping large arrays and quadtrees onto parallel disks and disk systems, and for accessing them using low-overhead bulk transfers.

The development work will be conducted on a 32-processor Connection Machine CM-5, installed at the University of Maryland, and on an IBM SP1 which the investigators propose to obtain as part of the program. This award is being supported by the Advanced Research Projects Agency as well as NSF programs in geological, biological, and computer sciences.
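
The unified global data structure mentioned above is built on region quadtrees. As a rough illustration only (the class and function names here are invented, not the project's), the sketch below recursively subdivides a square land-cover raster, collapsing uniform blocks into single leaves so that homogeneous regions are stored compactly.

# Minimal region quadtree over a 2^k x 2^k integer land-cover raster.
import numpy as np

class QuadNode:
    def __init__(self, value=None, children=None):
        self.value = value        # land-cover class if the block is uniform
        self.children = children  # (NW, NE, SW, SE) subtrees otherwise

def build_quadtree(raster):
    """Build a region quadtree from a square 2^k x 2^k integer raster."""
    n = raster.shape[0]
    if np.all(raster == raster[0, 0]):
        return QuadNode(value=int(raster[0, 0]))   # uniform block -> leaf
    h = n // 2
    children = (build_quadtree(raster[:h, :h]),    # NW quadrant
                build_quadtree(raster[:h, h:]),    # NE quadrant
                build_quadtree(raster[h:, :h]),    # SW quadrant
                build_quadtree(raster[h:, h:]))    # SE quadrant
    return QuadNode(children=children)

def count_leaves(node):
    if node.children is None:
        return 1
    return sum(count_leaves(c) for c in node.children)

# Example: a 256x256 raster with two land-cover classes split down the middle
# compresses to a handful of leaves instead of 65,536 cells.
raster = np.zeros((256, 256), dtype=int)
raster[:, 128:] = 1
tree = build_quadtree(raster)
print(count_leaves(tree))   # 4 leaves, one per uniform quadrant
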
9318145 Messina

The Grand Challenge Application Groups competition provides one mechanism for the support of multidisciplinary teams of scientists and engineers to meet the goals of the High Performance Computing and Communications (HPCC) Initiative in Fiscal Year 1993. The ideal proposal provided not only the opportunity to achieve significant progress on (1) a fundamental problem in science or engineering whose solution could be advanced by applying high performance computing techniques and resources, or (2) enabling technologies which facilitate those advances, but also significant interactions between scientific and computational activities, usually involving mathematical, computer, or computational scientists, that would have an impact on high performance computational activities beyond the specific scientific or engineering problem area(s) or discipline being studied.

This multidisciplinary project will investigate and develop strategies for efficient implementation of I/O-intensive applications in computational science and engineering. Scalable parallel I/O approaches will be pursued by a team of computer scientists and applications scientists who will work together to:

* Characterize the I/O behavior of specific application programs running on large massively parallel computers (a simple instrumentation sketch follows this abstract)
* Abstract and define I/O models (templates)
* Define application-level methodologies for efficient parallel I/O
* Implement and test application-level I/O tools on large-scale computers

The Pablo performance analysis environment will provide the foundation for the performance instrumentation and analysis. The application programs are already fully operational on advanced architecture systems, and their authors are all co-investigators in this project. The principal computers used will be the Intel Touchstone Delta and Paragon systems at Caltech, each with over 500 computational nodes. Five application areas will be included: fluid dynamics, chemistry, astronomy, neuroscience, and modelling of materials-processing plasmas.

The parallel programs for these applications cover a range of I/O patterns and volumes, and the techniques developed in this project will be relevant to a broad spectrum of engineering and science applications. In addition, by overcoming their current I/O limitations, the specific applications targeted in this award will achieve significant new science and engineering results. By the end of the project, sustained teraFLOPS computers will become available. The project will devise and implement general methods for scalable I/O using today's advanced computers, immediately apply those methods to carry out unprecedented applications in several fields, and use the
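
As a rough illustration of the first bullet above, characterizing application-level I/O behavior, the sketch below wraps a file object and records the size and duration of every read and write before summarizing the access pattern. It is a hypothetical stand-in written for this description, not the Pablo environment's actual interface.

# Minimal I/O tracing wrapper: log (operation, bytes, seconds) per request.
import time

class TracedFile:
    def __init__(self, path, mode="rb"):
        self._f = open(path, mode)
        self.events = []                     # (operation, bytes, seconds)

    def read(self, size=-1):
        t0 = time.perf_counter()
        data = self._f.read(size)
        self.events.append(("read", len(data), time.perf_counter() - t0))
        return data

    def write(self, data):
        t0 = time.perf_counter()
        n = self._f.write(data)
        self.events.append(("write", n, time.perf_counter() - t0))
        return n

    def close(self):
        self._f.close()

    def summary(self):
        """Aggregate request count, bytes moved, and time per operation type."""
        stats = {}
        for op, nbytes, secs in self.events:
            count, total_bytes, total_secs = stats.get(op, (0, 0, 0.0))
            stats[op] = (count + 1, total_bytes + nbytes, total_secs + secs)
        return stats

# Example usage with a hypothetical data file:
# f = TracedFile("velocity_field.dat", "rb")
# while f.read(1 << 20):      # read the file in 1 MB requests
#     pass
# f.close()
# print(f.summary())          # {'read': (request_count, total_bytes, total_seconds)}

Request-size and timing histograms gathered this way are the raw material for the I/O templates the project proposes to abstract from real applications.
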

Agency: National Science Foundation (NSF)
Institute: Division of Physics (PHY)
Application #: 9318152
Program Officer: Barry I. Schneider
Project Start:
Project End:
Budget Start: 1993-09-15
Budget End: 1999-08-31
Support Year:
Fiscal Year: 1993
Total Cost: $3,780,000
Indirect Cost:
Name: University of Texas Austin
Department:
Type:
DUNS #:
City: Austin
State: TX
Country: United States
Zip Code: 78712