The U.S. Government invests in leadership supercomputing facilities through several agencies to advance scientific discovery on many fronts. This project is motivated by that national commitment to supercomputing research and by the increasing availability of many-core computing hardware, from workstations to supercomputers. Today, scientists and engineers have access to extreme-scale computing resources. However, many legacy codes do not take advantage of recent innovations in computing hardware, and there is a shortage of open-source simulation software that can effectively leverage the many-core computing paradigm. Computational fluid dynamics (CFD) solvers have advanced many fields, such as aerospace engineering and atmospheric science, yet many current open-source CFD models and numerical weather prediction models do not take full advantage of the superior compute performance of graphics processing units (GPUs). By creating an open-source community model that can execute on multi-GPU workstations and large GPU clusters, the project team expects to broaden the use of high-performance computing in fluid dynamics applications. The immediate target application is wind modeling over complex terrain, to support research and development in wind resource assessment, power forecasting, atmospheric research, and air pollution modeling. Through this project, the PIs will continue to transfer and expand knowledge bases in GPU computing, computational mathematics, and software engineering to new students. Skill sets that transcend traditional disciplines are highly prized by national laboratories, as there is a critical shortage of researchers who can conduct scientific investigations using supercomputers. Students and postdoctoral researchers involved in this project will contribute toward meeting this critical workforce need.

This project brings together engineers, applied mathematicians, and computer scientists. The entire suite of software elements will be designed for GPU clusters with an MPI-CUDA implementation that overlaps computation with communication, using a three-dimensional domain decomposition for enhanced scalability. The implementation will balance performance with maintainability, enabling further development and ownership by a broader community of academic researchers. The team will follow modern software engineering practices for concurrent applications. An adaptive mesh refinement strategy that can scale on GPU clusters will be developed, and a novel projection method based on radial basis functions will impose the divergence-free constraint on a hierarchy of adaptively refined grids. Software elements will be tested with unit testing and verification techniques for concurrent programs, and against data from benchmark numerical problems. The flow solver will include modules for an immersed boundary approach to handle arbitrarily complex terrain and for the dynamic large-eddy simulation technique. The software implementation and syntax will be intuitive, to encourage contributions from a larger community. The project team expects the proposed software to help reduce modeling errors through very high resolution simulations and to contribute toward a fundamental understanding of turbulent winds over complex terrain. The PIs will continue their teaching efforts in Parallel Scientific Computing, Computational Mathematics, and Software Engineering. Results will be disseminated through conference presentations and via a wiki site for the open-source project, and software elements will be released under the open-source GNU General Public License.
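To make the three-dimensional decomposition concrete, the following is a minimal, hypothetical sketch (not the project's actual code) of how a GPU cluster code might map MPI ranks onto a 3D brick of the global grid and identify the six face neighbors whose halo exchanges can be overlapped with interior computation. All names and the rank-ordering convention here are illustrative assumptions.

```python
# Hypothetical 3D domain decomposition sketch: each MPI rank owns one brick
# of the global grid and exchanges halo faces with up to six neighbors.
# Rank ordering (i-major, k-minor) is an assumption for illustration only.

def block_range(n, parts, idx):
    """Split n cells into `parts` near-equal blocks; return [lo, hi) for block idx."""
    base, rem = divmod(n, parts)
    lo = idx * base + min(idx, rem)
    hi = lo + base + (1 if idx < rem else 0)
    return lo, hi

def rank_to_coords(rank, dims):
    """Map a linear rank to (i, j, k) coordinates in a dims = (Px, Py, Pz) grid."""
    _, py, pz = dims
    return rank // (py * pz), (rank // pz) % py, rank % pz

def coords_to_rank(coords, dims):
    """Inverse of rank_to_coords."""
    i, j, k = coords
    _, py, pz = dims
    return (i * py + j) * pz + k

def face_neighbors(rank, dims):
    """Return the six face-neighbor ranks; None at non-periodic boundaries."""
    i, j, k = rank_to_coords(rank, dims)
    nbrs = {}
    for axis, name in enumerate(("x", "y", "z")):
        for delta, side in ((-1, "lo"), (1, "hi")):
            c = [i, j, k]
            c[axis] += delta
            in_grid = 0 <= c[axis] < dims[axis]
            nbrs[f"{name}_{side}"] = coords_to_rank(tuple(c), dims) if in_grid else None
    return nbrs
```

In an overlapped MPI-CUDA scheme, a rank would typically post non-blocking halo sends/receives to these face neighbors, launch interior-cell kernels on a CUDA stream while the messages are in flight, then process boundary cells once the exchanges complete.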

National Science Foundation (NSF)
Division of Advanced CyberInfrastructure (ACI)
Standard Grant
Program Officer: Rajiv Ramnath
Boise State University, United States