Deep learning has demonstrated remarkable, high-fidelity performance on computer vision and natural language processing tasks, revolutionizing manufacturing and everyday life. Recent applications of deep learning to scientific problems have likewise advanced discovery in computational chemistry, materials science, medicine, immunology, climate science, and other fields. Understanding the mathematical principles behind deep learning algorithms is crucial for validating and improving them, and will allow scientists and engineers to obtain more reliable predictions and perform better risk assessment. The goal of this research is to develop a systematic analysis of deep learning that serves as a theoretical foundation for the many scientific problems addressed with deep learning, and to propose, with theoretical guarantees, cutting-edge algorithms for the efficient solution of high-dimensional and highly nonlinear partial differential equations arising in various application domains. The proposed deep learning-based algorithms for high-dimensional and highly nonlinear problems are expected to greatly advance the state of the art in simulating the complex physical systems that arise in many fields of science and engineering.

The theoretical challenges of deep learning stem largely from the highly nonlinear nature of deep neural networks (DNNs). As function parametrizations built from compositions of nonlinear maps, DNNs require advanced mathematics to be fully understood, so there is a critical need for new mathematical tools tailored to them. The theoretical part of this project focuses on the approximation and generalization capacity of DNNs. The central questions are whether DNN approximation overcomes or merely lessens the curse of dimensionality, what the optimal approximation rates are for various function classes, and how to characterize the Rademacher complexity of DNNs trained with state-of-the-art empirical regularization methods, with the aim of obtaining optimal generalization error bounds. The computational part of the project concentrates on solving high-dimensional and highly oscillatory partial differential equations. The specific approach is to design hybrid algorithms that combine the advantages of deep learning solvers and traditional numerical techniques for more efficient computation and higher accuracy; the key idea is to treat deep learning solvers as preconditioners for traditional numerical algorithms. The algorithms designed in the project will be implemented in deep learning packages for numerical PDEs and made publicly available. Research outcomes will be disseminated through conferences, publications (journal papers and textbooks), and new courses on the mathematics of deep learning, reaching a broad audience, especially the next generation of computational scientists.
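For context, the generalization questions above concern bounds in which the Rademacher complexity of the network class directly controls the gap between training and test error. A standard bound of this type from statistical learning theory (not specific to this project) states that, for a class $\mathcal{G}$ of loss functions taking values in $[0,1]$ and $m$ i.i.d. samples $z_1,\dots,z_m$, with probability at least $1-\delta$, every $g\in\mathcal{G}$ satisfies

$$\mathbb{E}[g(z)] \;\le\; \frac{1}{m}\sum_{i=1}^{m} g(z_i) \;+\; 2\,\mathfrak{R}_m(\mathcal{G}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2m}},$$

where $\mathfrak{R}_m(\mathcal{G})$ is the Rademacher complexity of $\mathcal{G}$; sharper characterizations of $\mathfrak{R}_m$ for regularized DNN classes therefore translate directly into sharper generalization guarantees.

The sketch below illustrates, in the simplest possible setting, the hybrid "deep learning solver as preconditioner / warm start" idea: an approximate low-frequency solution, standing in for the output of a trained network, is handed to a classical conjugate-gradient iteration on a finite-difference discretization of a 1D Poisson problem. The problem setup, the sine-mode surrogate, and all names below are illustrative assumptions, not the project's actual algorithms.

```python
import numpy as np

def poisson_system(n):
    """Second-order finite differences for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A, x, h

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=10_000):
    """Textbook conjugate gradient; returns the final iterate and the iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

def surrogate_solution(x, b, h, m=8):
    """Stand-in for a trained network (assumption): the solution restricted to its first
    m discrete sine modes, i.e. only the smooth, low-frequency part of the answer."""
    k = np.arange(1, m + 1)
    modes = np.sqrt(2.0 * h) * np.sin(np.pi * np.outer(k, x))   # orthonormal eigenvectors of A
    lam = 4.0 * np.sin(k * np.pi * h / 2.0) ** 2 / h**2         # corresponding eigenvalues
    return (modes @ b / lam) @ modes

# Fine-grid problem with a smooth but non-trivial source term (illustrative choice).
A, x, h = poisson_system(n=511)
b = np.exp(x) * np.sin(3.0 * np.pi * x)

warm_start = surrogate_solution(x, b, h)

_, iters_cold = conjugate_gradient(A, b, x0=np.zeros_like(b))
_, iters_warm = conjugate_gradient(A, b, x0=warm_start)
print(f"CG iterations from a zero initial guess    : {iters_cold}")
print(f"CG iterations from the surrogate warm start: {iters_warm}")
```

When the surrogate captures the dominant smooth components of the solution, the warm-started iteration only has to remove the remaining high-frequency error, which a classical Krylov method handles efficiently; this division of labor is the motivation behind the hybrid solvers described above.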

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Mathematical Sciences (DMS)
Application #: 1945029
Program Officer: Yuliya Gorb
Project Start:
Project End:
Budget Start: 2020-07-01
Budget End: 2025-06-30
Support Year:
Fiscal Year: 2019
Total Cost: $139,398
Indirect Cost:
Name: Purdue University
Department:
Type:
DUNS #:
City: West Lafayette
State: IN
Country: United States
Zip Code: 47907