Recent advances in deep learning have led to many disruptive technologies: from automatic speech recognition systems, to automated supermarkets, to self-driving cars. However, the complex and large-scale nature of deep networks makes them hard to analyze and, therefore, they are mostly used as black boxes without formal guarantees on their performance. For example, deep networks provide self-reported confidence scores, but these scores are frequently inaccurate and poorly calibrated, and the networks are prone to large mistakes on rare cases. Moreover, the design of deep networks remains an art and is largely driven by empirical performance on a dataset. As deep learning systems are increasingly employed in our daily lives, it becomes critical to understand whether their predictions satisfy certain desired properties. The goal of this NSF-Simons Research Collaboration on the Mathematical and Scientific Foundations of Deep Learning is to develop a mathematical, statistical and computational framework that helps explain the success of current network architectures, understand their pitfalls, and guide the design of novel architectures with guaranteed confidence, robustness, interpretability, optimality, and transferability. This project will train a diverse STEM workforce with data science skills that are essential for the global competitiveness of the US economy by creating new undergraduate and graduate programs in the foundations of data science and by organizing a series of collaborative research events, including semester research programs and summer schools on the foundations of deep learning. The project will also broaden the participation of women and underrepresented minorities by involving undergraduate students in research on the foundations of data science.
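To make the notion of an uncalibrated confidence score concrete, the following is a minimal sketch (not part of the project; the data, bin count, and confidence levels are hypothetical choices for illustration) that estimates the expected calibration error of a classifier from its reported confidences and its actual hits and misses. A model that reports roughly 90% confidence while being correct only about 70% of the time is overconfident, and the estimate below makes that gap explicit.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare the average confidence
    to the empirical accuracy within each bin (a standard ECE estimate)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of samples
    return ece

# Toy example (hypothetical numbers): confidences around 0.9, accuracy around 0.7.
rng = np.random.default_rng(0)
conf = rng.uniform(0.85, 0.95, size=1000)
hits = rng.random(1000) < 0.70
print(f"estimated ECE = {expected_calibration_error(conf, hits):.3f}")
```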
Deep networks have led to dramatic improvements in the performance of pattern recognition systems. However, the mathematical reasons for this success remain elusive. For instance, it is not clear why deep networks generalize or transfer to new tasks, or why simple optimization strategies can reach a local or global minimum of the associated non-convex optimization problem. Moreover, there is no principled way of designing the architecture of the network so that it satisfies certain desired properties, such as expressivity, transferability, optimality and robustness. This project brings together a multidisciplinary team of mathematicians, statisticians, theoretical computer scientists, and electrical engineers to develop the mathematical and scientific foundations of deep learning. The project is divided into four main thrusts. The analysis thrust will use principles from approximation theory, information theory, statistical inference, and robust control to analyze properties of deep networks such as expressivity, interpretability, confidence, fairness and robustness. The learning thrust will use principles from dynamical systems, non-convex and stochastic optimization, statistical learning theory, adaptive control, and high-dimensional statistics to design and analyze learning algorithms with guaranteed convergence, optimality and generalization properties. The design thrust will use principles from algebra, geometry, topology, graph theory and optimization to design and learn network architectures that capture algebraic, geometric and graph structures in both the data and the task. The transferability thrust will use principles from multiscale analysis and modeling, reinforcement learning, and Markov decision processes to design and study data representations that are suitable for learning from and transferring to multiple tasks.
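As a concrete instance of the non-convex optimization problem mentioned above, the following self-contained sketch (the toy data, network width, step size, and iteration count are arbitrary choices for illustration, not part of the project) trains a one-hidden-layer network with plain full-batch gradient descent and prints the steadily decreasing fitting loss:

```python
import numpy as np

# Toy non-convex training problem: fit sin(3x) on [-1, 1] with a tiny
# one-hidden-layer tanh network using plain gradient descent.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = np.sin(3 * x)                                   # regression target

h = 32                                              # hidden width
W1 = rng.normal(0.0, 1.0, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 1.0 / np.sqrt(h), (h, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(4001):
    # forward pass
    a = np.tanh(x @ W1 + b1)
    pred = a @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # backward pass (gradients of the mean-squared error)
    g = 2 * (pred - y) / len(x)
    gW2 = a.T @ g;              gb2 = g.sum(0)
    gz = (g @ W2.T) * (1 - a ** 2)
    gW1 = x.T @ gz;             gb1 = gz.sum(0)

    # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    if step % 1000 == 0:
        print(f"step {step:4d}  loss {loss:.4f}")
```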
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.