Machine learning using deep neural networks has recently demonstrated broad empirical success. Despite this success, the optimization procedures that fit deep neural networks to data are still poorly understood. Beyond its role in fitting the model to data, optimization also strongly affects the model's ability to generalize from training examples to unseen data. This project will establish a working theory for why and when large artificial neural networks train and generalize well, and use this theory to develop new optimization methods. The utility of the new methods will be demonstrated in applications involving language, speech, biological sequences, and other sequence data. The project will involve the training of graduate and undergraduate students, and the project leaders will offer tutorials aimed at both the machine learning community and other researchers and engineers using machine learning tools.

In order to establish a theory of why and when non-convex optimization works well when training deep networks, both empirical top-down and analytic bottom-up approaches will be pursued. The top-down approach will involve phenomenological analysis of large-scale deep models used in practice, both when presented with real data and when presented with data specifically crafted to probe the behavior of the network. The bottom-up approach will involve precise analytic investigation of increasingly complex models, starting with linear models and non-convex matrix factorization, progressing through linear neural networks and models with a small number of hidden layers, and eventually reaching deeper and more complex networks. The resulting theory aims to be both explanatory and actionable, and will be used to derive new optimization methods and architectural modifications that aid optimization and generalization. A particularly important testbed is recurrent neural networks: powerful sequence models that maintain state as they process an input sequence. Recurrent neural networks are notoriously challenging to optimize, and the project aims to provide a stronger principled understanding of their training.
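
As a minimal illustration of the simplest models in the bottom-up progression, the sketch below (assuming only NumPy; the architecture, data, and step size are illustrative choices, not the project's actual models or methods) fits a two-layer linear network to a linear target by gradient descent. Even though the end-to-end map is linear, the loss is non-convex in the layer weights, which is why deep linear networks serve as a tractable starting point for analysis.

    import numpy as np

    # Illustrative sketch (not the project's models): fit a two-layer *linear*
    # network W2 @ W1 to a linear target M by gradient descent. The squared loss
    # is non-convex in (W1, W2) even though the end-to-end map W2 @ W1 is linear.
    rng = np.random.default_rng(0)
    d, h, n = 5, 8, 200                     # input dim, hidden width, sample count
    M = rng.standard_normal((d, d))         # ground-truth linear map
    X = rng.standard_normal((d, n))         # inputs
    Y = M @ X                               # noiseless targets

    W1 = 0.1 * rng.standard_normal((h, d))  # first-layer weights
    W2 = 0.1 * rng.standard_normal((d, h))  # second-layer weights
    lr = 1e-2                               # illustrative step size

    for step in range(2000):
        R = W2 @ W1 @ X - Y                 # residuals of the end-to-end map
        loss = 0.5 * np.mean(np.sum(R ** 2, axis=0))
        gW2 = (R @ (W1 @ X).T) / n          # d(loss)/dW2
        gW1 = (W2.T @ R @ X.T) / n          # d(loss)/dW1
        W2 -= lr * gW2
        W1 -= lr * gW1

    print("final loss:", loss)
    print("distance to target map:", np.linalg.norm(W2 @ W1 - M))

Analogous but far less tractable optimization questions arise for the recurrent networks the project targets.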

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2018-08-01
Budget End: 2021-07-31
Fiscal Year: 2017
Total Cost: $328,204
Institution: University of California Berkeley
City: Berkeley
State: CA
Country: United States
Zip Code: 94710