The investigator will study the application of low-storage, second-order optimization methods to estimation problems arising in neural network modeling. He proposes to make fundamental contributions to the numerical approaches available to researchers in this field and to apply his methods to widely used neural network models. Building in part on his previous research on neural networks, he proposes to apply, modify, and develop conjugate gradient methods and low-storage variants of Newton and quasi-Newton methods, and to test these methods on applications and standard test problems from the neural network literature. Preliminary numerical results are very encouraging and indicate that these techniques can significantly reduce the numerical burden of learning in large networks. He proposes to adapt the most promising methods (probably low-storage quasi-Newton) to solve estimation problems for biological networks and recurrent networks. He also proposes to investigate hybrid strategies that combine his approaches with techniques that generate the underlying network during the learning process, and to apply these hybrid strategies to recurrent and biological networks. Further, he proposes to continue his previous research on fault-tolerant networks by investigating further applications of that model and by developing effective numerical approaches for the resulting large-scale nonlinearly constrained optimization problems.
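To make the flavor of these methods concrete, the following is a minimal sketch, not the investigator's code, of training a toy feedforward network with a low-storage quasi-Newton method. It assumes a small tanh network on synthetic regression data and uses SciPy's L-BFGS-B routine as a stand-in for the limited-memory quasi-Newton methods the abstract describes; the network sizes, data, and names are all illustrative.

```python
# Sketch: low-storage quasi-Newton (L-BFGS) training of a small
# feedforward network. Illustrative only; not the proposal's method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy regression data: 1 input, 1 output.
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(np.pi * X)

n_in, n_hidden, n_out = 1, 10, 1

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = w[i:i + n_out]
    return W1, b1, W2, b2

def loss_and_grad(w):
    """Sum-of-squares training error and its gradient via backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)          # hidden activations
    yhat = h @ W2 + b2                # linear output layer
    err = yhat - y
    loss = 0.5 * np.sum(err ** 2)
    # Backpropagate the error to each parameter block.
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dz1 = (err @ W2.T) * (1.0 - h ** 2)   # tanh' = 1 - tanh^2
    gW1 = X.T @ dz1
    gb1 = dz1.sum(axis=0)
    grad = np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
    return loss, grad

n_params = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
w0 = rng.normal(scale=0.1, size=n_params)

# L-BFGS keeps only a handful of curvature pairs (here 10) instead of a
# full Hessian approximation, so storage grows linearly with the number
# of weights rather than quadratically.
result = minimize(loss_and_grad, w0, jac=True, method="L-BFGS-B",
                  options={"maxiter": 500, "maxcor": 10})
print(f"final training error: {result.fun:.4f}")
```

The maxcor option bounds the number of stored curvature pairs, which is the property that distinguishes low-storage quasi-Newton methods from full Newton or BFGS updates and makes them attractive for learning in large networks.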

Agency: National Science Foundation (NSF)
Institute: Division of Electrical, Communications and Cyber Systems (ECCS)
Application #: 9111548
Program Officer: Paul Werbos
Budget Start: 1991-07-15
Budget End: 1995-06-30
Fiscal Year: 1991
Total Cost: $200,534
Name: Johns Hopkins University
City: Baltimore
State: MD
Country: United States
Zip Code: 21218