This project will investigate the benefits of applying interior point techniques to learning algorithms for neural networks. The P.I. will attempt to show that, by using the tools of interior point methods, a neural network training algorithm such as backpropagation (BP) can be improved in both learning time and quality of solution. As an introductory study, this proposal will investigate the effects on learning in BP of the method of analytic centers (Huard 1967, Sonnevend 1985, Renegar 1989). A similar approach was developed and investigated by Trafalis (1989) and by Abhyankar, Morin, and Trafalis (1990) for multiobjective optimization problems. The proposed research would be conducted according to a three-part plan: (1) consider piecewise linear convex activation functions, then generalize the findings to more general activation functions (e.g., sigmoid functions); (2) design, implement, and computationally test the learning algorithms developed in phase one of the research; (3) test the developed learning laws on vision problems related to medical applications in cancer diagnosis.
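
For reference, a minimal sketch of the analytic-center construction cited above (this is the standard definition from the interior point literature; how the proposal couples it with BP training is not spelled out in this abstract): given a polyhedral feasible region with nonempty interior, the analytic center is the point that maximizes the sum of the logarithms of the constraint slacks.

% Analytic center of {x : a_i^T x <= b_i, i = 1, ..., m} (standard definition;
% its role inside the proposed BP learning laws is an assumption of this sketch).
\[
  x_{\mathrm{ac}} = \arg\max_{x}\; \sum_{i=1}^{m} \log\!\bigl(b_i - a_i^{\top} x\bigr),
  \qquad \text{subject to } a_i^{\top} x < b_i,\quad i = 1,\dots,m.
\]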

Agency: National Science Foundation (NSF)
Institute: Division of Electrical, Communications and Cyber Systems (ECCS)
Application #: 9212003
Program Officer: Paul Werbos
Budget Start: 1992-08-01
Budget End: 1996-07-31
Fiscal Year: 1992
Total Cost: $89,064
Name: University of Oklahoma
City: Norman
State: OK
Country: United States
Zip Code: 73019