This research project is directed toward exploring the possibility of implementing large, parallel, generic neural network architectures using both silicon devices and optical technology. The underlying principle of the new architectures is to take advantage of the fact that signal processing in silicon is an advanced and mature technology, and to incorporate optics where silicon falls short, namely in interconnectivity. These new architectures make possible the construction of fully integrated, alterable networks of 1000 neurons using existing technologies. The next important breakthrough will be to implement a complete learning structure with local memory on a single device. Large, fully parallel architectures that incorporate a simple learning algorithm are delineated, and their initial design is specified.
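The abstract does not specify the learning algorithm. Purely as an illustration, the following is a minimal sketch, assuming a simple Hebbian-style rule in which each synapse keeps its own weight as local memory and is updated using only the activity of its own pre- and post-synaptic neurons; that locality is what would allow every update to proceed in parallel in hardware. All names, sizes, and parameters (N_IN, N_OUT, eta) are illustrative assumptions, not part of the proposal.

import numpy as np

# Sketch of a locally updated, fully parallel layer: each "synapse"
# stores its own weight (local memory), and each weight update depends
# only on the pre- and post-synaptic activity at that synapse.

rng = np.random.default_rng(0)

N_IN, N_OUT = 1000, 1000                        # illustrative scale (~1000 neurons)
W = 0.01 * rng.standard_normal((N_OUT, N_IN))   # per-synapse local memory
eta = 1e-3                                      # assumed learning rate

def forward(x):
    """Neuron response: weighted sum followed by a saturating nonlinearity."""
    return np.tanh(W @ x)

def local_update(x, y):
    """Hebbian-style update with an Oja-type decay term to keep weights
    bounded; every entry of W is modified from purely local quantities."""
    global W
    W += eta * (np.outer(y, x) - (y[:, None] ** 2) * W)

x = rng.standard_normal(N_IN)   # example input pattern
y = forward(x)
local_update(x, y)

The decay term is one conventional way to stabilize a Hebbian rule; the actual rule contemplated by the project may differ.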

Agency: National Science Foundation (NSF)
Institute: Division of Engineering Education and Centers (EEC)
Application #: 8811586
Program Officer: name not available
Project Start:
Project End:
Budget Start: 1988-09-01
Budget End: 1992-02-29
Support Year:
Fiscal Year: 1988
Total Cost: $89,400
Indirect Cost:
Name: California Institute of Technology
Department:
Type:
DUNS #:
City: Pasadena
State: CA
Country: United States
Zip Code: 91125