This research project explores the possibilities of implementing large, parallel, generic neural network architectures using both silicon devices and optical technology. The underlying principle of the new architectures is to take advantage of the fact that signal processing in silicon is an advanced and mature technology, and to incorporate optics where silicon falls short, namely in interconnectivity. These new architectures make possible the construction of fully integrated, alterable networks with 1000 neurons using existing technologies. The next important breakthrough will be to implement a complete learning structure with local memory on a single device. Large, fully parallel architectures that incorporate a simple learning algorithm are delineated and their initial design specified.
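The text does not name the simple learning algorithm the architectures incorporate. As a purely illustrative assumption, the sketch below uses a Hebbian update, one of the simplest rules compatible with fully parallel hardware and local synaptic memory: each weight changes using only the activity of the two neurons it connects, so no global communication is required.

```python
import numpy as np

def hebbian_update(W, x, y, eta=0.01):
    """One local learning step: each synapse W[i, j] is adjusted
    using only its post-synaptic activity y[i] and pre-synaptic
    activity x[j], so all updates can run in parallel."""
    return W + eta * np.outer(y, x)

# Hypothetical toy network: 3 inputs, 2 outputs, weights start at zero.
W = np.zeros((2, 3))
x = np.array([1.0, 0.0, 1.0])   # pre-synaptic activity
y = np.array([1.0, 1.0])        # post-synaptic activity
W = hebbian_update(W, x, y)
print(W)  # only synapses with co-active endpoints strengthen
```

Because every weight update depends only on locally available signals, such a rule maps naturally onto a device where each synapse stores its own state, which is the kind of on-chip learning structure with local memory the project describes.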