This research studies a highly parallel distributed processing system based on neural networks, with and without simulated annealing. The network under investigation combines the hill-climbing capability of Cauchy machines with decentralized control and the fast convergence of Hopfield networks. Reformulating the energy cost function and choosing appropriate synaptic weights are crucial to reducing local minima. Many neural networks applied to NP-complete problems use gradient descent; convergence is fast, but the system is easily trapped in a local minimum. Networks that employ simulated annealing give better results but converge very slowly. The proposed Gaussian machine, composed of sigmoid neurons and stochastic synaptic links, combines the fast convergence of the Hopfield network with the hill-climbing property of the Boltzmann machine. Such a study will further our knowledge of mapping neural networks onto VLSI and lead to more efficient learning algorithms. Support is, therefore, highly recommended.
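The abstract does not spell out the Gaussian machine's update rule, so the sketch below only illustrates the combination it describes, under stated assumptions: deterministic Hopfield-style descent on an energy function, plus additive Gaussian noise whose temperature is annealed toward zero so that hill-climbing gradually gives way to fast convergence. The function names, the geometric cooling schedule, and all parameter values are illustrative choices, not taken from the proposal.

```python
# A minimal sketch of Gaussian machine dynamics, assuming discrete-time
# leaky integration and a geometric annealing schedule; every name and
# constant here is an illustrative assumption, not from the proposal.
import numpy as np

def sigmoid(u, gain=1.0):
    """Smooth neuron activation used in Hopfield-style networks."""
    return 1.0 / (1.0 + np.exp(-gain * u))

def energy(W, bias, v):
    """Hopfield energy; the dynamics below descend it on average."""
    return -0.5 * v @ W @ v - bias @ v

def gaussian_machine(W, bias, steps=2000, dt=0.1, tau=1.0,
                     T0=1.0, decay=0.999, seed=None):
    """Hopfield descent plus annealed Gaussian noise.

    While the temperature T is high, the noise lets the state climb out
    of shallow local minima (the Boltzmann/Cauchy machine role); as T
    decays, the update reduces to deterministic Hopfield dynamics and
    converges quickly.  W is assumed symmetric with a zero diagonal.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(scale=0.1, size=len(bias))  # internal neuron inputs
    T = T0
    for _ in range(steps):
        v = sigmoid(u)                          # outputs in (0, 1)
        noise = rng.normal(scale=np.sqrt(T), size=len(bias))
        # leaky integration of weighted inputs plus annealed noise
        u += dt * (-u / tau + W @ v + bias + noise)
        T *= decay                              # cool toward pure descent
    v = sigmoid(u)
    return v, energy(W, bias, v)
```

With `decay` close to 1 the schedule approximates the slow cooling that simulated annealing requires, while setting `T0 = 0` recovers a plain Hopfield network, the fast-but-greedy baseline the abstract contrasts against.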

Project Start:
Project End:
Budget Start: 1989-06-01
Budget End: 1991-11-30
Support Year:
Fiscal Year: 1989
Total Cost: $63,744
Indirect Cost:
Name: Case Western Reserve University
Department:
Type:
DUNS #:
City: Cleveland
State: OH
Country: United States
Zip Code: 44106