This research studies a highly parallel distributed processing system based on neural networks, with and without simulated annealing. The neural network under investigation combines the hill-climbing capability of Cauchy machines with decentralized control and the fast convergence of Hopfield networks. Reformulating the energy cost function and choosing appropriate network weights are crucial to reducing the number of local minima. Many neural networks applied to NP-complete problems use a gradient-descent algorithm; although convergence is fast, the system is easily trapped in a local minimum. Neural networks employing simulated annealing give better results but converge very slowly. The proposed Gaussian machine, composed of sigmoid neurons and stochastic synaptic links, combines the fast convergence of the Hopfield network with the hill-climbing property of the Boltzmann machine. Such a study will further our knowledge of mapping neural networks onto VLSI and lead to more efficient learning algorithms. Support is, therefore, highly recommended.
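The trade-off described above can be illustrated with a minimal sketch, not taken from the proposal itself: Hopfield-style threshold updates with zero-mean Gaussian noise added to each neuron's net input, where the noise level (temperature) is annealed toward zero. Early sweeps can hill-climb out of local minima; late sweeps reduce to fast deterministic descent. The weights, biases, and cooling schedule below are illustrative assumptions.

```python
import numpy as np

def energy(W, b, x):
    # Hopfield energy: E = -1/2 x^T W x - b^T x
    return -0.5 * x @ W @ x - b @ x

def gaussian_machine(W, b, T0=2.0, cooling=0.95, sweeps=200, seed=0):
    """Stochastic Hopfield-style updates with annealed Gaussian input noise.

    Each neuron's net input receives zero-mean Gaussian noise whose standard
    deviation T is gradually reduced, so early sweeps can escape local minima
    while late sweeps converge like a deterministic Hopfield network.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    x = rng.integers(0, 2, n).astype(float)  # random binary start state
    T = T0
    for _ in range(sweeps):
        for i in rng.permutation(n):         # asynchronous updates
            u = W[i] @ x + b[i] + rng.normal(0.0, T)
            x[i] = 1.0 if u > 0 else 0.0
        T *= cooling                         # annealing schedule
    return x

# Toy energy landscape: symmetric weights, zero diagonal (Hopfield requirement)
W = np.array([[0., -2., 1.],
              [-2., 0., 1.],
              [1., 1., 0.]])
b = np.array([0.5, 0.5, 0.5])
x = gaussian_machine(W, b)
```

In this toy landscape the only stable states under the noiseless dynamics are the two global minima, so the annealed run settles at energy -2.0 regardless of the random start.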