Neural networks have gained considerable publicity in the scientific and popular press as a new method of computation that is superior to standard techniques. Early successes on small problem instances have generated optimistic conjectures that neural network solutions to computationally difficult tasks will scale to real-world problems. This project develops and applies techniques from computational complexity theory in order to bring more rigor to the study of how neural networks scale to large problems, and to quantify both their computational abilities and their capacity to learn from experience. One objective is the formal analysis of neural networks whose performance has hitherto been inferred only from experimental evidence on small or randomly generated problems. Specific subjects to be investigated include: (1) upper and lower bounds on the weights of linear threshold functions, (2) the complexity of popular Hopfield network algorithms, (3) the complexity of the loading problem for analog neural networks with a fixed number of nodes, and (4) mistake bounds for variants of the perceptron learning algorithm.
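To make subject (2) concrete, the following is a minimal sketch, in Python, of the standard asynchronous dynamics of a binary Hopfield network; the function name, parameters, and stopping rule are illustrative assumptions and not part of the project description. When the weight matrix is symmetric with zero diagonal, each accepted flip never raises the energy E(x) = -(1/2) x^T W x, so the dynamics reach a fixed point; the complexity question is how many update steps that convergence can take.

import numpy as np

def hopfield_run(W, x, max_sweeps=100):
    """Asynchronous updates of states in {-1, +1} until a fixed point.

    W: symmetric weight matrix with zero diagonal (illustrative setup).
    x: initial state vector with entries in {-1, +1}.
    """
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):
            new_state = 1 if W[i] @ x >= 0 else -1  # threshold the net input
            if new_state != x[i]:
                x[i] = new_state                    # accepted flip never raises energy
                changed = True
        if not changed:        # no unit wants to flip: a local energy minimum
            return x
    return x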
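Similarly, subject (4) refers to bounds on the number of prediction mistakes a perceptron-style learner can make. Below is a minimal sketch of the classical perceptron algorithm with an explicit mistake counter; the function name and parameters are illustrative assumptions. For data that is linearly separable with margin γ and radius R = max_i ||x_i||, Novikoff's classical theorem bounds the number of mistakes by (R/γ)²; mistake bounds of this kind are what the project analyzes for perceptron variants.

import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classical perceptron with a mistake counter (illustrative sketch).

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    Returns the learned weight vector and the total number of mistakes.
    """
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(max_epochs):
        clean_pass = True
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i) <= 0:  # a mistake: wrong side of (or on) the boundary
                w += y_i * x_i        # standard additive update
                mistakes += 1
                clean_pass = False
        if clean_pass:                # all examples correct: a separator was found
            break
    return w, mistakes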