This project is concerned with developing a theoretical foundation for learning in neural networks and artificial intelligence, drawing on statistical decision theory and the analysis of algorithms. The proposed work includes several aspects. First, further extensions and applications of the statistical techniques of Vapnik and Chervonenkis are pursued. Second, a coherent Bayesian methodology for learning is developed that encompasses the Vapnik-Chervonenkis theory and recent statistical physics approaches to learning. Third, weighted majority and other on-line learning strategies are developed, including extensions to nonstationary learning environments. Finally, unsupervised learning and feature discovery techniques are analyzed and applied.
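To illustrate the on-line learning strategies mentioned above, the following is a minimal sketch of the classic weighted majority algorithm of Littlestone and Warmuth: a pool of experts makes binary predictions each round, the learner predicts by weighted vote, and the weights of erring experts are multiplied by a penalty factor. The function name, input layout, and the penalty factor `beta` are illustrative assumptions, not part of the proposal itself.

```python
def weighted_majority(expert_preds, labels, beta=0.5):
    """Run the weighted majority algorithm over binary {0, 1} predictions.

    expert_preds: list of rounds; each round is a list with one prediction
                  per expert (illustrative input format).
    labels:       the true label for each round.
    Returns (number of mistakes made by the learner, final expert weights).
    """
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, y in zip(expert_preds, labels):
        # Predict by weighted vote: total weight on 1 vs. total weight on 0.
        w1 = sum(w for w, p in zip(weights, preds) if p == 1)
        w0 = sum(weights) - w1
        guess = 1 if w1 >= w0 else 0
        if guess != y:
            mistakes += 1
        # Multiplicatively penalize every expert that erred this round.
        weights = [w * beta if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes, weights
```

The multiplicative update gives the algorithm its key guarantee: the learner's mistake count is bounded by a constant times the mistakes of the best single expert, plus a logarithmic term in the number of experts.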