This project is concerned with learning algorithms and architectures for artificial neural networks. The overall goal is to improve the learning speed, scalability, generalization power, robustness, and ease of use of these learning algorithms, and to extend them to cover new kinds of learning tasks. This work builds upon the PI's earlier work in this area, which has produced the Quickprop, Cascade-Correlation, and Recurrent Cascade-Correlation algorithms. Cascade-Correlation builds its own network topology in the course of learning and is much faster than standard back-propagation. The current project aims to extend these algorithms to cover a number of new situations: "online" learning from a non-repeating stream of training examples, recognition of unclocked, time-continuous signals, and a new form of unsupervised learning.
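To illustrate the flavor of the Quickprop algorithm mentioned above: it accelerates gradient descent by fitting a parabola through the current and previous gradients for each weight and jumping toward that parabola's minimum. The sketch below is a minimal single-weight version under assumed default values; the parameter names (`lr`, `mu`) and safeguards are illustrative, not taken from the proposal itself.

```python
def quickprop_step(grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop-style update for a single weight.

    Fits a parabola through the current and previous gradient and
    jumps toward its minimum: step = grad / (prev_grad - grad) * prev_step.
    Falls back to plain gradient descent when no history is available.
    """
    if prev_step == 0.0:
        return -lr * grad  # no previous step: ordinary gradient descent
    denom = prev_grad - grad
    if denom == 0.0:
        return -lr * grad  # flat curvature estimate: avoid division by zero
    step = grad / denom * prev_step
    # Bound the step by a "maximum growth factor" mu to keep it stable.
    if abs(step) > mu * abs(prev_step):
        step = mu * abs(prev_step) * (1.0 if step > 0 else -1.0)
    return step

# Example: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w, g_prev, s_prev = 0.0, 0.0, 0.0
for _ in range(20):
    g = 2.0 * (w - 3.0)
    s = quickprop_step(g, g_prev, s_prev)
    w += s
    g_prev, s_prev = g, s
```

Because the loss here really is a parabola, the quadratic fit lands on the minimum within a few steps; on general networks the same rule is applied independently to every weight.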