This project is concerned with techniques for actively selecting training examples for neural network learning while simultaneously growing the network to fit the data. The approach uses a statistical sampling criterion, Integrated Mean Squared Error (IMSE), to derive a "greedy" selection criterion that picks the next training example maximizing the decrement in this measure. The selection criterion is applicable to a wide class of estimators. A practical realization of this scheme for multilayer neural networks is demonstrated.
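The greedy selection loop described above can be illustrated with a minimal sketch. This is not the paper's IMSE derivation for neural networks; as a stand-in, it uses a polynomial least-squares fit as the estimator and brute-force evaluation of the error decrement: at each step, every candidate is tentatively added, the model refit, and the candidate yielding the largest reduction in (grid-approximated) integrated squared error is kept. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def fit_poly(x, y, deg=3):
    # Least-squares polynomial fit, standing in for the trained network.
    return np.polyfit(x, y, deg)

def ise(coeffs, f, grid):
    # Integrated squared error against the target f, approximated on a dense grid.
    return np.mean((np.polyval(coeffs, grid) - f(grid)) ** 2)

def greedy_select(f, pool, n_select, grid, deg=3):
    # Seed with enough points to determine the fit, then greedily add the
    # candidate whose inclusion most decreases the integrated squared error.
    chosen = list(pool[:deg + 1])
    pool = list(pool[deg + 1:])
    for _ in range(n_select):
        x = np.array(chosen)
        best, best_err = None, np.inf
        for c in pool:
            xs = np.append(x, c)
            err = ise(fit_poly(xs, f(xs), deg), f, grid)
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
        pool.remove(best)
    return chosen
```

For example, selecting points for `f = np.sin` from a candidate pool on [-1, 1] spreads the chosen examples across the interval, since clustered points leave large error elsewhere. The real scheme avoids this brute-force refitting by computing the IMSE decrement analytically.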