This project would continue the development of a Bayesian approach to robot vision. Its goal is the recognition and position estimation of 3-D indoor and outdoor objects. The method will use as input images taken by two or more cameras in different positions, range data, or other sensed data. Textured-image analysis is also considered, both as a separate topic and as it relates to the preceding problems.

The approach has three components:

1. Modeling of objects to the necessary detail, based on a combination of geometric, algebraic, and probabilistic models.

2. Development, for each problem, of the joint likelihood of the measured data and the a priori unknown model parameters, followed by estimation or recognition using new extensions of classical statistical concepts applied to this joint probability function. This permits optimal estimation of a priori unknown quantities and optimal recognition of objects of interest (a toy sketch of such an estimation step follows this summary).

3. Development of new algorithms designed to run on parallel processors, motivated by the need to deal with the large number of parameters involved in these methods.

The significance of the work is:

1. It provides a unified framework for computer vision and for handling measured data, a priori model information, and uncertainty.

2. It provides a principled performance functional, so that robot vision can be treated as a maximization problem.

3. It permits the derivation of performance bounds (e.g., on the covariance matrices of parameter-estimation errors), which help in understanding the performance limitations of the new system. More importantly, it facilitates the optimization of intermediate stages.

4. It provides a new set of interesting, potentially powerful algorithms and operators to be studied for parallel processing in computer vision.
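
To make component 2 concrete, the following is a minimal sketch, not the project's actual formulation: it assumes a linear measurement model with Gaussian noise and a Gaussian prior on the unknown parameter vector (for example, an object position). Maximizing the joint density p(z, x) = p(z | x) p(x) then has a closed form, and the inverse of the resulting information matrix also yields the kind of error-covariance bound referred to under significance point 3. The function name map_estimate and all numerical values are illustrative only.

import numpy as np

def map_estimate(z, H, R, x0, P0):
    """MAP estimate of x and its error covariance for
    z = H x + v, v ~ N(0, R), with prior x ~ N(x0, P0)."""
    P0_inv = np.linalg.inv(P0)
    R_inv = np.linalg.inv(R)
    # Posterior information (precision) matrix: prior term plus data term.
    info = P0_inv + H.T @ R_inv @ H
    cov = np.linalg.inv(info)                      # bound on estimation-error covariance
    x_map = cov @ (P0_inv @ x0 + H.T @ R_inv @ z)  # maximizer of the joint density p(z, x)
    return x_map, cov

# Toy use: estimate a 2-D position from three noisy linear measurements.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -0.5])
H = rng.normal(size=(3, 2))
R = 0.05 * np.eye(3)
z = H @ x_true + rng.multivariate_normal(np.zeros(3), R)
x_map, cov = map_estimate(z, H, R, x0=np.zeros(2), P0=np.eye(2))
print("MAP estimate:", x_map)
print("error-covariance diagonal:", np.diag(cov))

For the nonlinear, high-dimensional object models the project actually targets, the same joint-probability objective would be maximized numerically rather than in closed form, which is what motivates the parallel algorithms of component 3.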