This project aims to establish a quantitative framework for integrating multi-sensory visual data in computer vision. Using the methods of variational calculus, information from multiple visual sources will be combined to reconstruct and recognize object surfaces. Spatial reasoning is central to robotic applications; it encompasses vision, task planning, navigation planning for mobile robots, symbolic reasoning, and the integration of such reasoning with geometric constraints. To the extent that spatial inferences must be drawn from information gathered by multiple sources, the methods developed in this project for integrating information sources are an important adjunct to spatial reasoning. Since the formulation of the problem favors concurrent computing, both computer vision and spatial reasoning stand to benefit from this research.
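As a rough illustration of the variational approach to integrating multiple visual sources, consider fusing several noisy 1-D depth profiles by minimizing a discrete energy with a weighted data-fidelity term per source and a smoothness regularizer. This is a minimal sketch under assumed details (the function name, weights, and regularization parameter are illustrative, not part of the proposal):

```python
import numpy as np

def fuse_sources(measurements, weights, lam=1.0):
    """Fuse noisy 1-D depth profiles by minimizing the discrete energy

        E(u) = sum_i w_i * ||u - d_i||^2 + lam * ||D u||^2,

    where D is a first-difference (smoothness) operator. Setting the
    gradient of E to zero gives the linear system

        (sum_i w_i * I + lam * D^T D) u = sum_i w_i * d_i.
    """
    d = np.asarray(measurements, dtype=float)  # shape (k, n): k sources
    w = np.asarray(weights, dtype=float)       # shape (k,): source weights
    k, n = d.shape
    # First-difference operator D, shape (n-1, n).
    D = np.diff(np.eye(n), axis=0)
    A = w.sum() * np.eye(n) + lam * D.T @ D
    b = (w[:, None] * d).sum(axis=0)
    return np.linalg.solve(A, b)

# Example: two noisy observations of a smooth surface profile.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
truth = np.sin(2 * np.pi * x)
obs = [truth + 0.2 * rng.standard_normal(50) for _ in range(2)]
fused = fuse_sources(obs, weights=[1.0, 1.0], lam=5.0)
```

The data term pulls the reconstruction toward each source in proportion to its weight, while the smoothness term encodes the geometric prior; because the resulting linear system is sparse and local, the same formulation lends itself to the concurrent (e.g., grid-parallel relaxation) computation mentioned above.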