Dynamic vision is uniquely positioned to enhance the quality of life for large segments of the population in a cost-effective way. Scene analysis capabilities can help prevent crime, allow elderly people to continue living independently, and monitor and coordinate responses to environmental threats to minimize their effects. However, a critical factor limiting widespread use of vision techniques is their potential fragility. This project aims precisely at removing this limitation.
The research team is developing a systematic approach to robust dynamic vision that addresses several key sub-problems (tracking, appearance modeling, structure from motion, and motion-based segmentation) in a common framework. Its conceptual backbone is a unified, operator-theoretic approach stressing the use of dynamic models to address robustness and computational complexity issues. Advantages of the proposed framework include the abilities to: (a) recast a wide range of problems into a convex optimization form amenable to real-time implementations; (b) furnish worst-case bounds and guaranteed performance certificates that help reduce the on-line computational burden when solving these problems; (c) exploit camera cooperation to optimize performance; and (d) take advantage of additional information available about the target to improve robustness and to falsify the current models when they are no longer valid, for instance due to data obsolescence.
Education is proactively integrated into this project by using computer vision to convey ideas on robustness and computational complexity in undergraduate and graduate courses. Results of this research effort, including video clips with demos, are regularly posted on the Robust Systems Lab website (http://robustsystems.ece.neu.edu).