This project develops efficient and effective algorithms to handle challenging problems in visual tracking such as drift, heavy occlusion, and failure recovery. The research team is developing an integrated framework in which object detection, tracking, and recognition are addressed simultaneously. Within this framework, prior knowledge is learned from a large set of images pertaining to object classes of interest. Such knowledge serves as long-term memory for the proposed appearance models, which are then adapted to previously unseen object instances. In addition, a top-down saliency model for each object class of interest is developed to handle heavy occlusion and to enable recovery from tracking failures. The project has four major components: developing algorithms for learning visual priors and transferring that knowledge to online appearance models; designing tracking algorithms that handle drift with the proposed appearance models; modeling top-down saliency maps to handle full occlusion and tracking failure; and evaluating state-of-the-art algorithms on a large benchmark dataset.
This project provides a building block for robust object tracking, which can be applied to motion analysis, surveillance, and multi-object tracking. The developed top-down saliency map provides a flexible object representation that can be extended to object detection and segmentation. The proposed tracking library and benchmark dataset provide a platform for evaluating advances in object tracking. This research is integrated with education and outreach through courses and activities aimed at attracting students to the field and encouraging interdisciplinary collaboration.