This project constructs an Omnipresent Vision system - a computational system that allows us to navigate, share, enhance, and understand the visual data captured by the many fixed and moving cameras around us. Society is saturated with cameras: almost every cell phone has a video camera, and wearable cameras are beginning to permeate our lives. These local cameras capture visual experiences from personal perspectives. At the same time, static cameras at various outdoor and indoor locations constantly capture video; these fixed-view cameras offer global, persistent views of our daily lives. The key idea of this project is to fully leverage the combination of these local and global cameras to enable new visual experiences and to facilitate understanding of the scene and the people within it. This is achieved with novel algorithms and computational tools that bring the local and global views together into an integrated platform, model the dynamic scene by joining the two sets of perspectives, and recognize the actions and events within them.
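As a purely illustrative sketch of the "integrated platform" idea (not the project's actual pipeline), one basic building block is registering a frame from a local, moving camera against a frame from a fixed, global camera so the two perspectives share one coordinate frame. The Python/OpenCV snippet below assumes hypothetical input files (wearable_frame.png, fixed_camera_frame.png), a planar or near-planar scene so a homography suffices, and illustrative parameter choices.

    # Hedged sketch: register a local (wearable) view into a fixed camera's frame
    # via feature matching and a RANSAC-estimated homography. File names and
    # parameters are assumptions for illustration only.
    import cv2
    import numpy as np

    local_frame = cv2.imread("wearable_frame.png", cv2.IMREAD_GRAYSCALE)      # local, personal view
    global_frame = cv2.imread("fixed_camera_frame.png", cv2.IMREAD_GRAYSCALE)  # global, fixed view

    # Detect and describe keypoints in both views.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_local, desc_local = orb.detectAndCompute(local_frame, None)
    kp_global, desc_global = orb.detectAndCompute(global_frame, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_local, desc_global), key=lambda m: m.distance)[:200]

    # Estimate the homography mapping local-view points to global-view points.
    src = np.float32([kp_local[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_global[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

    # Warp the local frame into the fixed camera's coordinate frame so the two
    # perspectives can be overlaid on one integrated canvas.
    h, w = global_frame.shape
    registered = cv2.warpPerspective(local_frame, H, (w, h))
    cv2.imwrite("local_registered_to_global.png", registered)

In practice a system like the one described would need full camera calibration, 3D scene modeling, and temporal alignment rather than a single planar warp; the sketch only conveys how a local and a global view can be brought into a common frame.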

At a personal level, the research enables the spatio-temporal and contextual expansion of a person's view; at the scene level, it enables interpretation of the scene at multiple spatial and temporal resolutions. It also provides new means of understanding people and scenes; for instance, it facilitates the understanding of people who cannot communicate their intentions. The research activities also furnish graduate and undergraduate students with educational opportunities to take part in shaping this new area of research.

Project Start:
Project End:
Budget Start: 2013-09-15
Budget End: 2016-08-31
Support Year:
Fiscal Year: 2013
Total Cost: $184,416
Indirect Cost:
Name: Drexel University
Department:
Type:
DUNS #:
City: Philadelphia
State: PA
Country: United States
Zip Code: 19102