Smart camera networks (SCNs) merge computer vision, distributed processing, and sensor network disciplines to solve problems in multi-camera applications by providing valuable information through distributed sensing and collaborative in-network processing. Collaboration in sensor networks is necessary not only to compensate for the processing, sensing, energy, and bandwidth limitations of each sensor node but also to improve the accuracy and robustness of the network. Collaborative processing in SCNs is more challenging than in conventional scalar sensor networks (SSNs) because of three unique features of cameras: a much higher data rate, directional sensing with a limited field of view (FOV), and the presence of visual occlusion. This project carries out integrated research to tackle these unique challenges, with collaboration as the key. Three aspects of collaborative processing are investigated: (1) coverage estimation in the presence of visual occlusions, which provides adequate redundancy in sensing coverage and enables collaboration, where the statistics of visual coverage blend the statistics of camera nodes and targets; (2) clustering, which schedules an efficient sleep-wakeup pattern among neighbor nodes selected by an image-comparison-based semantic neighbor selection algorithm, for more efficient collaboration; and (3) distributed optimization for in-network data processing, which concerns how to obtain robust and accurate integration results from multiple distributed sensors for challenging vision tasks such as target detection, localization, and tracking in crowds.

Project Report

Although vision is perhaps the most powerful of the human senses, conventional scalar sensor networks have not been able to exploit its potential because of the severely constrained resources available in the network. Smart camera networks (SCNs) merge computer vision, distributed processing, and sensor network disciplines to solve problems in multi-camera applications by providing valuable information through distributed sensing and collaborative in-network processing. Collaboration in sensor networks is necessary not only to compensate for the processing, sensing, energy, and bandwidth limitations of each sensor node but also to improve the accuracy and robustness of the network. Generally speaking, cameras, as a more complex sensing modality, possess three unique features that can hinder the practical deployment of SCN applications: a much higher data rate, directional sensing with a limited field of view (FOV), and the presence of visual occlusion. These features raise new challenges, including the bandwidth requirements of distributed computing, the design of lightweight algorithms for improved energy efficiency, and the need for fault tolerance and collaborative processing. This project conducts comprehensive studies of the capabilities and limitations of smart camera networks, and the work carried out has substantially advanced the development of SCNs. The project studies the essential issue of visual coverage. Because of the presence of visual occlusions, the statistics of visual coverage blend the statistics of camera nodes and targets, and are extremely difficult to derive. For the first time, we are able to derive a closed-form solution to visual coverage estimation (i.e., estimating the probability that an arbitrary target in the field is visually covered by at least K sensor nodes).
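The K-coverage quantity defined above can be illustrated with a simple Monte Carlo simulation. This is an illustrative sketch only, not the closed-form derivation reported by the project: the uniform random camera deployment, sector-shaped FOV, and disc-shaped occluders are all assumed models, and every function and parameter name below is hypothetical.

```python
import math
import random

def in_fov(cam, point, sensing_radius, half_angle):
    """True if point lies inside the camera's sector-shaped FOV."""
    dx, dy = point[0] - cam["x"], point[1] - cam["y"]
    if math.hypot(dx, dy) > sensing_radius:
        return False
    # signed angular difference between bearing to the point and camera heading
    diff = abs((math.atan2(dy, dx) - cam["theta"] + math.pi) % (2 * math.pi) - math.pi)
    return diff <= half_angle

def occluded(cam, point, obstacles, r_obs):
    """True if any disc-shaped obstacle blocks the camera-to-point line of sight."""
    ax, ay = cam["x"], cam["y"]
    bx, by = point
    abx, aby = bx - ax, by - ay
    for ox, oy in obstacles:
        # distance from obstacle centre to the segment (camera, point)
        t = ((ox - ax) * abx + (oy - ay) * aby) / (abx * abx + aby * aby + 1e-12)
        t = max(0.0, min(1.0, t))
        cx, cy = ax + t * abx, ay + t * aby
        if math.hypot(ox - cx, oy - cy) < r_obs:
            return True
    return False

def k_coverage_probability(k, n_cams=60, n_obs=30, trials=5000, side=100.0,
                           sensing_radius=30.0, half_angle=math.pi / 4,
                           r_obs=2.0, seed=1):
    """Estimate P(a random point is covered by >= k cameras) by sampling."""
    rng = random.Random(seed)
    cams = [{"x": rng.uniform(0, side), "y": rng.uniform(0, side),
             "theta": rng.uniform(-math.pi, math.pi)} for _ in range(n_cams)]
    obstacles = [(rng.uniform(0, side), rng.uniform(0, side))
                 for _ in range(n_obs)]
    hits = 0
    for _ in range(trials):
        p = (rng.uniform(0, side), rng.uniform(0, side))
        covering = sum(1 for c in cams
                       if in_fov(c, p, sensing_radius, half_angle)
                       and not occluded(c, p, obstacles, r_obs))
        if covering >= k:
            hits += 1
    return hits / trials
```

Sweeping `n_cams` upward until the estimate for a given K exceeds a target probability gives a crude empirical version of the minimum-density question the project answers analytically.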
With the estimated coverage statistics, we can provide a more accurate estimate of the minimum node density that suffices to ensure K-coverage across the field. In addition, we provide theoretical bounds for practically solving the K-coverage problem in a barrier coverage context through the deployment of a hybrid sensor network involving both static and mobile sensors. The project also studies distributed optimization for "in-network" data processing among a subset of sensors. We tackle the challenging problem of target detection and localization in crowds through a new target model that, in contrast to existing schemes, resolves the certainty of target nonexistence instead of the traditional uncertainty of target existence. This approach is lightweight, energy-efficient, and robust: not only does each camera node transmit a very limited amount of data, but only a limited number of camera nodes is used. Finally, a suite of auxiliary services has been developed to facilitate the deployment of applications in smart camera networks. For example, Uno is the first distributed storage system that explicitly addresses the challenges of storing users' privacy-sensitive data; LIPS represents the first work to exploit state-based models for link prediction in sensor networks; and EDAL is a highly energy-efficient data collection protocol for wireless sensor networks that generates routes connecting all source nodes with minimal total path cost, under the constraints of packet delay requirements and load balancing needs. In summary, the project has addressed the fundamental challenges of coverage estimation in SCNs and provides pragmatic guidance for the design of distributed algorithms for various vision tasks.
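The "certainty of target nonexistence" idea can be sketched as a simple fusion routine over a discretized field: each camera reports the grid cells it can certify as target-free, and fusion eliminates those cells so that only candidate target locations remain. This is a hypothetical, deliberately simplified illustration of the general principle, not the project's actual detection algorithm; the function name and grid model are assumptions.

```python
def fuse_nonexistence(grid_size, empty_reports):
    """Fuse per-camera nonexistence reports over a grid_size x grid_size field.

    empty_reports: a list of sets, one per camera, each containing (i, j)
    cells that the camera certifies as containing no target (e.g., cells it
    observes with an unoccluded view and no foreground detection).
    Returns the cells that may still contain a target.
    """
    candidates = {(i, j) for i in range(grid_size) for j in range(grid_size)}
    for report in empty_reports:
        candidates -= report  # certainty of nonexistence removes cells outright
    return candidates
```

Because each camera transmits only a compact set of certified-empty cells rather than raw imagery, and fusion is a simple set difference, this style of processing keeps per-node bandwidth low, consistent with the lightweight design goal described above.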

Agency
National Science Foundation (NSF)
Institute
Division of Computer and Network Systems (CNS)
Type
Standard Grant (Standard)
Application #
1017156
Program Officer
Thyagarajan Nandagopal
Project Start
Project End
Budget Start
2010-07-01
Budget End
2014-06-30
Support Year
Fiscal Year
2010
Total Cost
$411,000
Indirect Cost
Name
University of Tennessee Knoxville
Department
Type
DUNS #
City
Knoxville
State
TN
Country
United States
Zip Code
37916