The number of cameras in our lives and the scale of camera systems are continuously increasing as technological advances and falling prices create new opportunities and applications. In addition to personal uses, cameras are widely employed in military, public, and commercial applications for surveillance and statistics gathering. An estimated 30 million surveillance cameras in the U.S. capture 4 billion hours of footage a week. Beyond the traditional use of cameras for surveillance, projects such as Google Glass are driving the development of miniature, low-cost cameras with local processing and communication capabilities. For future camera systems, local intelligence and autonomous collaboration among components will provide the capability to solve more complex tasks. This requires a unifying perspective that simultaneously addresses the challenges of hardware/software co-design, real-time operation, high accuracy, and self-coordination and self-adaptation at run time.

This project provides a holistic and novel approach for the design, deployment, and self-coordination of a set of collaborative embedded smart cameras, with the goal of monitoring large areas with the highest accuracy and lowest latency. One objective is to design synthesis approaches and a computing infrastructure for the embedded smart cameras that allow hardware restructuring and systematic swapping of tasks between hardware and software on the fly. Another objective is to develop self-configuration approaches that autonomously adapt system behavior and deal optimally with run-time environmental changes, including node failures.

This research is expected to enable the development of new real-time, fully automated, collaborative, and highly accurate camera systems by providing a systematic approach for their design and deployment and by testing new methods at laboratory and campus scales. Potential applications include smart surveillance systems, multi-camera-based driver assistance systems, assistance in nursing homes, quality control on production lines based on 3D reconstruction, and remote surgery. The project also integrates the research with the undergraduate and graduate programs of two institutions and contributes to increasing the involvement of under-represented groups through the University of Arkansas Engineering Career Awareness Program, the Arkansas Louis Stokes Alliance for Minority Participation, the George Washington Carver Project, and the WiSE program at Syracuse University. Students from under-represented groups will be recruited and involved in the design, implementation, and deployment of collaborative multi-camera networks.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 1302559
Program Officer: M. Mimi McClure
Budget Start: 2013-10-01
Budget End: 2017-12-31
Fiscal Year: 2013
Total Cost: $340,766
Name: Syracuse University
City: Syracuse
State: NY
Country: United States
Zip Code: 13244