Traditionally, dynamic scenes are captured as video frames sampled on a regular space-time grid. For many computer vision tasks, however, this uniform sampling is either inefficient (e.g., low light, high-speed motion) or unnecessary (e.g., motion/change/event detection). This project explores non-uniform, adaptive sampling schemes that exploit the underlying structure of space-time volumes (e.g., sparsity, temporal coherence, statistical priors). These sampling schemes are implemented with novel programmable pixel-wise coded exposure and aperture in cameras. The captured information-rich coded projections of space-time volumes are used for video reconstruction or directly as features for motion/event detection. In addition to higher imaging efficiency and a higher signal-to-noise ratio in the reconstructed results, the method also benefits data security and privacy protection in video surveillance, because decoding the captured images requires knowledge of the coded patterns and dictionaries.
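
As a concrete illustration of the measurement model, a pixel-wise coded exposure image can be viewed as a per-pixel masked sum over the space-time volume. The sketch below is a minimal NumPy example with illustrative names and a random binary shutter pattern; it is not the project's actual sampling pattern or implementation.

    # Minimal sketch of pixel-wise coded exposure (assumed binary per-pixel shutter).
    import numpy as np

    def coded_exposure_image(volume, shutter):
        """volume: (T, H, W) space-time intensity volume.
        shutter: (T, H, W) binary per-pixel exposure pattern.
        Returns the (H, W) coded projection integrated over the exposure."""
        assert volume.shape == shutter.shape
        return (volume * shutter).sum(axis=0)

    # Illustrative usage with a random scene and a random 50%-duty pattern.
    rng = np.random.default_rng(0)
    volume = rng.random((36, 64, 64))                      # 36 sub-frames of a 64x64 scene
    shutter = (rng.random((36, 64, 64)) < 0.5).astype(float)
    coded = coded_exposure_image(volume, shutter)          # single coded image, shape (64, 64)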

This research has many applications in surveillance, machine vision inspection, and high-speed imaging. The developed technology is being tested in transportation imaging for traffic monitoring and accident detection. A database of high-speed videos of traffic scenes and events is being captured and will be released online when complete. Beyond video, the technical approach is also applicable to other high-dimensional signals such as light fields and light transport matrices.

Project Report

We redesigned and optimized the optical system of the pixel-wise coded exposure camera, with the goal of increasing light throughput and improving alignment between the LCoS and the camera sensor. This allows the camera to be used outdoors. We also fully characterized the imaging system. We performed a thorough analysis of the proposed sparse reconstruction algorithms in the following aspects. (1) We evaluated the relative importance of coded sampling and dictionary-based sparse representation; we found that both factors are important for sparse reconstruction, with coded sampling being relatively more important. (2) We compared our algorithm with other recent related work, including P2C2 and space-time interpolation, and found that our method performs best given a single input image. (3) We investigated dictionary learning and proposed a method to visualize dictionary items based on their spatial and temporal variation. Using this visualization as a guide, we developed algorithms that adaptively augment the learned dictionary for reconstruction (a simplified sketch of this dictionary-based recovery step is given below).

We also studied the representation and space of camera spectral sensitivities. Based on this representation, we proposed a novel method that recovers the camera spectral sensitivity and an unknown daylight illumination from a single image.

In addition to coded exposure in cameras, we applied the same methodology of discriminative patterns to coded illumination for material classification based on surface texture and spectral reflectance. The proposed algorithm achieves high efficiency and high SNR due to light multiplexing. We applied the method to automatic sorting of scrap materials for recycling and obtained promising results.

Finally, we evaluated the performance and usability of a Digital Micromirror Device (DMD) as a coded aperture in a single-pixel camera system. Suitable electronics to operate a DMD in such a system are being developed, and we investigated the scattering and diffraction effects of the device. This work has enabled the development of a camera that uses a single-pixel detector and a programmable optical mask, rather than a megapixel detector array and a single optic, to generate high-resolution images.
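
To illustrate the dictionary-based recovery step, the following sketch recovers a sparse code for a vectorized space-time patch from a coded measurement using orthogonal matching pursuit. The measurement matrix, dictionary, and sparsity level here are random, assumed placeholders for illustration only, not the project's learned dictionary or actual coded patterns.

    # Sketch of dictionary-based sparse recovery from a coded measurement
    # (illustrative only; Phi, D, and k are assumptions, not project code).
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: find a k-sparse x with A @ x approximating y."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
            if j not in support:
                support.append(j)
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
        x[support] = coeffs
        return x

    # y = Phi @ D @ alpha: Phi models the per-pixel coded exposure of a
    # vectorized space-time patch, D is an over-complete dictionary, and
    # alpha is the sparse code; the patch estimate is D @ alpha_hat.
    rng = np.random.default_rng(1)
    patch_dim, n_atoms, n_meas = 512, 1024, 64
    D = rng.standard_normal((patch_dim, n_atoms))
    Phi = (rng.random((n_meas, patch_dim)) < 0.5).astype(float)
    alpha_true = np.zeros(n_atoms)
    alpha_true[rng.choice(n_atoms, 5, replace=False)] = 1.0
    y = Phi @ (D @ alpha_true)
    alpha_hat = omp(Phi @ D, y, k=5)
    patch_hat = D @ alpha_hat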

Project Start:
Project End:
Budget Start: 2012-09-15
Budget End: 2014-08-31
Support Year:
Fiscal Year: 2012
Total Cost: $91,512
Indirect Cost:
Name: Rochester Institute of Tech
Department:
Type:
DUNS #:
City: Rochester
State: NY
Country: United States
Zip Code: 14623