This Small Business Innovation Research (SBIR) Phase I project investigates the feasibility of a compact, robust, low-cost optical imaging sensor system capable of acquiring three-dimensional (3D) information from a scene with precision and accuracy superior to current commercialized technologies. From a single shot, the sensor provides both an image and distance information, associating each object feature with its precise 3D location. With advances in sensing technology, 3D information is increasingly incorporated into real-world applications, from manufacturing to entertainment and security. However, extracting high-resolution 3D information without active illumination remains challenging, and passive operation is a critical requirement for compact, low-power applications. Existing passive-illumination solutions are essentially based on triangulation or defocus effects. Methods based on triangulation suffer from occlusion and correspondence problems, while the accuracy of depth estimation based on defocus effects is essentially limited by the depth of field of the imaging system. Novel light-field cameras extend the defocus approach to multiple apertures at the expense of an inherent loss in resolution. The proposed sensor system overcomes these limitations and fundamentally improves depth resolution while remaining fast, parallel, compact, lightweight, and scalable.
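For context on the triangulation limitation mentioned above, the standard stereo relation z = f·B/d (focal length f, baseline B, disparity d) implies that a fixed correspondence error produces a depth error that grows quadratically with distance. The sketch below is purely illustrative of this generic limitation and is not part of the proposed sensor; all parameter values are hypothetical.

```python
def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth (m) from the stereo relation z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at infinity or failed correspondence")
    return focal_px * baseline_m / disparity_px

def stereo_depth_error(z_m: float, disparity_err_px: float,
                       focal_px: float, baseline_m: float) -> float:
    """First-order depth uncertainty: |dz| = z**2 * dd / (f * B)."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Example (hypothetical rig): f = 1000 px, B = 0.1 m, 0.25 px matching error
# -> ~2.5 mm depth error at 1 m, but ~62 mm at 5 m.
```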

The broader impact/commercial potential of this project stems from the possibility of obtaining real-time 3D depth information with high depth resolution using a compact and robust sensor system. Imaging sensors are now widespread and inexpensive, as is computing power, already an integral part of most cameras. Demand for 3D imaging is expanding rapidly and represents a burgeoning sector of the $40 billion imaging market. The proposed system paradigm can be integrated with existing optoelectronic hardware that can be mass-produced at low cost. Such a cost-effective approach, in conjunction with the unique capabilities of the 3D computational imaging sensor, could enable disruptive applications in manufacturing, robotics, human-machine interfaces, and emerging 3D scanners, applications that are not always feasible with existing low-resolution or active-illumination methods. The company is building synergistic industrial partnerships to accelerate the transfer of discoveries into applications.

Project Report

The world we live in is 3D, yet most imaging devices capture only 2D information, discarding everything about the depth of the scene. Recent advances in sensing technology have led to 3D information being increasingly incorporated into real-world applications, from architecture to entertainment, manufacturing, and security. The extraction of 3D information has been studied for decades but remains challenging, particularly in unconstrained environments with variable lighting, specular and deforming scene surfaces, and occluded objects, among other difficulties. A major challenge to date has been delivering depth information with both high resolution and a large depth range.

Several existing architectures estimate depth by methods such as triangulation (capturing two images from different viewpoints), time of flight (measuring the time light takes to bounce back from an object), and light-field imaging (mapping the direction and position of light rays from each object point). To date these techniques, each with its own relative strengths, have required users to trade off the precision of the captured image against depth range. They are further constrained by hardware complexity, size, power consumption, or cost, which limits broad adoption.

This SBIR Phase I project successfully demonstrated the feasibility of a computational optical imaging sensor capable of acquiring three-dimensional (3D) information from a scene with high precision and accuracy. The system simultaneously provides a brightness map (a 2D image) and distance information (a depth map), so that each object feature within a scene is associated with its precise location in 3D space. The sensor system provides more than an order of magnitude improvement in depth resolution with respect to conventional imaging systems. It is built on a novel integrated computational optical imaging paradigm: 3D helical point spread function (PSF) engineering combined with matched reconstruction processing, in which the engineered PSF rotates with depth so that processing can decode each feature's 3D position (a simplified sketch of this angle-to-depth readout follows the report). The project developed unconventional optics, digital signal processing, and novel system analysis/synthesis techniques to achieve characteristics not obtainable from traditional imaging designs based on lenses, gratings, etc. In other words, by implementing a joint design, the system breaks the classical limits imposed by designing the optics, detector array, and processing separately.

The system is fast, compact, and scalable, and the data it provides is amenable to 3D rendering, refocusing at multiple depths, and other higher-level operations. The possibility of obtaining 3D depth information with a significant improvement in depth resolution, in parallel over a wide 3D field and in real time, opens up numerous possibilities for commercial application. Moreover, the sensor is amenable to mass production at low cost, enabling applications in areas such as robotics, 3D scanners, advanced manufacturing, and human-machine interfaces.
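To make the depth encoding concrete: with a double-helix PSF, each point source images to two lobes whose orientation rotates with defocus, so depth can be read out from the lobe angle. The sketch below is a simplified, hypothetical decoder assuming an approximately linear angle-to-depth calibration over the working range; the peak finder and the calibration parameters are illustrative placeholders, not the project's matched reconstruction processing.

```python
import numpy as np

def lobe_angle(patch: np.ndarray) -> float:
    """Estimate the orientation (radians) of the two PSF lobes in an image patch."""
    # Crude two-peak finder: the brightest pixel plus the next-brightest
    # pixel that is at least ~3 px away from it.
    order = np.argsort(patch, axis=None)[::-1]
    y0, x0 = np.unravel_index(order[0], patch.shape)
    for idx in order[1:]:
        y1, x1 = np.unravel_index(idx, patch.shape)
        if (y1 - y0) ** 2 + (x1 - x0) ** 2 > 9:
            # Note: for a symmetric two-lobe PSF the angle is defined modulo pi.
            return np.arctan2(y1 - y0, x1 - x0)
    raise ValueError("second lobe not found in patch")

def angle_to_depth(theta_rad: float, z_focus_m: float, slope_rad_per_m: float) -> float:
    """Map lobe angle to depth with an assumed linear calibration (placeholder values)."""
    return z_focus_m + theta_rad / slope_rad_per_m
```

In practice the lobe orientation would be estimated far more robustly, for example by matched filtering against the PSF calibrated at many depths, which is what enables the order-of-magnitude depth-resolution gain reported above.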

Agency
National Science Foundation (NSF)
Institute
Division of Industrial Innovation and Partnerships (IIP)
Type
Standard Grant (Standard)
Application #
1346142
Program Officer
Muralidharan S. Nair
Budget Start
2014-01-01
Budget End
2014-12-31
Fiscal Year
2013
Total Cost
$150,000
Name
Double Helix LLC
City
Boulder
State
CO
Country
United States
Zip Code
80302