How can you photograph objects beyond the line of sight? How can you recover the bidirectional reflectance of materials from a single viewpoint? These seemingly impossible tasks become possible by accounting for the finite speed of light and using a new type of computational photography called femto-photography. New advances in ultra-fast imaging provide tremendous opportunities for modeling, representing, and synthesizing light transport in computer graphics and computer vision. Research in computational photography and scene understanding will benefit from analyzing the transient response of a scene to extremely short pulses of active illumination. Traditional imaging records only the steady-state response, after global illumination has reached equilibrium. The investigators are developing a new theoretical framework for transient light transport and are addressing inverse problems using time-resolved imaging. They have recently produced the first physical demonstration of hidden-geometry recovery.
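The relation between the steady-state response and the transient response can be sketched numerically: a conventional photograph is simply the time-integral of the time-resolved response. The following is a minimal illustration with synthetic data; the array names, sizes, and 2-ps time binning are illustrative assumptions, not the project's actual code.

```python
import numpy as np

# Hypothetical transient data cube: transient[t, y, x] gives the scene's
# response at each time bin to a short illumination pulse (synthetic data).
rng = np.random.default_rng(0)
transient = rng.random((512, 64, 64))  # 512 time bins, 64x64 pixels

# A conventional photograph integrates the transient response over time,
# i.e. it records only the equilibrium (steady-state) radiance.
steady_state = transient.sum(axis=0)

# Time-resolved imaging keeps the full cube, so per-pixel timing
# information (e.g. the time bin of peak intensity) remains recoverable.
peak_time = transient.argmax(axis=0)

print(steady_state.shape, peak_time.shape)  # → (64, 64) (64, 64)
```

The point of the sketch is that the summation collapses the time axis irreversibly: everything the report's techniques exploit lives in the dimension that `sum(axis=0)` discards.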

The research aims to develop a new branch of computational imaging by building a mathematical framework for higher-dimensional light transport that exploits time-resolved imaging. It brings ultra-fast imaging into the realm of computer graphics/vision and computational photography. The finely sampled time dimension opens a range of research directions for modeling and measuring the geometry and photometry of scenes previously considered beyond the reach of traditional machine vision. The techniques for time-resolved imaging exploit multiplexing, sparsity-exploiting reconstructions, state-space formulations, system-identification methods, and parameterized reflectance models in novel ways. Overall, the research pushes the boundaries of light-transport-based methods by an extra (time) dimension and aims to show that forward and inverse problems in 5D light transport can inspire the next generation of imaging hardware and algorithms.
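As one concrete illustration of the "sparsity-exploiting reconstructions" named above, a standard approach is to model the measurements as y = A x with x sparse and solve a LASSO-type problem by iterative shrinkage-thresholding (ISTA). The sketch below uses a random measurement matrix and synthetic sparse signal; all names, sizes, and the regularization weight are assumptions for illustration, not the project's actual operator or solver.

```python
import numpy as np

# Synthetic sparse-recovery instance: y = A x, with x 8-sparse in R^200
# observed through only 80 linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, size=8, replace=False)] = rng.standard_normal(8)
y = A @ x_true

# ISTA: gradient step on 0.5*||Ax - y||^2, then soft-thresholding,
# which promotes sparsity in the estimate.
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant
lam = 0.1 * np.abs(A.T @ y).max()           # sparsity weight (assumed)
x = np.zeros(200)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))      # gradient descent step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink

print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))  # relative residual
```

The soft-thresholding step is what makes the reconstruction "sparsity-exploiting": it drives most coefficients exactly to zero, which is how far fewer measurements than unknowns can still determine the signal.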

Project Report

Exploring femto-photography

What does the world look like at the speed of light? Our computational photography technique, "femto-photography," allows us to visualize light in ultra-slow motion as it travels through and interacts with objects in table-top scenes. With this imaging technique we have captured and visualized the propagation of light with an effective exposure time of 2 picoseconds per frame, equivalent to half a trillion frames per second (Figure 1). We have re-purposed modern imaging hardware to record an ensemble average of repeatable events synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. Our research outcomes with regard to femto-photography are:

1. Exploiting the statistical similarity of periodic light transport events to record multiple ultrashort exposures of one-dimensional views, together with a novel hardware implementation that sweeps the exposures across a vertical field of view to build 3D space-time data volumes.

2. Techniques for comprehensible visualization, including movies showing the dynamics of real-world light transport phenomena (reflections, scattering, diffuse inter-reflections, and beam diffraction) and the notion of peak time, which partially overcomes the low-frequency appearance of integrated global light transport.

3. A time-unwarping technique that corrects distortions in the captured time-resolved information caused by the finite speed of light.

Our work has potential applications in artistic, educational, and scientific visualization; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements.

Applications in looking behind diffusers

Imaging through complex media is a well-known challenge: scattering distorts the signal and invalidates standard imaging equations.
For coherent imaging, the input field can be reconstructed using phase conjugation or knowledge of the complex transmission matrix. For incoherent light, however, wave-interference methods are limited to small viewing angles. Time-resolved methods, by contrast, do not rely on signal or object phase correlations, making them suitable for reconstructing wide-angle, larger-scale objects. We have generalized the technique (Naik et al., JOSA A 31, 957-963, 2014) to reconstruct the spatially varying reflectance of shapes hidden by angle-dependent diffuse layers. The technique is a noninvasive method for imaging three-dimensional objects without relying on coherence. For a given diffuser, ultrafast measurements are fed into a convex optimization program to reconstruct a wide-angle, three-dimensional reflectance function. This configuration presents significant challenges, including loss of light due to specular reflection off the diffuser and the non-negligible diffuser thickness, both of which affect the intensity profile of the recorded images. Solving this problem for arbitrary geometries is a first step (Figure 2) toward general time-resolved imaging in turbid media. Figure 2 shows our measurement setup and raw streak images for a synthetic scene behind a diffuser. Figure 3 shows reconstruction results: time-resolved measurements of scattered light are fed into a numerical inversion algorithm that recovers the wide-field, spatially varying reflectance of a three-dimensional scene through a scattering layer. The method does not rely on the memory effect or coherence, but instead uses computational optimization techniques suitable for large objects. It has potential uses in biological imaging and material characterization behind diffusive layers.

Time profile analysis

Time-resolved sensing has long been exploited in communications and in biological imaging.
However, only recently have researchers begun to apply the advanced signal processing and inversion techniques common in computer science and mathematics. At the same time, time-resolved imaging is new to computer vision and graphics researchers, who now have an additional parameter for existing applications. It is therefore crucial to merge the work of the two communities under a unified approach. To this end, we have analyzed time-resolved light fields and explored their information content and cross-dimensional information transfer. The research outcomes are:

1. Analysis of free-space light propagation in the frequency domain, considering spatial, temporal, and angular light variation. Propagation is modeled analytically in frequency space as a combination of a shear in the light field and a convolution along the angular frequencies.

2. Identification of the information-preserving properties of free-space propagation through the coupling of information among different dimensions. We derived upper bounds on how much of the information contained in one dimension is transferred to the dimensions that are measured.

3. Based on this analysis, a novel lensless camera. The approach exploits ultra-fast imaging combined with iterative reconstruction while removing the need for optical elements such as lenses or masks. The proposed frequency analysis gives upper bounds on the depth of field of the lensless camera.

4. Demonstration and evaluation of the proposed computational imaging approach on synthetic scenes and with an experimental prototype camera.
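The shear of the light field mentioned in outcome 1 has a simple ray interpretation: a ray at position x with angle u arrives at position x + z*u after traveling distance z, so the propagated light field is L_z(x, u) = L_0(x - z*u, u). The following is a minimal discrete sketch of that shear under assumed sampling; the grid sizes, beam profile, and nearest-neighbor resampling are illustrative choices, not the project's implementation.

```python
import numpy as np

# Discrete 2D light field L[x, u]: position x (spatial) by angle u (angular).
nx, nu = 128, 32
x = np.linspace(-1.0, 1.0, nx)           # spatial samples
u = np.linspace(-0.2, 0.2, nu)           # angular samples (small angles)
# A narrow beam: Gaussian in position, uniform over the angular range.
L0 = np.exp(-(x[:, None] ** 2) / 0.01) * np.ones((1, nu))

def propagate(L, z):
    """Free-space propagation as a shear: L_z(x, u) = L_0(x - z*u, u)."""
    out = np.zeros_like(L)
    dx = x[1] - x[0]
    for j, uj in enumerate(u):
        shift = x - z * uj               # source position for each output x
        idx = np.clip(np.round((shift - x[0]) / dx).astype(int), 0, nx - 1)
        out[:, j] = L[idx, j]            # nearest-neighbor resampling
    return out

Lz = propagate(L0, z=2.0)

# Angle-integrated spatial width (second moment): the beam spreads with z,
# the familiar consequence of the position-angle shear.
w0 = np.sqrt((L0.sum(axis=1) * x**2).sum() / L0.sum())
wz = np.sqrt((Lz.sum(axis=1) * x**2).sum() / Lz.sum())
print(w0, wz)  # width grows with propagation distance
```

Integrating over the angle dimension after the shear is exactly why a lensed or lensless sensor at distance z mixes spatial and angular information, which is the coupling the frequency-domain analysis above quantifies.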

Agency
National Science Foundation (NSF)
Institute
Division of Information and Intelligent Systems (IIS)
Type
Standard Grant (Standard)
Application #
1115680
Program Officer
Ephraim P. Glinert
Project Start
Project End
Budget Start
2011-08-01
Budget End
2014-07-31
Support Year
Fiscal Year
2011
Total Cost
$499,999
Indirect Cost
Name
Massachusetts Institute of Technology
Department
Type
DUNS #
City
Cambridge
State
MA
Country
United States
Zip Code
02139