PI: Srinivasa Narasimhan, Carnegie Mellon University
In recent years, computer vision has seen significant advances in the core areas of image sensing and interpretation. This success has created great demand for vision techniques in application domains ranging from intelligent transportation and security to oceanography (underwater imaging), astronomy (telescope and satellite imaging), and even biology and medicine (microscopic and medical imaging). Unfortunately, one fundamental hurdle can prevent vision from having a successful impact in these areas --- the assumption that light propagates through a transparent medium (pure air) without alteration. As a result, today's vision systems fail in the presence of light scattering by a wide range of particulate media, such as bad weather (fog, mist, haze, snow, rain), murky water, smoke, dust, smog and biological tissue.
This research is devoted to making computer vision successful in scattering media. In computer vision, image formation has traditionally been defined as "a geometric mapping from the 3D world to the 2D image," which inherently entails a loss of information. The PI strongly argues that light scattering must not be viewed as "noise" that a traditional vision algorithm needs to overcome, but rather as a new form of "encoding" of light and hence of the images themselves. The key idea, then, is to derive a series of compact, physically based analytic (or semi-analytic) models of light transport that represent image formation in scattering media. These models encode the "lost third dimension" back into images. The analytic forms of the models --- though not as elaborate as the slow simulations used in computational physics --- are accurate enough to capture the aggregate scattering effects in images and thus make it possible to invert light transport. The inverse light transport methods will then be applied in conjunction with traditional vision algorithms to match their performance in clear air.
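A compact analytic model of the kind described here can be as simple as the classical attenuation-plus-airlight formulation for haze and fog, I(x) = J(x) e^{-beta d(x)} + A (1 - e^{-beta d(x)}), where J is the clear-scene radiance, beta the scattering coefficient, d the per-pixel depth, and A the airlight. A minimal sketch of this forward model (function and variable names are illustrative, not the PI's implementation):

```python
import numpy as np

def hazy_image(J, depth, beta, airlight):
    """Attenuation-plus-airlight model of image formation in fog/haze.

    J        : clear-day scene radiance, shape (H, W)
    depth    : per-pixel scene depth d(x), shape (H, W)
    beta     : scattering coefficient of the medium
    airlight : radiance of ambient light scattered toward the camera
    """
    t = np.exp(-beta * depth)          # transmission e^{-beta * d(x)}
    return J * t + airlight * (1 - t)  # attenuated direct term + airlight term
```

Note how depth enters the image explicitly through the transmission term: this is the sense in which scattering "encodes the lost third dimension" back into the image, and why the model is invertible in principle.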
The results from this research will have broad and long-term impact across a wide variety of domains. (Semi-)automatic intelligent transportation systems that assist drivers in navigation will be able to operate in common bad weather conditions such as fog, snow and rain --- precisely when they are needed most. Similarly, field robots will navigate better in hazardous environments filled with smoke and dust. Underwater exploration, safety, and rescue tasks can be made possible in murky waters. Understanding the optical properties of tissues can assist doctors in the medical diagnosis of tumors and cancers. Finally, the derived models can also be used to add realistic scattering effects to imagery for digital entertainment (movies and video games), scientific education, and training.
Navigating in poor visibility conditions such as bad weather (fog, mist, haze, rain, snow, hail), dust, smoke and murky waters can be dangerous. Bad weather causes thousands of crashes on the roads every year. Smoke- and dust-filled environments hinder first responders and often cause health problems. Hence, it is imperative to develop autonomous or semi-autonomous systems to help us safely explore these environments. Unfortunately, traditional computer vision systems are not designed for these environments and hence perform poorly in many scenarios. A primary goal of this research is to investigate and establish models of how light propagates in such environments and to develop computational tools for improving visibility in imagery. In the past six years, the PI and his group have made strong strides toward this goal. The main outcomes include: (1) designing and controlling lighting so that captured images have good contrast even in poor-visibility environments; (2) algorithms for post-processing captured images to enhance scene contrast in bad weather; (3) a proof-of-concept design for a smart vehicular headlight that could help increase visibility for drivers at night during snow and rain storms; (4) a proof-of-concept design for flexible control of fog lamps to increase visibility in misty and foggy environments; (5) as a by-product, algorithms that also estimate the (relative) distances of objects in the scene; and (6) as a second by-product, methods that can be inverted to add bad-weather effects to images and videos (say, for training personnel or for entertainment) and to design unconventional 3D displays. Beyond scattering in a medium such as bad weather, light propagates through a scene in complex ways, including inter-reflections between scene points and diffusion beneath the surfaces of translucent materials like skin and marble.
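By-product (5) above follows directly from inverting an image formation model such as the attenuation-plus-airlight equation I = J t + A(1 - t): solving for the transmission t = e^{-beta d} yields per-pixel distances up to the unknown scale 1/beta. A hedged sketch under that model (names illustrative; in practice the clear radiance J is unknown and must itself be estimated, e.g., from multiple images taken under different weather conditions):

```python
import numpy as np

def relative_depth(I, J, airlight, eps=1e-6):
    """Invert the attenuation-plus-airlight model for relative depth.

    From I = J * t + airlight * (1 - t), the transmission is
    t = (airlight - I) / (airlight - J); then beta * d = -ln(t),
    so scene distances are recovered up to the scale factor 1/beta.
    """
    t = (airlight - I) / (airlight - J + eps)
    t = np.clip(t, eps, 1.0)   # guard against noise pushing t outside (0, 1]
    return -np.log(t)          # beta * d(x): relative depth map
```

Running the same model forward with synthetic weather parameters, as in by-product (6), adds fog or haze to a clear image rather than removing it.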
So, in addition to enhancing visibility in bad weather and murky waters, a secondary goal of the research is to understand how traditional vision methods are affected by global light transport and how to correct them. The main outcome here is a set of new algorithms for estimating the surface shape and depth of complex materials in scenes with many light transport mechanisms, using structured light patterns and defocus/focus cues. The PI and his group have received four best paper or honorable mention awards for the research conducted under this grant. The PI has advised and graduated five PhD students and two postdocs who were partly involved in the various sub-projects and are now in academic and industry research positions. The PI has developed new courses, both internally at Carnegie Mellon University and externally at multiple prestigious conferences. The PI organized a Symposium on Volumetric Scattering for Computer Vision and Graphics to bring together researchers from many different fields to share knowledge on light scattering problems. The PI also chaired the IEEE International Conference on Computational Photography, an emerging venue for publishing research on novel imaging and illumination technologies. Owing to the PI's dissemination efforts, the problem of navigation in bad weather has gained visibility and traction in academia and industry, spurring significant new research in the area.