Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons and capturing their time-of-arrival with picosecond precision. Due to their high sensitivity and time resolution, SPADs are driving an imaging revolution, enabling extreme applications that were hitherto considered impossible: imaging at a trillion frames per second, non-line-of-sight imaging, and microscopic imaging at nanosecond time scales. Despite these capabilities, SPADs are considered specialized devices used only in ultra-dark environments and restricted to a limited set of niche applications. This project develops technologies to expand the scope of SPADs as general-purpose cameras with a broad range of applications. The developed technologies will not only spur widespread adoption of SPADs in fields like life sciences, astronomy, and medicine, where operating with the smallest amount of light possible is critical to success, but also enable new, highly demanding applications. 3D cameras will be able to achieve substantially higher depth resolution than the current state-of-the-art at long distances, enabling vehicles (aerial, terrestrial, and underwater) to navigate autonomously in challenging weather conditions and on rugged terrains. The results from this research will be disseminated through scientific conferences and journals. Some materials will be integrated into a textbook on active 3D imaging techniques.

This research develops the mathematical and physical foundations for a new class of imaging and computational techniques that will transform SPADs into "all-purpose" cameras capable of recovering high-quality images and 3D scene information over the entire gamut of imaging conditions, from darkness to bright sunlight. This project develops (a) coded and asynchronous single-photon imaging, two novel families of active single-photon imaging techniques that minimize nonlinear distortions and can reliably operate in high-flux environments; (b) single-photon computational imaging techniques for capturing scene intensity under passive, uncontrolled lighting (e.g., sunlight); and (c) novel machine learning algorithms based on spiking neural networks for extracting high-level scene information from single-photon sensor data, enabling rapid, power-efficient scene understanding in dynamic environments on low-power devices.
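To illustrate the high-flux distortion the coded and asynchronous imaging techniques aim to overcome, the following is a toy simulation (not the project's code; all parameter values are illustrative assumptions). A conventional synchronous SPAD records only the first photon arriving in each laser cycle, so under bright ambient light the timestamp histogram "piles up" toward early time bins and the true depth peak is buried: a nonlinear distortion.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cycles = 100_000   # number of laser cycles (illustrative value)
n_bins = 100         # discrete time bins per laser period
true_bin = 60        # bin corresponding to the true scene depth

def spad_histogram(signal_prob, bg_flux):
    """Histogram of first-photon arrival bins over many laser cycles.

    signal_prob: probability a laser return photon arrives in a cycle.
    bg_flux: mean number of ambient (background) photons per cycle.
    """
    hist = np.zeros(n_bins, dtype=int)
    for _ in range(n_cycles):
        # Ambient photons: Poisson count, arrival times uniform over the period.
        arrivals = list(rng.integers(0, n_bins, size=rng.poisson(bg_flux)))
        if rng.random() < signal_prob:
            arrivals.append(true_bin)  # laser return from the scene
        if arrivals:
            hist[min(arrivals)] += 1   # only the earliest photon is recorded
    return hist

low = spad_histogram(signal_prob=0.05, bg_flux=0.05)   # dark environment
high = spad_histogram(signal_prob=0.05, bg_flux=10.0)  # bright ambient light

# In the dark the histogram mode sits at the true bin; under bright ambient
# light pileup shifts the mode toward the earliest bins.
print(np.argmax(low), np.argmax(high))
```

Under this model, simply increasing the laser power cannot fix the distortion, which is why the abstract's coded and asynchronous acquisition strategies target the acquisition process itself rather than post-hoc correction.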

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 1943149
Program Officer: Jie Yang
Budget Start: 2020-09-01
Budget End: 2025-08-31
Fiscal Year: 2019
Total Cost: $107,654
Name: University of Wisconsin Madison
City: Madison
State: WI
Country: United States
Zip Code: 53715