Over the past twenty years, multiscale methods like the discrete wavelet transform (DWT) have revolutionized signal processing; for example, wavelets lie at the core of the FBI fingerprint database and the JPEG 2000 image compression standard for digital photos. While wavelets are arguably ideal or near-ideal for analyzing and processing one-dimensional (1-D) signals (price fluctuations of interest rates, for example), a surprising realization of the past few years is their inability to deliver similar gains for 2-D, 3-D, and higher-dimensional signals containing "singularities" (edges and ridges in 2-D digital photos or 3-D videos, for example). The new, confounding aspect is "geometry": the singularities are typically localized along smooth, lower-dimensional manifolds. There is a great need for new theory and tools that exploit these geometric structures. Other than a few scattered promising results, however, such theory and tools do not exist today.

This project aims toward a unified theory and practical toolset for the analysis and processing of higher-dimensional piecewise smooth signals that feature geometric manifold structures. In particular, the research involves (1) new multiscale signal representations based on local, directional atoms having a clear geometric meaning with respect to the underlying manifolds; (2) new statistical models for these representations; (3) new multiscale processing algorithms that exploit the new representations and models; and (4) educational outreach to engage undergraduates, graduate students, and other researchers in this timely research area. Given recent successful preliminary results in 2-D and 3-D, progress in these directions has the potential to eventually yield orders-of-magnitude improvements in compression, approximation, modeling, and denoising performance in real higher-dimensional applications.

Project Report

MOTIVATION

Sensors, signal processing hardware, and algorithms are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. The overwhelming volume of data involved has led to a rethinking of the standard approach to sampling and signal processing based on the Nyquist sampling theorem. The traditional approach to digital signal processing is to sample first (at a rate of at least twice the signal bandwidth) and ask questions (process) later. But in an increasing number of important applications involving wideband signals and images, Nyquist-rate sampling results in a torrent of data, and so the first processing operation is to compress the samples to a more manageable rate or size by removing unnecessary redundancies and irrelevant information. Unfortunately, this strategy is wasteful in two important ways. First, high-rate samples are acquired at potentially great cost only to have most of the data discarded in the compression step (in many cases 80-90% or more). Second, even the most efficient compression algorithms involve many digital operations, which contributes to time latency and power consumption. In many other applications, Nyquist-rate sampling is impossible or prohibitively costly, resulting in aliased digital data. Wouldn't it be nice if we could sample more parsimoniously, acquiring only the relevant signal information, rather than sampling and then compressing? In particular, wouldn't it be nice if we could sample a wideband signal or image at its 'information rate' rather than at its Nyquist rate? Fortunately, the past several years have seen several advances in the theory of sampling and reconstruction that address these very questions.

In this project, we investigated compressive sensing, an emerging field based on the recognition that a small collection of nonadaptive (even random) linear projections of a compressible signal or image contains enough information for reconstruction and processing. Compressive sensing enables new data acquisition protocols that directly acquire just the salient information about the signal or image of interest. The implications are promising for many applications and enable the design of new kinds of analog-to-digital converters, tomographic imaging systems, cameras, and distributed processing and coding algorithms for sensor networks and content distribution. An intriguing aspect is the central role played by randomization. Compressive sensing fuses the theories of approximation, compression, inverse problems, and random matrices and is being developed by a multidisciplinary community of researchers from signal processing, applied mathematics, and computer science.

OUTCOMES AND FINDINGS

We developed a range of new theoretical results, algorithms, and practical implementations related to compressive sensing.

THEORY AND ALGORITHMS: We worked to understand what properties a compressive sensing system must possess in order to acquire data without losing information. We were able to relate these properties to a deep result from mathematics called the Johnson-Lindenstrauss Lemma. We also worked to understand what kinds of data (signals, images, videos, etc.) are capable of being compressively sensed.
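The core idea, that a few nonadaptive random linear measurements of a sparse signal suffice for recovery by a sparsity-seeking solver, can be illustrated in a few lines. The following is a minimal sketch rather than the project's code; the dimensions, the Gaussian measurement matrix, and the choice of orthogonal matching pursuit (a standard greedy recovery algorithm) are illustrative assumptions.

```python
# Illustrative sketch of compressive sensing: a length-n signal with k nonzeros is
# measured by m << n random projections, then recovered with orthogonal matching
# pursuit (OMP). All sizes here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                        # signal length, measurements, sparsity

# k-sparse test signal (stand-in data)
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# Nonadaptive random measurement matrix (Gaussian): y = Phi x
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

def omp(Phi, y, k):
    """Greedy recovery: pick the column most correlated with the residual,
    then re-fit by least squares on the columns chosen so far."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coeffs
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coeffs
    return x_hat

x_hat = omp(Phi, y, k)
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

With these settings the 256-sample signal is recovered from only 64 measurements, which is the sense in which the measurements capture the signal's "information rate" rather than its Nyquist rate.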
While most of the research community focused on so-called "sparse models" (such as audio signals that can be compressed by MP3 and images that can be compressed by JPEG), we were able to show that more general "structured sparse" data can also be sensed in this way. This significantly extended the scope of compressive sensing. Finally, we studied the video sensing problem, which is complicated by the fact that video data is ephemeral and we have only a small window of time in which to conduct any analysis. We developed a state-of-the-art framework for compressively sensing videos that can be compressed by a system like MPEG.

IMPLEMENTATIONS: To prove and validate the compressive sensing theory, we developed a new "single-pixel" camera that employs a digital micromirror array and a single light sensor and yet can acquire high-resolution images and videos. Since the camera requires only a single photodetector, it can be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers, such as infrared and ultraviolet wavelengths and beyond. The single-pixel camera was widely reported in the worldwide news media and received an MIT Technology Review magazine TR10 Top 10 emerging technology award in 2007. The technology has been licensed to the startup company InView Technologies to build low-cost, high-resolution infrared cameras.
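The single-pixel camera's acquisition follows the same recipe: each exposure configures the micromirror array with a pseudorandom binary pattern, and the lone photodetector records one inner product between that pattern and the scene. The sketch below simulates this measurement model only, under hypothetical sizes (a 64x64 scene and 1500 exposures); reconstruction would then apply a sparse-recovery solver such as the one sketched above, in a basis (wavelets, for example) where natural images are compressible.

```python
# Illustrative sketch of the single-pixel camera measurement model (not the device's
# actual firmware). Sizes and data are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(1)
h, w = 64, 64                                  # hypothetical image resolution
m = 1500                                       # number of exposures, m << h*w

scene = rng.random((h, w))                     # stand-in scene (real data comes from optics)

patterns = rng.integers(0, 2, size=(m, h, w))  # pseudorandom 0/1 micromirror patterns
measurements = np.empty(m)
for i in range(m):
    # one photodetector reading: total light reflected by the "on" mirrors
    measurements[i] = np.sum(patterns[i] * scene)

# (patterns, measurements) is all the camera stores; the h*w-pixel image is
# reconstructed offline from these m numbers by a sparse-recovery solver.
print(measurements.shape)
```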

Budget Start: 2004-10-01
Budget End: 2012-09-30
Fiscal Year: 2004
Total Cost: $1,135,280
Name: Rice University
City: Houston
State: TX
Country: United States
Zip Code: 77005