This EAGER aims to provide practical evidence of feasibility for a larger project on instance-optimal sampling. Instance-optimal sampling is a foundational framework for optimal representation of extreme-scale (scattered, unstructured, and structured) datasets. Using dense polytope packing algorithms, the instance-optimal sampling framework develops strategies for sampling a given dataset at the minimal sampling rate. The instance-optimal representation is derived from a multidimensional notion of Nyquist frequencies; this approach is therefore best complemented by compressive sampling (CS) methods, which exploit the sparsity of a dataset to reduce the sampling rate significantly below the Nyquist rate with no loss of information.

The main motivation for this research is that the synergy of compressive sampling and instance-optimal sampling could reduce an extreme-scale dataset to a size that is logarithmically proportional to the number of samples in that dataset and linearly proportional to its sparsity. The research addresses the computational efficiency of sparse reconstruction for volumetric and time-varying datasets, which can lay the basis for applying CS to computer graphics problems. The main challenge is the computational cost of the reconstruction algorithm for 3-D or time-varying data. This research examines the feasibility of adopting a tensor-product approach to compressive sampling.
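
For context (our addition, not text from the proposal), the scaling claim above matches the standard compressive-sensing measurement bound, and one common reading of a tensor-product approach is a Kronecker-structured sensing operator assembled from small per-dimension matrices. The symbols below (m, n, k, A_x, A_y, A_z) are our own notation for illustration:

```latex
% Standard CS measurement bound: n Nyquist-rate samples, sparsity k,
% and on the order of m compressive measurements suffice for recovery.
m = O\!\left(k \, \log \frac{n}{k}\right)

% A Kronecker (tensor-product) sensing operator for 3-D data, assembled
% from small per-dimension matrices A_x, A_y, A_z (illustrative notation):
A = A_x \otimes A_y \otimes A_z
```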

Project Report

This project was an exploratory effort to examine the possibility of utilizing the emerging compressive sensing framework in the scientific data visualization pipeline. Our investigations demonstrated that the prospects of using compressive sensing in the context of volumetric data are quite attractive. For example, we demonstrated that a typical brain aneurysm MRI dataset, which in the traditional sense requires acquisition of 256x256x256 (≈16 million) samples, can be represented nearly perfectly with only 3% of the samples (acquired in the CS manner). Reconstruction using standard CS techniques takes about 7-8 hours, and our GPU-enabled reconstruction brought that down to about 40-45 minutes. Further theoretical investigation is needed into more scalable reconstruction algorithms (beyond greedy and convex optimization methods) that scale well to today's large-scale volumetric data. Another promising theoretical line of research is the design of optimal representation bases that provide ideal sparse representations of volumetric data. Although these two lines of research pose significant challenges, their potential outcomes are truly significant and could enable visualization of large-scale data on common computing platforms. Moreover, such a technology could enable interactive visualization of regular datasets on low-end (e.g., mobile) platforms.
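
As a concrete illustration of the kind of reconstruction referred to above (our own sketch, not the project's GPU code), the following Python snippet recovers a small sparse 3-D volume from roughly 10% random measurements using iterative soft-thresholding (ISTA), a standard convex-optimization-style CS solver. The volume size, sparsity level, sensing matrix, step size, and regularization weight are all illustrative assumptions; a real MRI volume would be sparsified in a wavelet-type basis and handled at far larger scale.

```python
# Illustrative sketch only -- NOT the project's reconstruction code.
# Recovers a tiny sparse 3-D volume from ~10% random measurements
# using iterative soft-thresholding (ISTA), a standard CS solver.
import numpy as np

rng = np.random.default_rng(0)

# Tiny 16^3 volume that is sparse directly in the sample domain;
# a real dataset would instead be sparse in a wavelet-type basis.
shape = (16, 16, 16)
n = int(np.prod(shape))
k = 40                                    # number of nonzero voxels (sparsity)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Random Gaussian sensing matrix with m ~ 10% of n measurements.
m = n // 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                            # compressive measurements

# ISTA iteration: x <- soft(x + t * A^T (y - A x), t * lam)
t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm
lam = 0.02                                # illustrative regularization weight
x = np.zeros(n)
for _ in range(1000):
    r = x + t * (A.T @ (y - A @ x))
    x = np.sign(r) * np.maximum(np.abs(r) - t * lam, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3e}")
```

The cost of solvers like this grows quickly with the volume size, which is why the report points to the need for reconstruction algorithms that scale beyond greedy and convex optimization methods.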

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1048508
Program Officer: Lawrence Rosenblum
Project Start:
Project End:
Budget Start: 2010-09-01
Budget End: 2012-08-31
Support Year:
Fiscal Year: 2010
Total Cost: $85,000
Indirect Cost:
Name: University of Florida
Department:
Type:
DUNS #:
City: Gainesville
State: FL
Country: United States
Zip Code: 32611