Sensor data is vital to the security, economy, and health of our nation. It comes in various formats and resolutions: visual, radar, lidar, laser scans, and multiple forms of medical imagery. Several approaches to the representation and visualization of point cloud data have emerged, but these developments have typically been ad hoc, each tailored to one form of sensor data. This project will develop new generic processing platforms based on the following key ingredients. The quality of fit will typically not be measured in the least squares sense but instead by metrics closely tied to the application domain, such as variants of the Hausdorff metric. The representation platform will fuse implicit (level set) methods with multiscale (wavelet-like) decompositions. The selection of optimal representations will be made with the aid of techniques from nonlinear approximation and learning theory. The proposed research will be applicable to generic point cloud data, but an emphasis will be placed on processing point cloud data that arise in applications such as autonomous navigation of robots and micro air vehicles, identification of sources of biological and chemical contaminants, and climate modeling.
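The implicit (level set) representation mentioned above can be illustrated with a toy example (purely illustrative, not part of the project): a closed curve such as a circle is stored not as a graph of a function but as the zero level set of a scalar function whose sign classifies points as inside or outside.

```python
# A circle of radius r as the zero level set of phi(x, y) = x^2 + y^2 - r^2.
# The sign of phi classifies points (inside < 0, on the curve = 0, outside > 0),
# something a graph-of-a-function representation cannot do for closed shapes.
def phi(x, y, r=1.0):
    return x * x + y * y - r * r

print(phi(0.0, 0.0))  # -1.0 -> inside
print(phi(1.0, 0.0))  #  0.0 -> on the curve
print(phi(2.0, 0.0))  #  3.0 -> outside
```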

Information about the world in which we live is obtained through sensors. These include medical imagery (MRIs, CAT scans), navigational tools (radar, LIDAR, sonar), and surveillance (satellite imagery, video). The data gathered by such sensors consist of an array of point values, called a point cloud. These data must be processed and visualized in order to extract the important information they hold. Such processing is done literally millions of times a day, and how it is done determines the quality of the information extracted. Most current methods for processing and visualizing sensor data are built on old ideas from image processing and fail to capture many of the data's important features, such as their geometry and topology. This project proposes sophisticated new techniques from mathematics and computer science to create more effective data processing. The development will be made with an eye toward the critical issues of accuracy and the speed at which the processing and rendering take place. A particular emphasis in the proposal is on terrain data, which are used daily in navigation, especially for robotics and unmanned air surveillance.

Project Report

Sensor data is vital to the security, economy, and health of our nation. It comes in various formats and resolutions: visual, radar, lidar, laser scans, and multiple forms of medical imagery. In the private sector, the emphasis is on high quality imaging, which leads to dense point clouds. In security scenarios, the data are often sparse and incomplete and therefore may be of poor quality, yet hold important embedded information. Nevertheless, decisions have to be made on the data available. Thus it is important to have accurate processing platforms that extract and visualize all salient information held by the point clouds. In many security scenarios, the processing must be done in real time. For example, in autonomous navigation of robots or micro air vehicles, large point clouds of terrain data are processed and information is extracted several times per second. Similarly, time-critical decisions are necessary in assessing risk after the detection of a contaminant plume, based upon the classification of the chemicals in its composition. Several approaches to the representation and visualization of point cloud data have emerged. These developments have typically been ad hoc, applying to one form of sensor data, and too often simply borrowed from image processing. Indeed, it is the contention of this project that the properties required of this representation in various applications are not always met. For example, for navigation it is important to capture the inherent high-order topology in terrain surfaces and to have accurate fields of view, but this is missed by direct functional methods that view the surfaces only as the graph of a function. In many applications, vital information is held at different scales, which calls for multiscale decompositions in the processing. This latter point is also important for compression and denoising.
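The idea of a multiscale (wavelet-like) decomposition can be sketched with a minimal one-level Haar transform (an illustration only; the project's decompositions are not specified here). The signal is split into coarse averages and fine-scale details, and the split is exactly invertible, which is what makes such decompositions useful for compression and denoising:

```python
# One level of the Haar transform: averages capture the coarse scale,
# details capture the fine scale; together they reconstruct the signal exactly.
def haar_decompose(signal):
    """Split an even-length list into pairwise averages and differences."""
    averages = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    details = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return averages, details

def haar_reconstruct(averages, details):
    """Invert haar_decompose exactly."""
    signal = []
    for a, d in zip(averages, details):
        signal.extend([a + d, a - d])
    return signal

data = [4.0, 2.0, 5.0, 7.0]
coarse, fine = haar_decompose(data)   # coarse = [3.0, 6.0], fine = [1.0, -1.0]
assert haar_reconstruct(coarse, fine) == data
```

Denoising in this picture amounts to discarding small entries of the detail sequence before reconstructing; compression amounts to storing only the significant coefficients across scales.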
Another important point is that the evaluation of the performance of the processing against ground truth is most often done with respect to a least squares fit. In many applications this metric is not commensurate with the intended application; for example, in surveillance activities the field of view needs to be accurately retained, which is better served by other metrics such as the Hausdorff distance between surfaces. Other critical needs are to make the processing amenable to learning tasks such as regression and classification. Some applications take place in high dimensions and therefore need special attention to avoid the curse of dimensionality in the computation. Although much of the research is applicable to generic point cloud data, an emphasis has been placed on processing point cloud data that arise in three specific targeted applications. First and foremost is terrain data (urban and natural), with the goal of autonomously navigating robots and micro air vehicles; such capability is vital in threat detection, damage assessment, and containment. Closely related is the identification of sources of biological and chemical contaminants, which combines processing of point cloud data from several sensors with solving the inherent inverse problems. Finally, processing atmospheric data for climate modeling was emphasized, since this capability is useful for tracking the migration of airborne contaminants. In addressing the Intellectual Merits of this project, algorithms for displaying surface reconstructions from non-uniformly sensed point cloud data were developed with a host of activities in mind, including the effective analysis of natural and adversarial threats. Previous attempts at designing such algorithms have been for the most part ad hoc and laden with serious deficiencies in many application domains.
This project has developed a coherent mathematical theory and resultant algorithms for processing point cloud data to address these deficiencies, and it has initiated the theoretical foundation for fast, quantifiable risk assessment in the classification of threats, which has the strong potential of safely reducing false positives in threat detection.
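For finite point clouds, the Hausdorff distance discussed above can be computed directly: it is the largest distance from any point of one set to its nearest neighbor in the other, symmetrized over the two sets. The brute-force sketch below is illustrative only and is not the project's implementation:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets (O(n*m))."""
    def directed(P, Q):
        # Largest nearest-neighbor distance from P into Q.
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 2.0)]
print(hausdorff(A, B))  # 2.0: the point (0, 2) lies 2 units from its nearest point of A
```

Unlike a least squares fit, this metric is driven by the single worst-matched point, which is why it better reflects requirements such as preserving the field of view.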

Agency: National Science Foundation (NSF)
Institute: Division of Mathematical Sciences (DMS)
Application #: 0915104
Program Officer: Leland M. Jameson
Project Start:
Project End:
Budget Start: 2009-09-01
Budget End: 2012-08-31
Support Year:
Fiscal Year: 2009
Total Cost: $450,001
Indirect Cost:
Name: University of South Carolina Research Foundation
Department:
Type:
DUNS #:
City: Columbia
State: SC
Country: United States
Zip Code: 29208