Sparse representations of a function are not only a powerful analytic tool but are also used in many application areas, such as image and signal processing and numerical computation. The backbone of finding sparse representations is the concept of m-term approximation of the target function by elements of a given system of functions (a dictionary). Since the elements of the dictionary used in an m-term approximation are allowed to depend on the function being approximated, this type of approximation, known as nonlinear approximation, is very efficient when the approximants can be found. Nonlinear approximation seeks ways to approximate complicated functions by simple ones using methods that depend nonlinearly on the function being approximated. Recently, a particular kind of nonlinear approximation, namely greedy approximation, has attracted a great deal of attention in both theoretical and applied settings. Greedy-type algorithms have proved to be very useful in various applications such as image compression, signal processing, the design of neural networks, and the numerical solution of nonlinear partial differential equations. The theory of greedy approximation is still emerging: some convergence results have already been established, but many problems remain unsolved. The fundamental question is how to construct good methods (algorithms) of approximation. The purpose of the proposed research is to design and study general nonlinear methods of approximation that are practically realizable. The proposed research will develop algorithms that are provably efficient with respect to convergence and rate of convergence.
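
As a concrete illustration of greedy m-term approximation (a textbook sketch, not the specific algorithms proposed in this project), the following Python snippet runs the classical pure greedy algorithm (matching pursuit) over a finite dictionary with unit-norm columns; the dictionary size, target vector, and function name are invented for the example.

```python
import numpy as np

def greedy_m_term(f, dictionary, m):
    """Pure greedy (matching pursuit) m-term approximation of the vector f.

    dictionary: array of shape (n, K) whose columns are assumed to have unit l2 norm.
    Returns the m-term approximant and the history of residual norms.
    """
    residual = f.astype(float).copy()
    approx = np.zeros_like(residual)
    history = []
    for _ in range(m):
        # pick the dictionary element most correlated with the current residual
        inner = dictionary.T @ residual
        k = np.argmax(np.abs(inner))
        # add its contribution to the approximant and update the residual
        approx += inner[k] * dictionary[:, k]
        residual -= inner[k] * dictionary[:, k]
        history.append(np.linalg.norm(residual))
    return approx, history

# usage: a random normalized dictionary and a target built from three of its elements
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
f = 2.0 * D[:, 5] - 1.5 * D[:, 40] + 0.7 * D[:, 100]
approx, hist = greedy_m_term(f, D, m=10)
print(hist[-1])  # residual norm after 10 greedy steps
```

The selected dictionary elements depend on f itself, which is what makes the approximation nonlinear in the sense described above.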

The goal of the proposed research is to carry out fundamental mathematical and algorithmic study to significantly increase our ability to process (compress, de-noise, etc.) large data sets. The main technique that will be used to achieve this goal is based on nonlinear sparse representations. Understanding how to process large data sets is one of the great scientific challenges of this decade; it is key to designing systems that efficiently analyze data and extract essential information. The scientific discipline that studies the process of replacing large data by smaller and simpler data is approximation theory. It has a myriad of existing and potential applications in both the defense and civilian sectors. For instance, managing large databases, such as security databases obtained through surveillance, requires processing the data sets in order to speed up the extraction of significant features or specific information.

Agency: National Science Foundation (NSF)
Institute: Division of Mathematical Sciences (DMS)
Type: Standard Grant (Standard)
Application #: 0554832
Program Officer: Joe W. Jenkins
Project Start:
Project End:
Budget Start: 2006-06-01
Budget End: 2009-08-31
Support Year:
Fiscal Year: 2005
Total Cost: $116,917
Indirect Cost:
Name: University of South Carolina Research Foundation
Department:
Type:
DUNS #:
City: Columbia
State: SC
Country: United States
Zip Code: 29208