Mathematically, the project deals with Geometric Measure Theory (GMT) and Harmonic Analysis. More specifically, we study questions on the interface of these two fields, using harmonic analysis techniques to attack questions in GMT. We study problems of the form "Find a biLipschitz map from a significant part of a given metric space to a Euclidean space", "Characterize the biLipschitz images of the Euclidean plane", or "When is a metric space built from biLipschitz images of standard pieces, and how does one find these pieces?" On the harmonic analysis side, these are related to questions of the form "When can a function be decomposed into a sum of nice functions, and how does one construct this decomposition?" This last question is a standard one in Littlewood-Paley and wavelet analysis, and transferring methods from these rich theories into the setting of GMT yields a significant toolbox.

This type of study in GMT is closely related to questions that arise in applications. In many applications one is given a large data set, represented as a subset of a metric space such as a high dimensional Euclidean space, and one seeks to faithfully represent a large portion of this data set as a subset of a low dimensional Euclidean space. Faithfully here means that one can still perform the same data mining tasks on the image of the data portion. Because of this connection to data mining, the above task has attracted much attention from computer scientists and applied mathematicians, who have brought a wide range of approaches to bear. The framework of dimensionality reduction also includes data compression and data approximation. These have applications in many areas of science, for example document analysis, face recognition, clustering, machine learning, nonlinear image denoising, segmentation, and processing.
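The decomposition question mentioned above, writing a function as a sum of nice pieces at different scales, can be illustrated in the simplest discrete setting. The following Python sketch (an illustrative toy, not a method from the project itself) decomposes a signal of length 2^k into a constant mean layer plus one mean-zero Haar detail layer per scale; all function names here are invented for the example.

```python
import numpy as np

def block_average(f, block):
    """Average f over consecutive blocks of the given size, then expand
    back to the original length: a projection onto a coarser scale."""
    avg = f.reshape(len(f) // block, block).mean(axis=1)
    return np.repeat(avg, block)

def haar_layers(f):
    """Decompose f (length 2^k) into a constant mean layer plus one
    mean-zero detail layer per scale, so f = mean_layer + sum(details)."""
    k = int(np.log2(len(f)))
    proj = [f] + [block_average(f, 2 ** j) for j in range(1, k + 1)]
    details = [proj[j] - proj[j + 1] for j in range(k)]  # telescoping sum
    return proj[-1], details

f = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 8.0, 6.0])
mean_layer, details = haar_layers(f)
# The telescoping sum reconstructs f exactly.
assert np.allclose(mean_layer + sum(details), f)
```

Each detail layer lives at a single scale and has zero average; this multiscale splitting is the discrete shadow of the Littlewood-Paley decompositions whose geometric analogues the project develops.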
The project is geared towards a better understanding of the geometry of collections of points in a given space. In many applications one is given data, which we think of as points in a metric space, and we want to map them to a low dimensional space which we understand better (such as a low dimensional Euclidean space). This data mining task is typically called dimensionality reduction, and the framework includes data compression and data approximation, with the range of applications described above. A key point is that quite often the data enjoys nice geometric properties and has more structure than a random set of points would have. One can study these geometric structures and use them to perform data mining tasks. There is a rich mathematical theory behind the study of such geometric structures, and this is what we develop.
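The point that structured data admits faithful low dimensional representations can be sketched in a toy Python example (again illustrative only, using off-the-shelf PCA via the SVD rather than the project's techniques). Points sampled near a hidden 2-dimensional plane inside R^50 are mapped to R^2 with almost no distortion of distances; all names and parameters below are chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with hidden low-dimensional structure: points on a random
# 2-dimensional plane inside R^50, perturbed by small noise.
n, D, d = 200, 50, 2
basis, _ = np.linalg.qr(rng.standard_normal((D, d)))  # orthonormal 2-frame
coords = rng.standard_normal((n, d))
X = coords @ basis.T + 0.001 * rng.standard_normal((n, D))

# Project onto the top d principal directions (PCA via the SVD).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:d].T  # the d-dimensional representation

# The map X -> Y nearly preserves pairwise distances (it is close to
# biLipschitz with constants near 1); check one pair.
d_high = np.linalg.norm(X[0] - X[1])
d_low = np.linalg.norm(Y[0] - Y[1])
ratio = d_low / d_high
```

A random set of points in R^50 would admit no such 2-dimensional representation; it is exactly the hidden geometric structure that makes the low-distortion map possible, and the project studies what "structure" means for far more general sets than an affine plane.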