Sparse decomposition algorithms adaptively expand a signal in terms of an overcomplete set of finite-support functions, called atoms, that together comprise a dictionary. These nonlinear algorithms aim to find a representation that is at once sparse, efficient, and robust, as well as informative and malleable. The investigators are modeling the energy content of sparse decompositions for a variety of signals in order to optimize performance. This work will lead to better adaptive atom-selection strategies for sparse decompositions, as well as dictionaries that are more coherent with the intrinsic structures of a class of signals. It will also yield a way to determine which terms of a sparse representation actually belong to the signal and which are artifacts of the decomposition. This research has implications for applications that rely on representations of waveforms, such as geological and biomedical data analysis, content retrieval, source separation, and music sound transformation.
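A canonical example of such an adaptive, greedy decomposition is matching pursuit, which repeatedly selects the dictionary atom most correlated with the current residual and subtracts its projection. The sketch below is a minimal NumPy illustration over a hypothetical random dictionary; the dictionary, signal, and parameters are assumptions chosen only to make the example self-contained, not the investigators' implementation.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy sparse decomposition: repeatedly pick the atom most
    correlated with the current residual and subtract its projection."""
    residual = signal.copy()
    coeffs, indices = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual      # inner products with every atom
        k = int(np.argmax(np.abs(correlations)))    # best-matching atom
        c = correlations[k]
        residual = residual - c * dictionary[:, k]  # remove its contribution
        coeffs.append(c)
        indices.append(k)
    return np.array(coeffs), np.array(indices), residual

# Hypothetical overcomplete dictionary: unit-norm random atoms (illustrative only).
rng = np.random.default_rng(0)
D = rng.standard_normal((256, 1024))
D /= np.linalg.norm(D, axis=0)

x = D[:, :3] @ np.array([1.0, 0.7, 0.4])            # signal built from three atoms
c, idx, r = matching_pursuit(x, D, n_atoms=10)
print("selected atoms:", idx)
print("residual energy:", np.sum(r**2))
```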
Sparse representations provide an attractive alternative to standard orthonormal expansions. One property of some algorithms for sparse representations is the creation of terms that are not physically meaningful but instead reflect the greediness of the algorithm. The investigators refer to this phenomenon as dark energy because these terms cancel in the signal reconstruction. Although previous work addressing these spurious terms has viewed them as a nuisance, there is evidence suggesting that dark energy embodies useful information about the signal and its coherence with the dictionary. It can also suggest better strategies for choosing atoms or for learning improved dictionaries. The investigators are exploring the nature of dark energy and its significance for the original signal, the dictionary, and the strategies used to generate decompositions.
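One simple way to see the cancellation that motivates the term dark energy is to look at the cross-term (interference) energy among the selected atoms: when it is negative, part of the decomposition's energy does not appear in the reconstructed signal. The measure below is only an illustrative assumption, not the investigators' definition of dark energy.

```python
import numpy as np

def interference_energy(coeffs, atoms):
    """Cross-term energy between selected atoms in the reconstruction.
    A negative value means the terms partially cancel one another,
    i.e. some of the decomposition's energy never reaches the signal."""
    recon = atoms @ coeffs                          # reconstruction from selected terms
    return np.sum(recon**2) - np.sum(coeffs**2)     # energy minus the orthogonal-atom case

# Two nearly anti-parallel unit-norm atoms (purely illustrative):
g1 = np.array([1.0, 0.1]);  g1 /= np.linalg.norm(g1)
g2 = np.array([-1.0, 0.1]); g2 /= np.linalg.norm(g2)
atoms = np.column_stack([g1, g2])
coeffs = np.array([1.0, 1.0])

print(interference_energy(coeffs, atoms))           # strongly negative: the terms cancel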