Many applications in signal processing and imaging depend critically on sparse representations of natural signals or images. This research addresses the development of improved sparse representations that are adapted directly to the data, rather than fixed a priori by general theoretical considerations. Such data-driven learning of sparse structure has been finding broad application, and the improvements over fixed representations are especially significant for high-dimensional data. This research aims to overcome limitations of current methods by reducing computation to enable scaling to big-data problems, improving the robustness and predictability of the outcome, and developing a theory that quantifies the expected performance and the factors affecting it. Applications are foreseen in all areas of science and engineering, including medical diagnostics, multimedia, defense, manufacturing, communications, database retrieval, and data analytics.
In particular, this research leverages a new formulation recently introduced by the PI for data-driven learning of "sparsifying transforms," which are relatives of analysis dictionaries. Initial results in image denoising show slightly better PSNR than with learned synthesis dictionaries, while requiring orders of magnitude less computation. Theory predicts better scaling with exemplar size, and experiments demonstrate robust convergence irrespective of initialization. The specific objectives of this research are to: (1) develop theory and scalable algorithms for learning sparsifying transforms; (2) develop theory for the joint learning of sparsifying transforms and signal recovery in compressed sensing and other inverse problems; and (3) demonstrate key large-scale applications with real data. These applications include: (i) denoising, restoration, and compressed sensing of 3D and 4D data in magnetic resonance imaging, computerized tomography, microscopy, and video; and (ii) image classification and recognition.
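To illustrate the flavor of such a formulation (a sketch only; the PI's actual formulation may differ in its regularizers and constraints, and the symbols $W$, $Y$, $X$, $\lambda$, $\mu$, $s$ are introduced here purely for illustration), a square sparsifying transform $W$ can be learned from a matrix $Y$ of training signals by jointly fitting a sparse code matrix $X$:
\[
\min_{W,\,X} \; \|W Y - X\|_F^2 \;-\; \lambda \log\lvert\det W\rvert \;+\; \mu \|W\|_F^2
\quad \text{s.t.} \quad \|x_i\|_0 \le s \;\; \forall i,
\]
where the log-determinant and Frobenius-norm penalties discourage trivial solutions such as a zero or rank-deficient transform, and the $\ell_0$ constraint enforces sparsity of each transform-domain code $x_i$. In contrast to synthesis dictionary learning, the sparse coding step in such transform models reduces to simple thresholding of $WY$, which is consistent with the large computational advantage noted above.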