This research centers on the design and analysis of matrix recovery algorithms using the breakthrough "approximate message passing" (AMP) framework recently proposed in the context of compressive sensing. The matrix recovery problems considered in this project include matrix completion, robust PCA, dictionary learning, compressive sensing under matrix uncertainty, and multinomial classification. In each case, the goal is to recover a pair of matrices from noisy observations of their product, where the observations may be warped by element-wise nonlinearities and where the matrices may exhibit structure such as group sparsity, non-negativity, and per-row/vector norm constraints. Such problems arise frequently across many subdisciplines of engineering, science, and medicine, and at ever-increasing data sizes, so there is a great need for an algorithmic approach that is accurate, robust, and computationally efficient.
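To make the problem class concrete, a generic formulation (with illustrative notation that is not drawn from the project description itself) is to observe a matrix \(\mathbf{Y}\) whose entries are generated element-wise from a likelihood acting on the product \(\mathbf{Z}=\mathbf{A}\mathbf{X}\):
\[
\mathbf{Z} = \mathbf{A}\mathbf{X}, \qquad y_{ml} \sim p_{Y|Z}(\,\cdot \mid z_{ml}), \qquad m=1,\dots,M,\ \ l=1,\dots,L,
\]
where the element-wise likelihood \(p_{Y|Z}\) captures noise, missing observations, or nonlinear warping, and structured priors on \(\mathbf{A}\) and \(\mathbf{X}\) encode properties such as group sparsity or non-negativity. Matrix completion, robust PCA, and dictionary learning all fit this template through different choices of likelihood and priors.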
In particular, the research involves 1) the extension of AMP methods from linear inference (as in compressive sensing) to bilinear inference; 2) the integration of message-passing strategies, inspired by turbo decoding in communications, that facilitate the exploitation of sophisticated structural priors; 3) the development of empirical-Bayesian methods that automatically tune the parameters of both the signal prior and the observation prior; 4) the explicit treatment of uncertainty in the assumed model; and 5) the exploitation of the AMP state-evolution framework for rigorous performance analysis.
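For reference, the linear-AMP baseline that item 1 extends can be sketched in a few lines. The following is a minimal illustration, not part of the project description, for the compressive-sensing model y = Ax + w with a sparse x; it uses a soft-thresholding denoiser and an Onsager correction, and the threshold rule and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, theta):
    """Element-wise soft-thresholding denoiser eta(v; theta)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp_lasso(y, A, n_iter=50, alpha=1.5):
    """Minimal AMP sketch for y = A x + w with sparse x.

    alpha (the threshold multiplier) and n_iter are illustrative
    choices, not tuned values.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        # Estimate the effective noise level from the residual.
        tau = alpha * np.sqrt(np.mean(z ** 2))
        # Denoise the pseudo-data x + A^T z.
        x_new = soft_threshold(x + A.T @ z, tau)
        # Onsager correction: residual scaled by the average
        # derivative of the denoiser (fraction of active entries).
        onsager = (z / m) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

The Onsager term is what distinguishes AMP from plain iterative thresholding; under standard large-system assumptions it is also what allows the per-iteration error to be tracked by the scalar state evolution \(\tau_{t+1}^2 = \sigma_w^2 + \tfrac{n}{m}\,\mathbb{E}\big[(\eta(X_0+\tau_t Z;\theta_t)-X_0)^2\big]\), the analysis framework referenced in item 5.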