In recent years, it has become clear that many signal estimation and detection problems in engineering, science, and statistics are significantly aided by modeling the signal as sparse in some basis. By now, a relatively comprehensive theory has been constructed for such signal models, yielding algorithms with provably good performance even when sampling far below the Nyquist rate. The structure of real-world signals often goes beyond simple sparsity, though. For example, the wavelet coefficients of natural scenes are not only sparse, but also exhibit persistence across the scales of the wavelet tree. Recent investigations show that such structure within sparsity can be exploited to yield gains in estimation performance, though existing results are somewhat limited. For example, existing approaches strive to find only the single "best" estimate, whereas many applications require the set of all reasonable estimates along with relative confidence values, i.e., "soft" estimates.
This research investigates soft inference strategies leveraging a statistical modeling framework based on hidden state variables. Here, e.g., using binary states would facilitate a sparse signal model, and using Markov structures on the binary states would facilitate structured sparsity. In particular, this research investigates iterative and sequential Bayesian approaches to soft inference, building on state-of-the-art algorithms used in noncoherent communication receivers that go by the names of "turbo equalization" and "sphere decoding." This research also investigates fundamental issues in communication over sparse fading channels. While existing approaches have focused on the problem of finding the "best" sparse channel estimate for subsequent use in a coherent decoding algorithm, communication theory instead prescribes a decoding metric based on model-averaging of soft sparse channel estimates.
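To make the hidden-state modeling idea concrete, the following is a minimal sketch (not the project's actual model) of how binary hidden states with a two-state Markov chain can generate structured-sparse signals: active coefficients cluster into runs, loosely mimicking the persistence seen in wavelet coefficients. The function name and the transition probabilities `p01` and `p10` are illustrative assumptions, not quantities taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_sparse_signal(n, p01=0.1, p10=0.4, sigma=1.0):
    """Draw a length-n signal whose support follows a 2-state Markov chain.

    Hypothetical illustration: state s[i] = 1 means coefficient i is
    "active" (nonzero, Gaussian-distributed); s[i] = 0 means it is zero.
    p01 = Pr(inactive -> active), p10 = Pr(active -> inactive).
    """
    # Stationary probability of the active state for this chain.
    p_active = p01 / (p01 + p10)
    s = np.empty(n, dtype=int)
    s[0] = int(rng.random() < p_active)
    for i in range(1, n):
        # Small p01 and p10 make runs of 0s and 1s persist,
        # i.e., structured (clustered) sparsity rather than i.i.d. support.
        stay_or_turn_on = p01 if s[i - 1] == 0 else 1.0 - p10
        s[i] = int(rng.random() < stay_or_turn_on)
    # Active coefficients are Gaussian; inactive ones are exactly zero.
    x = s * rng.normal(0.0, sigma, n)
    return s, x

s, x = markov_sparse_signal(1000)
```

Setting `p01 = p10 = p_active` instead recovers an unstructured (i.i.d. Bernoulli-Gaussian) sparse model, which is one way to see how the Markov structure strictly generalizes simple sparsity.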