Many signals arising from physiological and physical processes are not only non-stationary but also possess a mixture of sustained oscillations and non-oscillatory transients that are difficult to disentangle by linear methods. Examples include speech, biomedical, and geophysical signals. For example, EEG signals contain rhythmic oscillations (alpha waves, etc.), but they also contain transients due to measurement artifacts and non-rhythmic brain activity. This research program involves the development and application of new algorithms designed to decompose such signals into 'resonance' components: a high-resonance component is a signal consisting of multiple simultaneous sustained oscillations; a low-resonance component is a signal consisting of non-oscillatory transients of unspecified shape and duration. While frequency components are straightforwardly defined and can be obtained by linear filtering, resonance components are more difficult to define, and procedures to obtain them are necessarily nonlinear. It is envisioned that the decomposition of a non-stationary multi-resonance signal into resonance components will enable more effective use of existing processing methods specialized to each component. For example, sinusoidal modeling of speech is most efficient and effective for signals consisting primarily of sustained oscillations (high-resonance signals). On the other hand, time-domain and wavelet-domain methods are most effective for piecewise-smooth signals that are defined primarily by their transients or singularities (low-resonance signals). This research utilizes recent developments in signal processing, including sparse signal representations, morphological component analysis, constant-Q (wavelet) transforms with varying Q-factors, fast algorithms for L1-norm regularized linear inverse problems, and related algorithms.
The research consists of developing algorithms for resonance-based signal decomposition and generalizations, and assessing their effectiveness for the processing of signals arising from several physical and physiological processes.
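To make the idea concrete, a toy resonance-style decomposition can be sketched with morphological component analysis: an orthonormal DCT dictionary stands in for the high-resonance (oscillatory) part, the identity for the low-resonance (transient) part, and sparse coefficients for both are found by iterative soft thresholding (ISTA). This is an illustrative sketch under assumed toy parameters, not the project's algorithm, which uses tunable-Q wavelet transforms with different Q-factors.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy two-dictionary decomposition: y = oscillation + spikes.
# Dictionary 1: orthonormal DCT (models the high-resonance part);
# Dictionary 2: identity (models the low-resonance transients).
N = 256
n = np.arange(N)
osc = np.cos(2 * np.pi * 0.05 * n)                 # sustained oscillation
spikes = np.zeros(N)
spikes[[60, 150]] = [3.0, -2.5]                    # isolated transients
y = osc + spikes

# Basis pursuit denoising, min_w 0.5||y - A w||^2 + lam ||w||_1,
# with A = [IDCT, I]; solved by ISTA with step 1/L (L = 2 for this tight frame).
lam, L, n_iter = 0.1, 2.0, 300
w1 = np.zeros(N)                                   # DCT coefficients
w2 = np.zeros(N)                                   # spike coefficients
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
for _ in range(n_iter):
    r = y - idct(w1, norm='ortho') - w2            # residual
    w1 = soft(w1 + dct(r, norm='ortho') / L, lam / L)
    w2 = soft(w2 + r / L, lam / L)

x_high = idct(w1, norm='ortho')                    # oscillatory component
x_low = w2                                         # transient component
```

Because the oscillation is sparse in the DCT while the spikes are sparse in the identity, the L1 penalty routes each morphology to its cheaper dictionary, which is the essential mechanism behind sparsity-based decomposition.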
This project has led to the development of new methods for processing data arising in biomedicine, engineering, and other fields where conventional signal processing techniques are inadequate. The algorithms developed as part of this project have been applied, for example, to the reduction of measurement artifacts in recordings of human brain activity, the detection of nano-particles in modern biosensors, the analysis of murmurs in cardiac recordings, the analysis of high-frequency oscillations for epilepsy treatment, the detection of spindles in sleep EEG, the processing of radar and seismic data, and the detection of defects in engines and gearboxes (early fault detection is important for the prevention of accidents and economic loss in industry and critical facilities). The newly developed methods also contribute to the advancement of the field of signal processing; they have been published as articles in selective research journals and implemented in software made freely available on the Internet. As part of this project, technical tutorials have been written and contributed to the open-source online educational initiative cnx.org. In addition, to enhance the translation of new signal processing techniques into biomedical engineering, and to provide educational opportunities for students, an annual symposium on 'Signal Processing in Medicine and Biology' was initiated [held in New York City on Dec 10, 2011, Dec 1, 2012, and Dec 7, 2013; to be held in Philadelphia on Dec 13, 2014 and Dec 12, 2015]. Some of the specific technical outcomes include the following.

The Tunable Q-factor Wavelet Transform (TQWT): The analysis of many physiological time series, such as the electroencephalogram (EEG), calls for wavelet transforms with high Q-factors, meaning that the transform should be composed of highly oscillatory pulses. However, there is a scarcity of efficient discrete wavelet transforms for which the Q-factor can be easily tuned.
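The role of the Q-factor (center frequency divided by bandwidth) can be illustrated with a simple Gaussian-windowed cosine pulse: raising Q narrows the bandwidth and yields a longer, more oscillatory pulse. The sketch below is a generic Gabor-like atom for illustration only, not the TQWT wavelet; the bandwidth convention is an assumption chosen for simplicity.

```python
import numpy as np

def gabor_atom(fc, Q):
    """Gaussian-windowed cosine pulse with center frequency fc
    (cycles/sample) and quality factor Q = fc / bandwidth.
    Illustrates the Q-factor idea; this is not the TQWT wavelet."""
    bw = fc / Q                                  # approximate bandwidth
    sigma_t = 1.0 / (np.pi * bw)                 # time-domain spread
    t = np.arange(-4 * sigma_t, 4 * sigma_t)     # truncate at +/- 4 sigma
    return np.exp(-t**2 / (2 * sigma_t**2)) * np.cos(2 * np.pi * fc * t)
```

For a fixed center frequency, the high-Q atom is longer and contains more oscillation cycles than the low-Q atom, which is precisely the "highly oscillatory pulse" property needed for EEG-type signals.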
Consequently, it is common in biomedical signal processing to use the continuous wavelet transform, the computational expense of which rules out the most effective modern optimization-based signal processing algorithms. One outcome of this project is the TQWT, a fast discrete wavelet transform appropriate for the analysis of highly oscillatory signals (e.g., EEG). The TQWT provides a capability unique among existing transforms: the relevant parameters (Q-factor and over-sampling rate) can be specified explicitly, and its implementation is computationally efficient. The TQWT makes resonance-based signal decomposition practical by providing a fast implementation. It can be used for denoising, restoration, extrapolation, AM/FM decomposition, and other basic problems in signal processing. While originally motivated by the processing of physiological signals, the TQWT is broadly applicable to signals comprising sustained oscillations.

Simultaneous Low-Pass Filtering and Total Variation Denoising (LPF/TVD): This algorithm provides a new approach that simultaneously exploits the advantages of conventional linear filtering and of nonlinear sparsity-based signal processing. This is useful because linear filtering methods will likely always be more commonly used and more widely applicable than methods based on sparsity. (Sparsity methods require certain signal model assumptions to be satisfied and are more involved algorithmically.) The new LPF/TVD algorithm is based on non-smooth convex optimization, requires few parameters, and is computationally efficient; the efficiency is due to the way the LTI filter transfer function is incorporated into the objective function. Further, the tuning parameters are few in number, which makes the approach manageable, yet they provide sufficient flexibility to capture a rich set of signal behaviors.
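The sparse-derivative half of LPF/TVD can be illustrated with standard 1-D total variation denoising solved by majorization-minimization. The sketch below uses dense linear algebra for clarity (a practical implementation would exploit the banded structure of the system), and it shows only the TV step, not the joint estimation with the low-pass component; the parameters are illustrative.

```python
import numpy as np

def tvd_mm(y, lam, n_iter=50, eps=1e-8):
    """1-D total variation denoising,
        min_x 0.5 ||y - x||^2 + lam ||Dx||_1,
    by majorization-minimization (illustrative sketch; dense solves
    are used here for clarity, banded solvers in practice)."""
    N = len(y)
    D = np.diff(np.eye(N), axis=0)        # first-difference matrix, (N-1) x N
    DDT = D @ D.T
    Dy = D @ y
    x = y.copy()
    for _ in range(n_iter):
        # majorize |Dx|_1 at the current iterate, then solve the
        # resulting linear system (matrix inversion lemma form)
        Lam = np.diag((np.abs(D @ x) + eps) / lam)
        z = np.linalg.solve(Lam + DDT, Dy)
        x = y - D.T @ z
    return x
```

Each iteration solves a weighted least-squares problem whose weights come from the previous iterate, so the cost per iteration is dominated by one (banded, in practice) linear solve, which is what makes this family of algorithms fast.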
Transient Artifact Reduction Algorithm (TARA): As an extension of the LPF/TVD algorithm, TARA is a new method for the reduction of artifacts in biomedical time series. Compared to LPF/TVD, the algorithm is faster and more general in capability. The algorithm, based on a signal model involving sparsity in part, is able to reduce artifacts due to subject motion in near-infrared spectroscopic time-series imaging.

Overlapping Group Sparsity (OGS) Algorithm: Sparsity, in its simplest form, is not a realistic model for some types of signals. The performance of sparsity-based algorithms is therefore enhanced through the use of models for 'structured' sparsity. For many signals (or signal representations), high-amplitude values arise in sparse clusters or groups; for example, the spectrogram of speech is more accurately described as exhibiting 'group sparsity' rather than 'sparsity'. While group sparsity has been used implicitly in much signal processing work over the last 10 years, its explicit use is relatively recent. However, solving group sparsity problems is significantly more involved because overlapping groups couple the variables to one another. This project has led to a new fast iterative algorithm for group-sparsity denoising and a simple procedure to select the hyperparameter (regularization parameter). The new algorithm has been shown to be effective for speech denoising.
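An overlapping-group-shrinkage denoiser of this kind can be sketched as a short majorization-minimization loop: every sample belongs to K overlapping groups, and each iteration shrinks the signal according to the Euclidean norms of those groups. This is an illustrative sketch, not the published implementation, and `K`, `lam`, and `n_iter` are assumed example parameters.

```python
import numpy as np

def ogs_denoise(y, K, lam, n_iter=30, eps=1e-10):
    """Overlapping group shrinkage for the denoising problem
        min_x 0.5 ||y - x||^2 + lam * sum_i ( sum_{j<K} x[i+j]^2 )^{1/2},
    via a majorization-minimization loop (illustrative sketch)."""
    x = y.copy()
    ones = np.ones(K)
    for _ in range(n_iter):
        # r[m]: Euclidean norm of the group of K samples ending at index m
        r = np.sqrt(np.convolve(x**2, ones, mode='full'))
        # w[i]: sum of 1/r over the K groups containing sample i
        w = np.convolve(1.0 / (r + eps), ones, mode='valid')
        x = y / (1.0 + lam * w)             # pure shrinkage toward zero
    return x
```

Isolated small values are driven toward zero because all of their groups have small norms, while samples inside a high-energy cluster are shrunk only mildly; this is the structured-sparsity behavior the paragraph describes.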