Predicting or estimating a process, called the signal process, from a related process, called the measurement process, both of which usually involve randomness, is a fundamental problem in a broad range of fields. As the signal evolves and measurements arrive, an algorithm is needed that updates the prediction or estimate at each time instant using only the current measurement, without reprocessing the preceding measurements. Such an algorithm is called a filter. When the signal or measurement process is affected by an uncertain or changing environment, a filter that adapts to that environment is called an adaptive filter. In many applications, whether or not an uncertain or changing environment is involved, large individual estimation or prediction errors may cause undesirable or even disastrous consequences and must be avoided. A filter that reduces such large errors is called a robust filter; it must balance filtering accuracy against robustness.

Optimal filtering for nonlinear signal or measurement processes was a long-standing, notoriously difficult problem until neural filters were proposed in 1992. Although neural filtering has many advantages over its main competitor, the particle filter, the approach was long plagued by the local-minimum problem in training neural filters. That problem has now been overcome by the gradual deconvexification method, developed under a recent NSF grant, and neural filters are ready for application. The purpose of this project is to develop adaptive and robust neural filters.
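To make the recursive structure of a filter concrete, the following minimal sketch tracks a hypothetical scalar linear-Gaussian signal with a one-dimensional Kalman filter. It is only an illustration of the interface described above, in which each new estimate is computed from the previous estimate and the current measurement alone; it is not one of the neural filters this project develops, and all model parameters are assumed for the example.

```python
import numpy as np

def recursive_filter_demo():
    """Illustrative scalar Kalman filter: the new estimate depends only on
    the previous estimate and the current measurement, the recursive
    structure shared by every filter discussed in this project."""
    rng = np.random.default_rng(0)
    a, q, r = 0.95, 0.1, 0.5        # assumed transition coefficient and noise variances
    x, x_hat, p = 0.0, 0.0, 1.0     # true signal, estimate, estimate variance
    for t in range(100):
        x = a * x + rng.normal(scale=np.sqrt(q))   # signal evolves
        y = x + rng.normal(scale=np.sqrt(r))       # current measurement arrives
        # Predict from the previous estimate, then update with y only:
        x_pred, p_pred = a * x_hat, a * a * p + q
        k = p_pred / (p_pred + r)                  # Kalman gain
        x_hat, p = x_pred + k * (y - x_pred), (1 - k) * p_pred
    return x_hat

print(recursive_filter_demo())
```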

In particular, the following filters will be developed:

(1) Accommodative neural filters. Properly trained RNNs (recurrent neural networks) with fixed weights have been proven to possess adaptive capability and are called accommodative neural networks. Their weights are not adjusted online, an important advantage because the signal process is usually unavailable online for weight adjustment. An adaptive filter that is an accommodative neural network is called an accommodative neural filter (see the first sketch following this list).

(2) Adaptive neural filters with long- and short-term memories. The nonlinear and linear weights of an RNN affect its outputs in a nonlinear and a linear manner, respectively. If they are used as long- and short-term memories (LASTMs), respectively, it has been proven that the long-term memory can be trained offline for a range of environments and only the short-term memory needs to be adjusted online to adapt to the operating environment. A neural filter whose weights are organized this way is called an adaptive neural filter (with LASTMs); see the second sketch following this list. Such filters are expected to generalize better than accommodative neural filters.

(3) Robust neural filters. The risk-sensitivity index in the normalized risk-sensitive error (NRSE) criterion used to train a neural network determines its degree of robustness. Depending on whether the index is positive or negative, the NRSE averts large "risks" or ignores "outliers", inducing robust engineering performance or robust statistical performance, respectively (see the third sketch following this list). The existence of robust neural filters has been proven, and it has also been proven that as the risk-sensitivity index grows without bound, the NRSE criterion approaches the minimax criterion.

(4) Robust accommodative neural filters. If both adaptive and robust performance are required and online adjustment of the filter is undesirable, a robust accommodative neural filter can be used.

(5) Robust adaptive neural filters with long- and short-term memories. If both adaptive and robust performance are required and better generalization ability is desirable, a robust adaptive neural filter with LASTMs can be used.
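The first sketch illustrates the accommodative idea in item (1): a fixed-weight RNN filtering a measurement stream with no online weight adjustment, only hidden-state updates. The architecture, sizes, and the random weights (which stand in for weights that would come from offline training) are assumptions for illustration.

```python
import numpy as np

class AccommodativeNeuralFilter:
    """Fixed-weight RNN used as a filter: the weights never change online;
    only the hidden state is updated as measurements arrive. The random
    weights below are placeholders for weights obtained by offline training."""
    def __init__(self, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, 1))          # measurement -> hidden
        self.W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # hidden -> hidden
        self.W_out = rng.normal(scale=0.5, size=(1, n_hidden))         # hidden -> estimate
        self.h = np.zeros((n_hidden, 1))                               # hidden state

    def step(self, y):
        """Process one measurement y and return the current signal estimate."""
        self.h = np.tanh(self.W_in * y + self.W_rec @ self.h)
        return (self.W_out @ self.h).item()

f = AccommodativeNeuralFilter()
estimates = [f.step(y) for y in (0.1, 0.3, -0.2, 0.5)]
```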
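The second sketch illustrates the LASTM split in item (2): the nonlinear hidden-layer weights (long-term memory) stay fixed after offline training, while the linear output weights (short-term memory) are adapted online, here by a standard recursive-least-squares (RLS) update. The RLS rule, and the assumption that a training target is occasionally available online, are illustrative choices rather than the project's specific algorithm.

```python
import numpy as np

class LastmFilterSketch:
    """LASTM illustration: nonlinear weights are the long-term memory (fixed
    after offline training); the linear output weights are the short-term
    memory, adapted online by RLS when a training target is available."""
    def __init__(self, n_hidden=16, lam=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, 1))          # long-term memory (fixed)
        self.W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # long-term memory (fixed)
        self.w_out = np.zeros(n_hidden)      # short-term memory (adapted online)
        self.P = np.eye(n_hidden) * 100.0    # RLS inverse-correlation matrix
        self.lam = lam                       # RLS forgetting factor
        self.h = np.zeros((n_hidden, 1))     # hidden state

    def step(self, y, target=None):
        """Estimate the signal from measurement y; if a target is available,
        adapt the linear output weights with one RLS update."""
        self.h = np.tanh(self.W_in * y + self.W_rec @ self.h)
        phi = self.h.ravel()
        est = self.w_out @ phi
        if target is not None:
            k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # RLS gain
            self.w_out += k * (target - est)
            self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return est

f = LastmFilterSketch()
est = f.step(0.4, target=0.35)   # adapt short-term memory when a target exists
est = f.step(0.5)                # pure filtering when no target is available
```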
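The third sketch shows the robustness mechanism behind item (3) using a generic risk-sensitive error of the exponential form (1/lam) * ln E[exp(lam * e^2)]; the project's normalized NRSE is a variant of this, so the exact formula below is an assumption. The demo shows that a positive index weights large errors heavily, a negative index discounts outliers, and as the index grows the criterion approaches the maximum squared error, i.e., the minimax criterion.

```python
import numpy as np

def risk_sensitive_error(errors, lam):
    """Generic risk-sensitive error (1/lam) * log(mean(exp(lam * e^2))),
    computed in log space for numerical stability. lam > 0 emphasizes large
    errors (robust engineering performance); lam < 0 de-emphasizes outliers
    (robust statistical performance)."""
    e2 = np.square(errors)
    return (np.logaddexp.reduce(lam * e2) - np.log(len(e2))) / lam

errors = np.array([0.1, 0.2, 0.15, 2.0])   # one outlier
for lam in (-5.0, 0.1, 5.0, 50.0):
    print(lam, risk_sensitive_error(errors, lam))
# As lam grows, the value approaches max(e^2) = 4.0, the minimax criterion.
```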

Budget Start: 2015-07-15
Budget End: 2020-06-30
Fiscal Year: 2015
Total Cost: $340,736
Name: University of Maryland Baltimore County
City: Baltimore
State: MD
Country: United States
Zip Code: 21250