The investigator develops new methods for analysing nonstationary time series and their properties. Many methods in time series analysis are developed under the premise that the observations are stationary. This assumption simplifies both the estimation procedure and the asymptotic analysis; however, in practice it is often quite unrealistic. Ignoring nonstationarity in the data and treating the observations as if they were stationary can give misleading conclusions. It is therefore important to develop methods for dealing with data that is either temporally or spatially nonstationary. The investigator focuses on three areas where, in applications, nonstationarity can arise: (i) statistical inference for time-varying ARCH-type processes; (ii) nonstationary random correlated (stochastic) coefficient regression models; (iii) analysis of spatially nonstationary spatio-temporal models. These are summarised below.

The investigator develops methods which test for or track structural changes in time-varying ARCH and GARCH processes (a simulated example of such a process is sketched below). In order to derive sampling properties for the proposed methods, mixing of the time-varying ARCH-type processes is required, and the investigator studies the mixing properties of such processes.

Random correlated coefficient regression (RCCR) models are often used to explain the nonstationarity seen in data. Despite their advantages, until recently the statistical analysis of RCCR models has been quite limited. The investigator develops statistically sound and computationally efficient parameter estimation methods for RCCR models.

Observations from spatio-temporal processes arise in several disciplines, and several factors could cause the observations to come from a spatially nonstationary process. The investigator studies spatially nonstationary spatio-temporal processes, in particular methods which decompose estimates of the model into a global spatially stationary process and an additional locally nonstationary term.
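To make the first strand concrete, the following Python sketch simulates a time-varying ARCH(1) process, with the ARCH coefficients treated as functions of rescaled time u = t/n. The particular coefficient functions, the Gaussian innovations and the rolling-window variance diagnostic are illustrative assumptions for this sketch only; they are not the investigator's estimation or testing procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_tvarch1(n, a0, a1):
        # Simulate a time-varying ARCH(1) process X_t = sigma_t * Z_t with
        # sigma_t^2 = a0(t/n) + a1(t/n) * X_{t-1}^2 and Z_t i.i.d. N(0, 1).
        # The coefficients a0, a1 are functions of rescaled time u = t/n.
        x = np.zeros(n)
        z = rng.standard_normal(n)
        for t in range(1, n):
            u = t / n
            sigma2 = a0(u) + a1(u) * x[t - 1] ** 2
            x[t] = np.sqrt(sigma2) * z[t]
        return x

    # Illustrative coefficient functions: a slowly varying baseline volatility and
    # a persistence parameter that drifts upwards over the sample (kept below 1).
    a0 = lambda u: 0.5 + 0.3 * np.sin(2 * np.pi * u)
    a1 = lambda u: 0.2 + 0.5 * u

    x = simulate_tvarch1(2000, a0, a1)

    # A crude rolling-window variance tracks the structural change that a
    # stationary ARCH fit to the whole sample would average away.
    window = 200
    rolling_var = np.array([x[max(0, t - window):t].var() for t in range(1, len(x) + 1)])

The point of the sketch is simply that the variance structure of such a process changes smoothly over the sample, which is the kind of structural change the proposed tests and tracking methods are designed to detect.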
In several disciplines it is assumed that the main characteristics of data observed over time (usually known as a time series), for example the volatility, are not influenced by time. This time-invariance property is known as stationarity, and it is the underlying assumption in many current statistical methodologies because it often simplifies the analysis. However, statistical methods which overlook nonstationarity can lead to misleading or incorrect conclusions. There are several real data examples where there is empirical evidence to suggest that stationarity is an oversimplification. A particularly pertinent example is global temperature anomalies, where there is ample evidence that both the average temperature and its variation have changed over the past 150 years. In this project we develop statistical methods for nonstationary time series, in particular to identify where changes have occurred and the factors which have caused them. By developing methods that do not ignore the nonstationarity, we are better able to understand the mechanisms driving the data, which leads to better forecasts. These methods can be applied to a wide range of subjects, including economics (identifying factors behind the current credit crunch) and climatology (testing whether the rise in CO2 levels has an influence on the amount of variation in global temperatures).
Data which is observed over time is usually called a time series, and can arise in disciplines as diverse as finance and the geosciences. The main factor that distinguishes a time series from other types of data is that there is a dependency in the data which usually diminishes the further apart in time the observations are. A correct analysis of a time series needs to take this dependency into account, but to simplify the analysis it is typically assumed that the time series is stationary, in other words that its structure is `stable' and does not change over time. However, as demonstrated by the recent financial crisis, the assumption of stationarity/stability can sometimes be extremely precarious. An analysis of a time series which does not take this into account can, in the best case, lead to unreliable measures of reliability and, in the worst case, to incorrect conclusions. The objective of the project was to address some pertinent issues related to the statistical analysis of nonstationary time series.

The research conducted by the PI over the duration of the project followed three main strands: detecting nonstationarity, modelling nonstationarity and the theoretical analysis of nonstationary time series.

In the case of detecting nonstationarity, two issues were considered. One was to detect change points in the structure of a time series, with specific application to financial time series (such as the FTSE). In this project it was found that the change points detected corresponded to important events in the recent financial crisis; therefore, any statistical analysis which did not take these changes into account could have had severe repercussions. The methods developed in this part of the project were specifically aimed at detecting sharp changes in the time series. However, changes can also occur gradually over time. To detect these types of changes the PI developed a simple test for so-called second order stationarity (a specific type of stationarity) based on a linear transformation of the data called the discrete Fourier transform; a minimal sketch of the underlying idea is given after this summary.

Once nonstationarity of the time series is established it is often necessary to predict future observations, and this requires modelling the nonstationarity with a simple, parsimonious model. To address this issue the PI showed that linear regression (one of the most widely used statistical models), where the coefficients are allowed to `vary' randomly, can be used to model various types of nonstationarity.

The underlying motivation behind this project was to develop simple methods that a nonspecialist could use for the analysis of nonstationary time series. However, in order to show that the proposed methods give reliable results it was necessary to investigate them from a theoretical perspective. In order to do this the PI developed some theoretical tools, including showing that the dependency structure of various time series models diminishes over time, thereby making them suitable models in various applications.
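The second-order stationarity test mentioned above rests on a property of the discrete Fourier transform: for a second-order stationary series, the DFT ordinates at distinct Fourier frequencies are asymptotically uncorrelated, so a large sample covariance between neighbouring ordinates points towards nonstationarity. The Python sketch below illustrates only this underlying idea; the unstandardised statistic and the simulated examples are simplifications introduced here, not the published test, which requires standardisation by the spectral density and formal critical values.

    import numpy as np

    def dft_ordinates(x):
        # DFT ordinates J(omega_k) at the Fourier frequencies omega_k = 2*pi*k/n,
        # normalised so that |J(omega_k)|^2 is the periodogram.
        n = len(x)
        x = x - x.mean()
        return np.fft.fft(x)[1:] / np.sqrt(2 * np.pi * n)   # drop the zero frequency

    def dft_lag_covariance(x, r=1):
        # Sample covariance between DFT ordinates r Fourier frequencies apart.
        # For a second-order stationary series this is close to zero; a
        # time-varying variance pushes it away from zero.
        J = dft_ordinates(x)
        m = len(J)
        return np.mean(J[: m - r] * np.conj(J[r:]))

    rng = np.random.default_rng(1)
    n = 2000

    stationary = rng.standard_normal(n)
    nonstationary = rng.standard_normal(n) * np.linspace(0.5, 2.0, n)  # slowly changing variance

    print(abs(dft_lag_covariance(stationary)))     # close to zero
    print(abs(dft_lag_covariance(nonstationary)))  # noticeably larger

Because the diagnostic works in the frequency domain rather than looking for a single break, it is sensitive to the gradual changes that the sharp change point methods described above are not designed to pick up.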