The project will investigate the estimation of volatility-like objects in the context of the hidden semi-martingale model. Apart from volatility itself, we are concerned with co-variations, ANOVA, the leverage effect, and related quantities. The project uses ideas from contiguity and unbiased estimation to construct such estimators. Data are assumed to be observed at high frequency, so that small-interval asymptotics apply. A main part of the project concerns applications of these estimators. The investigator's earlier findings on nonparametric, trading-based risk management for options will be interfaced with the estimators to obtain complete procedures for safely unwinding dangerous positions. The estimators will also be combined with forecasting techniques to provide high-frequency-based competitors to latent-volatility models such as GARCH. Here we can draw on the martingale-type error structure of the high-frequency estimates. The economic value of the estimators for portfolio management will also be investigated.

A main background for the project is the increasing availability of high-frequency data on financial securities prices. These data permit, in principle, very precise determination of volatility and similar characteristics of prices. The investigator's finding that prices behave as if they carry measurement error, however, raises a number of questions about how the statistical analysis should be carried out. This project is concerned both with estimation and with applications to risk management, forecasting, portfolio management, and regulation. The results are of interest to investors, regulators, and policymakers.

Project Report

Recent years have seen financial instruments traded at increasing speed. This has created an explosion in the availability of high-frequency financial data. The project has been concerned with the question of how to turn these data into knowledge. The following are some of the highlights of our findings.

The intellectual merit of the activity.

(1) The project has established that many features of the data can be summarized in estimators of the following quantities: volatility, skewness, and the volatility of volatility itself. These quantities can be estimated with great precision ("consistently") over short time periods, ranging from five minutes to a few weeks. We conjecture, and have gone some way towards a mathematical proof, that these three quantities in fact exhaust the short-run information in a single data series. Similar results hold for high-dimensional series.

(2) The project merges two traditionally separate areas of research: statistical inference and the theory of fair games ("martingales"). Traditional finance theory relies on mathematical results to the effect that securities prices are, in the short run, indistinguishable (in the sense of being "semi-martingales") from fair games. We show that this is not the case in most data, and that prices carry an error similar to statistical measurement error (see Figure 1 for an example). Estimators are therefore developed and analyzed in this more general setting, which we refer to as a "latent" semi-martingale model.

(3) In a very general development, we have shown that high-frequency data can be analyzed as if the volatility were constant over small periods of time, with a statistical ("likelihood ratio") ex-post correction. This result connects with the statistical concept of contiguity and permits the use of parametric inference methods.
(4) Our results are of a mathematical nature and apply to other areas of research where high-frequency data occur, such as neuroscience or climate science.

The broader impacts resulting from the proposed activity.

The data are mostly financial. Our way of capturing the data in a low-dimensional set of summary quantities greatly increases transparency in financial markets, which is helpful to investors, regulators, and policymakers alike. In particular, the summaries (1) help risk management, (2) help to monitor and understand high-frequency trading, and (3) provide empirical measurements of volatility, skewness, etc., which are traditionally obtained as "implied" quantities by marking options to market (a process subject to additional market sentiment and to modeling assumptions). On the latter point, our estimates provide a corrective to derivatives pricing, something that is useful given market events over the last several years. They also show that there is some diversity in the precision with which different parameters can be estimated or otherwise pinned down. There is furthermore a tie to macroeconomic and long-run quantities, which will also help to shed light on basic economic processes. The results are also useful in biological settings. Graduate students have been trained through working with the PI on the project, and most of them remain in the university sector.

In summary, the project can be seen as a mathematical and statistical exploration of high-frequency recordings of a stochastic process. At the same time, it has brought mathematical innovation to the study of markets at a time when financial education and regulation need to be reinvented. We believe that a better integration of empirical and theoretical financial techniques will help protect against some of the mistakes that caused the recent financial crisis. Our research continues.
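The effect described in point (2) can be illustrated with a small simulation. The sketch below is not the project's estimator: it assumes constant volatility, i.i.d. Gaussian measurement noise, and illustrative parameter values, and it applies the well-known two-scale (subsampled) realized-variance correction from the literature as one example of how the noise bias can be removed.

```python
import numpy as np

rng = np.random.default_rng(0)

# One "trading day" of log prices: dX = sigma dW (constant volatility),
# observed with i.i.d. measurement noise. All parameter values are assumptions.
n = 23400                        # one observation per second, 6.5-hour day
sigma = 0.2 / np.sqrt(252)       # daily volatility from 20% annualized
noise_sd = 5e-4                  # microstructure-noise standard deviation

X = np.cumsum(sigma / np.sqrt(n) * rng.standard_normal(n))  # efficient price
Y = X + noise_sd * rng.standard_normal(n)                   # observed price

def realized_variance(y, k=1):
    """Sum of squared returns, sampling every k-th observation."""
    r = np.diff(y[::k])
    return np.sum(r**2)

true_iv = sigma**2                      # integrated variance over the day
rv_all = realized_variance(Y, 1)        # finest scale: dominated by noise (~2*n*noise_sd^2)
rv_sparse = realized_variance(Y, 300)   # 5-minute sampling: less bias, more variance

# Two-scale-style correction: average the K subsampled realized variances,
# then subtract the noise bias estimated from the finest scale.
K = 300
rv_sub = np.mean([realized_variance(Y[j:], K) for j in range(K)])
n_bar = (n - K + 1) / K
tsrv = rv_sub - (n_bar / n) * rv_all
```

At the finest sampling scale the noise term swamps the integrated variance, which is exactly the "prices behave as if they have measurement error" phenomenon; the bias-corrected two-scale estimate lands much closer to the true value than the naive one.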
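Point (3) — analyzing the data as if volatility were constant over small blocks — can likewise be sketched in a few lines. This shows only the first, block-constant step in a noise-free setting; the likelihood-ratio ex-post correction from the contiguity argument is not shown, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Spot volatility varying slowly through one "day" of n observations.
n = 23400
dt = 1.0 / n
t = np.arange(n) * dt
spot_vol = 0.01 * (1.5 + np.sin(2 * np.pi * t))       # true time-varying volatility
returns = spot_vol * np.sqrt(dt) * rng.standard_normal(n)

# Blockwise analysis: within each block, act as if volatility is constant.
# The Gaussian MLE for a constant volatility is then sigma_hat^2 = mean(r^2) / dt.
block = 390                                           # ~6.5 minutes of 1-second data
r_blocks = returns.reshape(-1, block)                 # 60 blocks
block_vol = np.sqrt(np.mean(r_blocks**2, axis=1) / dt)
```

Even this crude locally-constant approximation tracks the slowly varying true volatility closely, which is what makes parametric inference methods usable block by block.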

Agency: National Science Foundation (NSF)
Institute: Division of Mathematical Sciences (DMS)
Application #: 0604758
Program Officer: Gabor J. Szekely
Project Start:
Project End:
Budget Start: 2006-09-01
Budget End: 2012-08-31
Support Year:
Fiscal Year: 2006
Total Cost: $220,000
Indirect Cost:
Name: University of Chicago
Department:
Type:
DUNS #:
City: Chicago
State: IL
Country: United States
Zip Code: 60637