Forecasting is a central activity in economic life. Consumers, governments, and firms base their decisions partly on professional forecasts. In complex environments, self-declared experts both volunteer and sell predictions on subjects as varied as health, the economy, the stock market, politics, and the weather. However, it is not easy to determine whether professional forecasts are the product of any relevant knowledge. Hence, a basic question is whether (and how) it is possible to distinguish the predictions of knowledgeable experts from the predictions of impostors.

This leads to the question of how to test forecasts empirically. This project uses tools from economic theory to consider one particular problem with forecast testing. If professional forecasts must be tested empirically, then the test itself may induce experts to forecast strategically in order to pass it. This is problematic: testing forecasts, which is necessary precisely because the experts' quality is unknown, may induce the experts to misrepresent what they know. This research project examines whether many empirical tests (including tests frequently used in practice) can be manipulated by fraudulent experts.

In particular, the research studies whether it is possible to construct forecasting strategies, based on no relevant knowledge, that are guaranteed to pass the test no matter how the data evolve in the future. If such a strategy exists, a fraudulent 'expert' can forecast in this way, confident of passing the test even though he has no real knowledge of the subject.

A good example of a manipulable test is the calibration test: it requires that the empirical frequency of an event be close to p on the days the event in question was forecast to occur with probability p. The calibration test, however, is just one example of a manipulable test. Preliminary results suggest that many empirical tests used in practice are manipulable. This shows that it is difficult to dismiss fraudulent, but strategic, experts. In addition, the project constructs novel empirical tests that true experts will pass and that fraudulent experts cannot manipulate. In the process, the project delivers a better understanding of how data and theory must be made consistent with each other. Moreover, it offers new insights on a series of basic concepts in science, such as the need for classification, falsification, the use of priors, and the use of paradigms.
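To make the calibration test concrete, here is a minimal sketch (not part of the project itself) of a finite-sample calibration check: for each announced probability p, it compares p with the empirical frequency of the event on the days p was forecast. The function names and the tolerance parameter are illustrative assumptions, standing in for the asymptotic test described above.

```python
from collections import defaultdict

def calibration_gaps(forecasts, outcomes):
    """For each announced probability p, return |p - empirical frequency
    of the event on the days p was forecast|."""
    counts = defaultdict(int)  # days on which each p was announced
    hits = defaultdict(int)    # days the event occurred among those
    for p, y in zip(forecasts, outcomes):
        counts[p] += 1
        hits[p] += y
    return {p: abs(hits[p] / counts[p] - p) for p in counts}

def passes_calibration(forecasts, outcomes, tol=0.1):
    """Pass if every announced probability is within tol of its
    empirical frequency (a finite-sample stand-in for the test)."""
    return all(gap <= tol for gap in calibration_gaps(forecasts, outcomes).values())

# A forecaster who announces 0.5 on days where the event occurs half
# the time is calibrated; one who announces 0.9 for an event that
# never occurs is not.
print(passes_calibration([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))  # True
print(passes_calibration([0.9, 0.9, 0.9, 0.9], [0, 0, 0, 0]))  # False
```

The manipulability result referenced above is about strategies (typically randomized) that pass such a check against every possible outcome sequence; this sketch only shows what the check itself measures.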

The intellectual merit of the project arises from the innovative strategies it develops for manipulating empirical tests and from the construction of novel empirical tests that are immune to manipulation. The project may have broad impact, as it shows how ideas from economics and game theory can be combined with concepts from probability and statistics to deliver a formal analysis of several basic concepts in science. This leads to a better understanding of these fundamental concepts and to novel ways of testing theories.

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Application #: 0820472
Program Officer: Nancy A. Lutz
Project Start:
Project End:
Budget Start: 2008-08-01
Budget End: 2010-12-31
Support Year:
Fiscal Year: 2008
Total Cost: $212,675
Indirect Cost:
Name: University of Pennsylvania
Department:
Type:
DUNS #:
City: Philadelphia
State: PA
Country: United States
Zip Code: 19104