This research examines expressions of uncertainty in the form of interval estimates. For example, a realtor might estimate the market value of a home to be between $500,000 and $525,000; more (less) uncertainty about the value would presumably result in a wider (narrower) interval. Previous research has shown that people tend to be overconfident when reporting confidence intervals, or CIs (e.g., "I'm 90% confident that the population of San Diego is between 1 million and 1.5 million"). That is, X% CIs tend to be much too narrow and contain the true value much less than X% of the time. Such overconfidence can matter a great deal. For example, a leading U.S. manufacturer elicited a projected range of sales from its marketing staff in order to plan the production capacity of a new factory. The range turned out to be too narrow, and the new factory could not meet the unexpectedly high demand.
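As a concrete reading of this calibration claim, the following minimal sketch (ours, not taken from the research described here) computes the empirical hit rate of a set of reported 90% CIs; well-calibrated intervals should contain the true value about 90% of the time, whereas overconfident intervals hit far less often. The interval and true-value figures are invented purely for illustration.

```python
def hit_rate(intervals, true_values):
    """intervals: list of (low, high) pairs; true_values: matching list of correct answers."""
    hits = sum(low <= truth <= high for (low, high), truth in zip(intervals, true_values))
    return hits / len(intervals)

# Hypothetical data: three reported 90% CIs, only one of which contains the truth.
cis = [(1_000_000, 1_500_000), (400_000, 450_000), (2_000_000, 2_200_000)]
truths = [1_380_000, 700_000, 2_700_000]
print(hit_rate(cis, truths))  # 0.33..., far below the 0.90 that calibrated 90% CIs imply
```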

We examine both how people produce interval estimates and the closely related question of how people evaluate them. In evaluation tasks, evaluators are presented with interval estimates reported by multiple producers in response to the same question, are told the true value, and are asked which estimate is best. People tend to trade off an interval's accuracy (how close its midpoint is to the true value) against its informativeness (how narrow it is). For example, evaluators sometimes judge intervals that do not contain the true value to be superior to those that do, but only when the former are narrow, or highly informative. We propose a formal model to account for such behavior. Assuming that producers' subjective probability distributions are normal, the model computes, for each interval, the subjective probability density at the true value; the interval with the highest density at the true value is judged superior. The model's only free parameter is how much confidence the producer is assumed to have in the reported interval (e.g., whether the evaluator treats it as a 50% CI or a 90% CI). Preliminary results indicate that the new model outperforms competing models.
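To make the proposed evaluation rule concrete, here is a minimal sketch under its stated assumptions: the evaluator treats each reported interval as a central CI at some assumed confidence level (the model's single free parameter) drawn from a normal subjective distribution, recovers that distribution's mean and standard deviation from the interval's midpoint and width, and scores the interval by the density it places on the true value. The function name and the numerical example are ours, chosen only for illustration.

```python
from statistics import NormalDist

def density_at_truth(low, high, truth, assumed_confidence=0.9):
    # Treat (low, high) as a central CI at `assumed_confidence` from a normal
    # subjective distribution: midpoint = mean, half-width = z * sigma.
    mu = (low + high) / 2
    z = NormalDist().inv_cdf((1 + assumed_confidence) / 2)   # ~1.645 for an assumed 90% CI
    sigma = (high - low) / (2 * z)
    return NormalDist(mu, sigma).pdf(truth)                  # density placed on the true value

# Hypothetical comparison: a wide interval that contains the truth near its edge
# versus a narrow interval that just misses it.
truth = 1_380_000
wide_hit = density_at_truth(1_300_000, 2_300_000, truth)     # contains the truth
narrow_miss = density_at_truth(1_400_000, 1_500_000, truth)  # misses, but is informative
print(narrow_miss > wide_hit)  # True: the narrow miss scores higher under the model
```

For this (invented) pair, the wide interval contains the truth but places little density on it because its midpoint sits far away, so the model ranks the informative near-miss higher, mirroring the evaluation pattern described above.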

Because we view interval production and evaluation as two sides of the same coin -- how intervals are evaluated may shape how they are produced -- the results of the proposed evaluation experiments will guide our research on production. In addition, we are conducting a complementary line of research on production itself. Our starting point is the finding that producers are largely insensitive to explicit probabilities manipulated between subjects, yet sensitive to probabilities manipulated within subjects. Our experiments will reveal subjects' expectations about their hit rates (e.g., do subjects even expect their 90% CIs to contain the true value more often than their 50% CIs?) and the nature of their within-subject sensitivity to explicit probabilities (e.g., are producers responding to the explicit probabilities themselves, or merely widening and narrowing their intervals in the direction implied by the numbers?). These experiments will answer basic questions about interval production that have both applied and theoretical implications.

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Application #: 0551225
Program Officer: Jacqueline R. Meszaros
Budget Start: 2006-05-01
Budget End: 2010-04-30
Fiscal Year: 2005
Total Cost: $250,000
Name: University of California San Diego
City: La Jolla
State: CA
Country: United States
Zip Code: 92093