This planning project will develop a probabilistic model for evaluating results from peer-reviewed competitions and identify competitions with which to test it. Standard test-retest replications of peer review competitions have limited utility in guiding those responsible for the design and conduct of such competitions toward cost-effective measures to enhance their reliability. This project uses a probabilistic model that builds on a simple but elegant approach first presented by Stinchcombe and Ofshe. The model assumes knowledge of the "true" distribution of a set of applications with respect to merit and then tests for the probabilities of errors (differences between the true and the judged distributions) under varying circumstances. This planning grant will allow the investigator to complete the research needed to prepare a paper explaining the model. He will also identify agencies whose competitions would provide suitable data on key parameters (inter-judge correlation levels and variations in judges' rating standards) for testing the model, and he will ask their program officers for the terms under which they could contribute data to a larger set assembled for this purpose. NSF will receive an overview report specifying the range of competitions from which data could be obtained and the coding format that would make the data accessible and useful to other scholars pursuing peer review issues.
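The abstract gives no formulas, but the model's logic can be illustrated with a small Monte Carlo sketch. The code below is purely hypothetical, not the investigator's actual model: it draws a "true" merit score for each application, gives each judge a noisy view of that score plus a judge-specific leniency shift (varying rating standards), and measures how often the judged top group differs from the true top group. All names and parameter values (simulate_error_rate, rho, judge_sd, and so on) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate_error_rate(n_apps=100, n_judges=3, rho=0.5,
                        judge_sd=0.3, n_awards=20, n_trials=2000):
    # Illustrative sketch only; parameters are hypothetical, not from
    # the proposal. rho is each judge's correlation with true merit,
    # so the implied inter-judge correlation is rho**2; judge_sd sets
    # the spread in judges' rating standards (leniency).
    miss = 0.0
    for _ in range(n_trials):
        true_merit = rng.standard_normal(n_apps)
        # Each judge sees merit through independent noise plus a
        # judge-specific leniency offset.
        noise = rng.standard_normal((n_judges, n_apps))
        leniency = rng.normal(0.0, judge_sd, size=(n_judges, 1))
        ratings = rho * true_merit + np.sqrt(1 - rho**2) * noise + leniency
        judged = ratings.mean(axis=0)
        # Error: share of truly top-n_awards applications that the
        # judged ranking fails to select.
        true_top = set(np.argsort(true_merit)[-n_awards:])
        judged_top = set(np.argsort(judged)[-n_awards:])
        miss += len(true_top - judged_top) / n_awards
    return miss / n_trials

for rho in (0.3, 0.5, 0.8):
    print(f"rho={rho}: share of deserving awards missed = "
          f"{simulate_error_rate(rho=rho):.3f}")

Running the sketch shows the qualitative pattern the project aims to quantify: as inter-judge correlation rises, the gap between the true and judged distributions shrinks.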

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Type: Standard Grant (Standard)
Application #: 9711064
Program Officer: Rachelle D. Hollander
Budget Start: 1997-08-01
Budget End: 1998-01-31
Fiscal Year: 1997
Total Cost: $14,980
Name: New York Academy of Sciences
City: New York
State: NY
Country: United States
Zip Code: 10007