This project organizes a two-day workshop for survey methodologists and practitioners. The workshop will be held at Duke University in February 2011, and will consist of panels devoted to best practices for proposing, testing, and implementing changes to questionnaires used in longitudinal and repeated cross-sectional surveys. The workshop will feature some fifteen presenters, all of whom are being recruited on the basis of their leadership roles in major longitudinal and cross-sectional surveys and their publishing record in survey methodology topics relevant to the workshop. In total, roughly seventy researchers will participate in the workshop.

Recurring surveys face a unique set of challenges--most notably, the need to balance comparability and continuity over time against innovations in the questionnaire. The workshop is expressly designed to clarify appropriate standards for considering and implementing questionnaire changes in longitudinal studies, thereby informing the decision-making of study investigators. The conference is also intended to encourage a broader dialogue among investigators across major longitudinal studies, and across the academic, government, non-profit, and commercial sectors. Moreover, the workshop aims to enhance future social science research: the guidance on measurement decisions it offers investigators will also inform the substantive analyses of, and conclusions reached by, the user community.

The workshop makes substantial broader contributions. Large-scale recurring surveys are costly projects often made possible through taxpayer support, either through government agencies (e.g., the National Health Interview Survey, the Current Population Survey) or through NSF grant support for "infrastructure" recurring surveys--the American National Election Studies (ANES), the General Social Survey (GSS), and the Panel Study of Income Dynamics (PSID). These surveys serve large and diverse communities of scholars, policy-makers, and businesses, and it is essential that the data they provide be of top quality. Achieving and maintaining this standard is an ongoing challenge; this conference provides a valuable service by creating a forum in which to discuss the current state of the science in updating recurring surveys and to create guidelines for broad use by those implementing such survey protocols.

Project Report

Researchers often use surveys to understand how people view the world, their own situation, their communities, politics, marketing techniques, or any number of aspects of everyday life. Many of these surveys are repeated over time to gauge how these things change. Updating the questionnaires in such data collections is both essential and problematic. Survey technologies change. Respondent expectations change. Contemporary topics, understandings, and vocabularies change. Budgets shrink; the costs of securing responses seem always to rise. Maintaining the quality of the data gathered--their relevance, reliability, and validity--in a context where the intent is to generalize from a sample to the larger population demands that researchers adapt. Yet adapting without losing the ability to make meaningful comparisons across years is a daunting challenge.

The key rationale motivating the conference on Questionnaire Issues in Longitudinal and Repeated Cross-Sectional Surveys was the absence of a clear set of standards or guidelines for how investigators can best propose, evaluate, and implement questionnaire changes in recurring surveys. The conference took place on the campus of Duke University on February 18, 2011, and the program featured 18 of the most prominent experts on survey research. Over 200 scholars and practitioners from around the country attended the event. The reasons for change and the standards for assessing changes were discussed at length, and the insights led to a better understanding of the many factors affecting the meaning of survey responses, factors that extend far beyond changes to questionnaires. Experts who work on a variety of surveys discussed how they use experiments and field testing to implement and evaluate changes to their surveys, and recommended several experimental designs and analytical tools for the process. A broad recommendation for complete openness and transparency in the process of making survey changes emerged.
While no formal consensus yet exists regarding changes to longitudinal and repeated cross-sectional surveys, the conference provided a forum for experts to come together and share their own practices and ideas. The result is that the survey community is much better informed about the practices and procedures being used by experts in the field.

Agency
National Science Foundation (NSF)
Institute
Division of Social and Economic Sciences (SES)
Type
Standard Grant (Standard)
Application #
1110341
Program Officer
Brian D. Humes
Project Start
Project End
Budget Start
2011-01-01
Budget End
2011-12-31
Support Year
Fiscal Year
2011
Total Cost
$38,235
Indirect Cost
Name
Duke University
Department
Type
DUNS #
City
Durham
State
NC
Country
United States
Zip Code
27705