Collecting survey data of national importance (for example, on employment, health, and public opinion trends) is becoming more difficult as communication technologies undergo rapid and radical change. Important basic questions about whether and how to adapt data collection methods urgently need to be addressed. This project investigates how survey participation, completion, data quality, and respondent satisfaction are affected when respondents answer survey questions via mobile phones with multimedia capabilities (e.g., iPhones and other "app phones"), which allow alternative modes for answering (voice, text) and can allow respondents to answer questions in a different mode than the one in which they were invited. Two experiments will compare participation, completion, data quality, and satisfaction when the interviewing agent is a live human or a computer and when the medium of communication is voice or text, resulting in four modes: human-voice interviews, human-text interviews, automated-voice interviews, and automated-text interviews. The first experiment randomly assigns respondents to one of these modes; the second allows respondents to choose the mode in which they answer. Results will shed light on whether respondents using these devices are more willing to participate with, and answer differently to, human versus computer-based interviewing agents, and whether this differs for more and less sensitive questions. Results will also shed light on how the effort required to interact with a particular medium (e.g., more effort to enter text than to speak) affects respondents' behavior and experience, and whether the physical environment respondents are in (e.g., a noisy or non-private setting, or bright light and glare that make a screen hard to read) affects their mode choice and the quality of their data. Finally, the results will clarify how allowing respondents to choose their mode of response affects response rates and data quality.

These studies are designed to benefit researchers, survey respondents, and society more broadly. For researchers, the benefit is the ability to adapt to the mobile revolution as they collect data that are essential for the functioning of modern societies, maintaining high levels of contact and participation while gathering reliable and useful data. For survey respondents, the potential benefit is the design of systems that make it more convenient and pleasant to respond and that enable them to choose ways of responding appropriate to their interactive style, the subject matter, and their physical environment. For society more broadly, it is essential that the survey enterprise be able to continue to gather crucial information that is reliable and does not place undue burden on citizens as their use of communication technology changes and as alternate sources of digital data about people proliferate. More fundamentally, the results will add to basic understanding of how human communication is evolving as people gain the ability to communicate anytime, anywhere, and in a variety of ways. The project is supported by the Methodology, Measurement, and Statistics Program and a consortium of federal statistical agencies as part of a joint activity to support research on survey and statistical methodology.

Project Report

As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. The funding for the current project supported two experiments that examined data quality in text and voice interviews on smartphones (iPhones) administered by human and automated interviewers, resulting in four interview modes: Human Voice, Human Text, Automated Voice, and Automated Text.

In the first experiment, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered the voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews. Quality of answers was measured by (1) precision of numerical answers (how many were not rounded; rounded answers presumably involve less thoughtful responding), (2) differentiation of answers to multiple questions with the same response scale (giving the same answer to every question likely indicates a lack of thoughtfulness), and (3) disclosure of socially undesirable information (more embarrassing or compromising answers, which can be assumed to be more truthful). The results showed that text interviews led to higher quality data (more precise and differentiated answers and more disclosure of sensitive information) than voice interviews, with both human and automated interviewers. Independent of this, respondents also disclosed more sensitive information to automated (voice and text) interviewers. Text respondents reported a strong preference for future interviews by text.

The second experiment examined how choosing one's interviewing mode on a single device affects the quality of answers. An additional 626 iPhone users were contacted in the same four modes and required to choose their interview mode (which could be the contact mode). Overall, more than half the respondents chose to switch modes, most often into a text mode. The findings demonstrate that just being able to choose (whether switching or not) improved data quality: when respondents chose the interview mode, responses were more precise and differentiated than when the mode was assigned (Experiment 1), with no less disclosure. Those who began the interview in a mode they chose were more likely to complete it than respondents interviewed in an assigned mode; the only evident cost of mode choice was a small loss of invited participants at the point the choice was made (mostly in switches from automated to human interviews). Finally, participants who chose their interview mode were more satisfied with the experience than those who were interviewed in an assigned mode.

Together, the findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The results also suggest that allowing respondents to choose their interviewing mode on a single device can lead to improved data quality and increased satisfaction.
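To make the three data-quality indicators concrete, the following is a minimal Python sketch of how precision (unrounded answers), differentiation, and disclosure might be scored for a single respondent. It is illustrative only and not the project's analysis code; the rounding targets (multiples of 5 and 10), the particular differentiation index, and the toy data are assumptions made for demonstration.

```python
def share_unrounded(values, multiples=(5, 10)):
    """Fraction of numeric answers that are not multiples of common rounding
    targets (here 5 and 10); higher values suggest more precise reporting.
    The choice of rounding targets is illustrative, not from the study."""
    if not values:
        return 0.0
    rounded = sum(1 for v in values if any(v % m == 0 for m in multiples))
    return 1 - rounded / len(values)


def differentiation(answers):
    """Simple differentiation index for a battery of items sharing one response
    scale: distinct values used divided by number of items (1.0 = every answer
    different; low values suggest undifferentiated, straightlined answers)."""
    if not answers:
        return 0.0
    return len(set(answers)) / len(answers)


def disclosure_rate(flags):
    """Share of sensitive items on which the respondent gave the socially
    undesirable (presumably more truthful) answer."""
    if not flags:
        return 0.0
    return sum(flags) / len(flags)


# Toy respondent record with invented values, for illustration only.
numeric_answers = [12, 30, 47, 5, 60]        # open numeric questions
scale_battery = [3, 3, 4, 2, 5, 3]           # six items on a 1-5 scale
sensitive_flags = [True, False, True, True]  # disclosed undesirable answer?

print(share_unrounded(numeric_answers))  # 0.4  (2 of 5 answers unrounded)
print(differentiation(scale_battery))    # 0.666... (4 distinct values / 6 items)
print(disclosure_rate(sensitive_flags))  # 0.75
```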
Many respondents reported that responding via text is particularly convenient because they can continue with other activities while responding; convenience was respondents' most frequent explanation for why they chose the interviewing modes they did (whatever their choice).

Additional contributions of this funding included the development and use of new software systems and procedures for interviewing: (a) a multimodal (text vs. voice, human vs. automated) case management system that delivered questions and captured answers across the four interviewing modes, and that in some cases allowed respondents to choose their mode of interviewing irrespective of the mode in which they were invited; (b) an interviewing interface that allowed interviewers to conduct both text and voice interviews (displaying questions and entering answers) in the same system; (c) an automated speech dialog system for delivering audio-recorded questions and probes, and recognizing respondents' spoken answers; and (d) an automated text dialog system for delivering SMS questions and capturing texted responses.

The activities funded under this project are important because they explore plausible ways of adapting to increasing challenges to survey data collection (growing unwillingness to participate) that have come with people's changing communication practices. As people regularly communicate via different modes (e.g., voice, text, email, web browser, social media) on their smartphones, they are likely to expect to be able to respond to surveys through the modes of communication they routinely use. It is crucial for researchers who measure public opinion and behavior, particularly through surveys that form the basis for important policy decisions, to be able to design surveys that people find convenient to participate in and that elicit high quality answers.
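As a concrete illustration of component (d) above, the following is a minimal, hypothetical sketch of the kind of question-delivery and answer-capture loop an automated SMS interviewing system involves. It is not the project's software: the questions are invented, and send_sms and wait_for_reply are placeholders standing in for a real SMS gateway rather than any actual API.

```python
QUESTIONS = [
    "In the last 7 days, on how many days did you exercise for 20 minutes or more?",
    "On a scale from 1 (poor) to 5 (excellent), how would you rate your health?",
]


def send_sms(number: str, text: str) -> None:
    # Placeholder: a real system would hand this message to an SMS gateway.
    print(f"-> {number}: {text}")


def wait_for_reply(number: str) -> str:
    # Placeholder: a real system would poll or subscribe to an inbound queue.
    return input(f"<- {number}: ")


def run_text_interview(number: str) -> dict:
    """Deliver each question over SMS, capture the texted reply, and send one
    simple probe if the reply is empty. Case management (sample assignment,
    mode choice, scheduling) is assumed to happen elsewhere."""
    answers = {}
    for i, question in enumerate(QUESTIONS, start=1):
        send_sms(number, question)
        reply = wait_for_reply(number).strip()
        if not reply:
            send_sms(number, "Sorry, I didn't catch that. " + question)
            reply = wait_for_reply(number).strip()
        answers[f"Q{i}"] = reply
    send_sms(number, "Thank you. That completes the interview.")
    return answers


if __name__ == "__main__":
    print(run_text_interview("+1-555-0100"))
```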

Agency: National Science Foundation (NSF)
Institute: Division of Social and Economic Sciences (SES)
Application #: 1026225
Program Officer: Cheryl Eavey
Budget Start: 2010-10-01
Budget End: 2014-09-30
Fiscal Year: 2010
Total Cost: $705,410
Name: Regents of the University of Michigan - Ann Arbor
City: Ann Arbor
State: MI
Country: United States
Zip Code: 48109