This study of the function of deception in electronic communications is designed to develop new theories and tools that will significantly improve collaboration in virtual organizations of all types. Virtual organizations - aggregations of individuals, facilities and resources that span geographic and institutional boundaries - are having transformative effects on the ways in which people socialize and collaborate. They enable interaction between individuals with diverse perspectives who might not otherwise work together, the sharing of expensive and scarce resources, and novel ways of accomplishing tasks and solving problems. Despite the unique capabilities that virtual organizations provide to distributed groups, many have faced difficulties in working together at a distance. While virtual organizations provide basic communication tools (e.g., instant messaging or video conferencing), these tools lack support for the subtlety and nuance of initiating and exiting conversations. In particular, systems fail to support the narrative accounts that people use to explain their behavior and availability.

The result of this disconnect between social practice and technical implementation is a flood of unwanted interruptions and painstaking decisions about what personal information one is willing to share and with whom. This research addresses this fundamental problem by developing a narrative approach to interpersonal awareness, and by focusing on the role of deception in managing these narratives. Preliminary evidence suggests that one-fifth of all lies told in instant messaging are used to initiate or conclude a conversation. These lies represent potentially valuable "hotspots" that signal trouble in one's interpersonal awareness narrative. Focusing on these hotspots, this work addresses three issues: 1) How do people use deception to manage their interactions and avoid unwanted interruption? 2) Are there linguistic and sensor-based attributes that indicate deception may be likely? 3) Can this knowledge be used to design and evaluate tools for managing interpersonal awareness narratives and enabling interaction in virtual organizations?

This work builds on substantial research on interpersonal awareness and fostering informal interaction in geographically distributed groups. It makes several unique contributions through its focus on interpersonal awareness narratives: 1) systematically examining the conditions under which deception is used as a resource with existing awareness technologies, 2) conceptualizing deception as an indicator of a "hotspot" that can be drawn on in supporting interpersonal awareness, and 3) identifying behavioral predictors of deception to design systems that reduce unwanted interruptions without blocking those that are useful or important. By managing attentional and awareness needs more fluidly, members of virtual organizations will be able to coordinate their activity and accomplish their tasks more effectively, and the organizations as a whole will be better able to meet their business, educational, social, or other goals.

Project Report

A key challenge brought on by the ubiquity of communication technologies and Internet connectivity is that people often feel overwhelmed by interruptions and opportunities to interact with others. The overarching goal of this project was to improve our understanding of, and ability to design technologies to support, how people use features of modern communication technologies, such as text messaging, to manage their availability for interaction with others. One common technique for doing so is deception, which was a focal point of this project. The project had three specific goals: 1) develop a theoretical framework for understanding how people use deception to manage their attention and availability in text messaging and other forms of mediated communication; 2) understand how attention management strategies are reflected in the content of text messages and other computer-mediated communication; and 3) understand how various aspects of message or communication context influence attention management strategies and outcomes.

This project's most prominent contribution is the introduction and continuing investigation of the "Butler Lie", defined as a small, "white" lie that is used to start, avoid, end, or otherwise manage a conversation or social interaction. Several studies using instant messaging, text messaging, and BlackBerry Messenger investigated how often lies (butler or otherwise) are told in text messaging and other media. Using an innovative dyadic online survey technique, we were also the first to show how the same lies were perceived by both senders and receivers. Another study investigated how accurate people are at detecting lies in text messaging and how they decide which messages are deceptive. Using a custom, privacy-sensitive texting application developed as part of the project, in combination with a novel database-driven web survey, we also gathered contextual data about people's text messages (e.g., time, location) and their relationships with their communication partners. Having gathered over 20,000 messages from hundreds of participants around the United States, we then used machine learning and natural language processing techniques to automatically detect deception and availability management strategies in text messages and other forms of communication (a simple classifier of this kind is sketched below).

Key findings from these studies include:
- Deception is a common strategy for managing availability, occurring in about 30% of availability management messages compared with 10% of messages overall.
- People are not very accurate at detecting deceptive text messages; they are correct only about 20% of the time.
- Fewer lies are told in closer relationships.
- Most people tell very few lies, but some people are prolific liars.
- People tell more butler lies at night and around meal or social activity times.

Over the course of the project, several new technologies for collecting and displaying text messages and text messaging behavior were developed. Participants in one study used a custom-made phone app that collected their messages and allowed them to rate the deceptiveness of each message. Participants were then directed to a website, called the "Lie-Brary", that organized their messages based on where, when, and to whom they were sent. These technologies helped both researchers and participants better understand the causes and contexts that influence lying behavior.
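The report does not specify the exact features or models used for automatic detection, so the following is only a minimal illustrative sketch of how such a deception classifier might be set up, assuming a corpus of short messages that senders have labeled as deceptive or truthful. The toy messages, the labels, and the choice of word/bigram features with logistic regression are assumptions for illustration, not the project's actual pipeline.

# Minimal sketch: classifying short text messages as deceptive vs. truthful.
# Hypothetical toy data stands in for sender-labeled messages collected by
# a study app; the project's real data and feature set are not shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

messages = [
    "sorry, just saw your text",           # labeled deceptive (butler-lie-like opener)
    "gotta run, dinner is on the table",   # labeled deceptive (butler-lie-like exit)
    "my phone died last night",            # labeled deceptive
    "heading into a meeting, talk later",  # labeled deceptive
    "sounds good, see you at noon",        # labeled truthful
    "yes, the report is attached",         # labeled truthful
    "the meeting moved to room 210",       # labeled truthful
    "thanks, that answers my question",    # labeled truthful
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = deceptive, 0 = truthful

# Word and bigram TF-IDF features feeding a simple linear classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(messages, labels)

# Score a new message for its predicted probability of being deceptive.
print(model.predict_proba(["so sorry, just got your message"])[0][1])

In practice, a model of this kind would be trained on the sender-labeled messages gathered through the study app and evaluated with held-out data or cross-validation before any claims about detection performance were made.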

Agency
National Science Foundation (NSF)
Institute
Division of Information and Intelligent Systems (IIS)
Type
Standard Grant (Standard)
Application #
0915081
Program Officer
William Bainbridge
Project Start
Project End
Budget Start
2009-08-15
Budget End
2014-07-31
Support Year
Fiscal Year
2009
Total Cost
$500,484
Indirect Cost
Name
Cornell Univ - State: Awds Made Prior May 2010
Department
Type
DUNS #
City
Ithaca
State
NY
Country
United States
Zip Code
14850