This research project consists of controlled experiments that explore basic processes of human coordination via computer-mediated communication in time-critical situations. Partners will collaborate on a set of screen-based tasks, and the effects of shared eyegaze will be examined with and without a bidirectional voice channel and with and without other forms of pointing, such as a mouse cursor. The effects of these different communication modes will be compared using the grounding framework in order to discover the independent and combined contributions of speech, gaze, and other pointing methods to interpersonal coordination on a fine timescale (100 milliseconds to seconds). The effectiveness of different screen representations of eyegaze will also be compared across situations. Measures include performance, speed, choice of strategies, contingent behaviors, distribution of effort and initiative between partners, and learning. Tasks include searching together to locate a target (as police officers searching for a sniper or a team of ornithologists searching for a bird among trees), tracking moving targets (as security personnel tracking a suspect in a crowd within an auditorium or a pair of marine biologists tracking a swimming animal), and establishing consensus using referential communication (as doctors referring to ambiguous or hard-to-describe patterns in medical images or programmers helping each other debug software).
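
The abstract does not specify how fine-timescale coordination will be operationalized. Purely as an illustrative sketch, and not the project's actual analysis pipeline, the code below shows one way such a measure could be computed: given two time-aligned gaze streams, it bins samples at 100 ms and estimates how often the partners look at roughly the same screen location. The data format, bin size, and distance threshold are all assumptions.

```python
# Illustrative sketch only (not the project's analysis code): quantify
# fine-timescale gaze coordination between two partners. Assumes each gaze
# stream is a list of (timestamp_ms, x, y) samples on a common clock.

from math import hypot

def bin_gaze(samples, bin_ms=100):
    """Average gaze position within consecutive bins of bin_ms milliseconds."""
    bins = {}
    for t, x, y in samples:
        bins.setdefault(int(t // bin_ms), []).append((x, y))
    return {k: (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
            for k, pts in bins.items()}

def mutual_gaze_proportion(gaze_a, gaze_b, bin_ms=100, radius_px=50):
    """Fraction of shared time bins in which the two partners fixate within
    radius_px of the same screen location (a rough proxy for mutual gaze)."""
    a, b = bin_gaze(gaze_a, bin_ms), bin_gaze(gaze_b, bin_ms)
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    together = sum(1 for k in shared
                   if hypot(a[k][0] - b[k][0], a[k][1] - b[k][1]) <= radius_px)
    return together / len(shared)
```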

Much of what is known about how people accomplish searching, monitoring, and problem-solving tasks comes from studies of people acting alone. In reality, however, people often collaborate on such tasks and need to coordinate their behavior with others, and increasingly that collaboration takes place between two people at a distance, electronically mediated. In this project, a psychologist who studies visual attention, a psycholinguist who studies communication, a computer scientist who studies graphics and object recognition, and their students examine the use of shared eyegaze: each partner's ability to see, moment by moment, where the other is looking. The investigators have developed the first system in which the output from two lightweight head-mounted eyetrackers is not only synchronized with speech for analysis as experimental data but is also transmitted as a continuous stream of interpersonal cues for communication. The eyegaze of each partner is displayed on the screen of the other as a gaze cursor, so that each can tell where the other is looking, as well as whether they are both fixating the same object (mutual gaze). This project builds on the investigators' previous work on how awareness of where a partner is looking improves performance in collaborative tasks.
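
The investigators' system is custom hardware and software, and the abstract gives no implementation details. As a hedged sketch of the general idea of a shared gaze cursor only, the following streams each new local gaze sample to the partner's machine and keeps the partner's latest sample ready to be drawn as a cursor; the UDP transport, host address, port, and the simulated read_local_gaze() source are all hypothetical stand-ins.

```python
# Minimal sketch of the shared-gaze-cursor idea (not the investigators' system).
# Each participant's machine streams its own gaze samples to the partner and
# remembers the partner's most recent sample so the GUI can draw it as a cursor.
# PARTNER_HOST, PORT, and read_local_gaze() are placeholder assumptions.

import json
import random
import socket
import threading
import time

PARTNER_HOST = "192.168.0.2"   # hypothetical address of the partner's machine
PORT = 9999

partner_gaze = {"t": None, "x": None, "y": None}  # latest sample from partner

def read_local_gaze():
    """Stand-in for the local eyetracker: returns random screen coordinates
    at roughly 60 Hz so the sketch runs without hardware."""
    time.sleep(1 / 60)
    return time.monotonic() * 1000, random.uniform(0, 1920), random.uniform(0, 1080)

def send_loop(sock):
    """Stream every local gaze sample to the partner as a small JSON datagram."""
    while True:
        t, x, y = read_local_gaze()
        sock.sendto(json.dumps({"t": t, "x": x, "y": y}).encode(),
                    (PARTNER_HOST, PORT))

def receive_loop(sock):
    """Keep partner_gaze updated with the most recent sample received."""
    while True:
        data, _ = sock.recvfrom(1024)
        partner_gaze.update(json.loads(data))

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    threading.Thread(target=receive_loop, args=(sock,), daemon=True).start()
    send_loop(sock)  # a display loop would render partner_gaze each frame
```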

Understanding human coordination on a fine-grained scale is vital for supporting collaboration among people located at a distance. Spoken communication by itself can create bottlenecks for some time-critical applications; therefore, the ability of collaborators to share visual context and precise representations of what they are attending to is expected to be helpful in time-critical tasks.

Eyetracking is a promising technology for supporting collaboration. Since its inception as a tool for research, the technology has become more precise, less cumbersome, cheaper, and easier to use. With further advances, eyetracking could someday join the ranks of ubiquitous input devices like the mouse or touch screen. This research will provide scientific foundations for the use of eyetracking in computer-mediated collaboration between remotely located people.

This project will provide information on how best to deploy (and, just as important, when not to deploy) shared gaze technology in mediated communication. A better understanding of the instrumental and intentional ways in which people collaborating on tasks gaze at objects may also benefit those who cannot speak or move their hands and who might prefer to use eyetracking as a means of communicating with people or computers.

The project affords many opportunities for interdisciplinary research and training of graduate and undergraduate students at Stony Brook, home to a racially, ethnically, and economically diverse population of talented students. This training will integrate visual attention, social dynamics, language processing, and computer graphics, topics that are often pursued independently of one another.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0527585
Program Officer: William Bainbridge
Project Start:
Project End:
Budget Start: 2005-09-01
Budget End: 2010-08-31
Support Year:
Fiscal Year: 2005
Total Cost: $742,006
Indirect Cost:
Name: State University New York Stony Brook
Department:
Type:
DUNS #:
City: Stony Brook
State: NY
Country: United States
Zip Code: 11794