Multimodal human-computer interaction has emerged as a major research topic during the past decade. This trend can be expected to continue as designers struggle to address challenges such as data overload, the need for greater expressive power in modern technologies, and support for time sharing and attention management in a variety of complex real-world domains. To ensure the robustness of multimodal interface designs, it will be critical to consider recent findings from behavioral and neurophysiological research, which suggest considerable crossmodal constraints on attention in the form of modality expectations, modality shifting, and spatial and temporal crossmodal links. Although these constraints can lead to performance breakdowns in human-computer interaction, with potentially catastrophic consequences, they have received little attention in guidelines for, and the design of, multimodal interfaces. To address this shortcoming, the PI's main goals in this project are: (a) to determine whether the performance effects of crossmodal links in attention reported in laboratory research scale up to complex real-world domains; (b) to determine whether tactile and peripheral visual cues, which were not considered in most earlier research on multimodal information processing but are increasingly included in multimodal interface designs, are affected by crossmodal constraints to the same extent as foveal visual and auditory cues; and (c) to establish the effectiveness of an adaptive approach to multimodal information presentation for exploiting the performance benefits, and eliminating the performance costs, of crossmodal interactions.

The application domain for this research will be Air Traffic Control (ATC), a complex, data-rich environment in which difficulties with processing multimodal information contribute to workload bottlenecks, breakdowns in situation awareness, and operational errors. The number of concurrent attentional demands in this domain continues to increase with the introduction of advanced automation technologies. Faced with these challenges, air traffic managers at the Cleveland En-Route Traffic Control Center have expressed strong interest in collaborating with the PI on this research: they will support a contextual inquiry and focus groups with controllers, and will allow controllers to participate in experiments. These activities will inform the design and iterative refinement of a multimodal ATC interface that includes visual, auditory, and tactile cues and in which parameters such as the timing, salience, and location of cues can be adjusted dynamically to exploit the benefits, and avoid the performance costs, of crossmodal links in attention. The investigation will first take place within a medium-fidelity ATC simulation in the PI's laboratory and will later move to the dynamic ATC simulation facility at Cleveland Center.
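To make the adaptive-presentation concept concrete, the following minimal Python sketch illustrates one hypothetical way cue timing, salience, and location might be adjusted dynamically in response to channel load and recent cue history; the proposal does not specify an algorithm, and all class names, thresholds, and rules below are illustrative assumptions rather than part of the proposed design.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Dict, Optional

    class Modality(Enum):
        FOVEAL_VISUAL = auto()
        PERIPHERAL_VISUAL = auto()
        AUDITORY = auto()
        TACTILE = auto()

    @dataclass
    class Cue:
        modality: Modality
        onset_s: float    # when the cue is presented
        salience: float   # 0.0 (subtle) to 1.0 (highly salient)
        location: str     # display region or body site for the cue

    @dataclass
    class AdaptiveCueScheduler:
        # Hypothetical time budgets, stand-ins for empirically derived values.
        shift_penalty_s: float = 0.3   # allowance for modality-shifting cost
        min_gap_s: float = 0.5         # minimum temporal separation of cues
        last_modality: Optional[Modality] = None
        last_onset_s: float = float("-inf")

        def schedule(self, now_s: float, event_location: str,
                     channel_load: Dict[Modality, float]) -> Cue:
            """Pick modality, timing, and salience for the next cue, given
            per-channel utilization estimates in the range 0..1."""
            # Route the cue to the least-loaded channel to counter data overload.
            modality = min(channel_load, key=channel_load.get)
            # Keep successive cues temporally separated (temporal crossmodal links).
            onset = max(now_s, self.last_onset_s + self.min_gap_s)
            # Budget extra time when attention must shift between modalities.
            if self.last_modality is not None and modality is not self.last_modality:
                onset += self.shift_penalty_s
            # Raise salience in busier channels so the cue still breaks through.
            salience = min(1.0, 0.4 + 0.6 * channel_load[modality])
            self.last_modality, self.last_onset_s = modality, onset
            # Co-locate the cue with the triggering event (spatial crossmodal links).
            return Cue(modality, onset, salience, event_location)

    # Example: a tactile cue is chosen when visual and auditory channels are busy.
    scheduler = AdaptiveCueScheduler()
    cue = scheduler.schedule(
        now_s=12.0,
        event_location="sector 14, northeast quadrant",
        channel_load={Modality.FOVEAL_VISUAL: 0.9, Modality.AUDITORY: 0.6,
                      Modality.PERIPHERAL_VISUAL: 0.4, Modality.TACTILE: 0.1},
    )
    print(cue)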
Broader Impacts: The findings from this research will advance our knowledge and understanding of multimodal information processing and, more specifically, the nature and extent of crossmodal links between vision, audition, and touch. The results will also inform the design of future multimodal displays for data-rich, event-driven domains such as aviation, medicine, and process control, and thus promote the safety of operations in these workplaces and benefit society through the transfer and application of basic research to applied system design.