Future unmanned vehicle (UV) systems will be deployed for homeland security missions, including Chemical, Biological, Radiological, Nuclear, and Explosive (CBRNE) event response. UV systems that incorporate large numbers of ground and aerial UVs [67] (large, mixed-type UV systems) are envisioned. This project will develop visualization techniques that provide a scalable interface incorporating integrated and easily understood information, thus permitting supervision of large, mixed-type UV systems. The project addresses three Human-Robot Interaction (HRI) challenge areas: the development of a data abstraction framework; the development of scalable interface visualization techniques; and the development of visualization transition techniques. During the first four years, the PI will develop and evaluate the data abstractions, visualizations, and transition techniques; the fifth year focuses on CBRNE field evaluations. A new module for the PI's Complex Human-Machine Interaction course and a new Introduction to Robotics course will be developed, and each summer an undergraduate student and a high school teacher will join the research team.

The intellectual merits of this project include: formulation of a data abstraction framework for providing scalable, integrated visualizations; creation and evaluation of visualization and visualization transition techniques; and validation of all hypotheses via quantitative and qualitative usability evaluations with simulated and real UVs. The broader impacts include: the development of visualization techniques for large, mixed-type UV systems that allow emergency responders to quickly assess a situation while reducing their exposure to contaminants; the development of HRI techniques that will shape UV system development for homeland security, increasing personnel capabilities, reducing exposure to dangerous situations, and extending coverage of difficult terrain; visualizations that may influence interface design for complex system domains such as air traffic control and nuclear process monitoring; and the inclusion of high school teachers in summer research, which encourages them to incorporate research examples into their courses and may increase student interest in engineering.
The research objective was to develop methods for humans to interact with large numbers of robots via a scalable interface that incorporates information from all of the robots, so that humans can understand what the robots are doing and provide guidance. Monitoring and directing robots that operate beyond the humans' line of sight limits understanding of the information the robots gather, particularly when that information is displayed on a desktop computer, tablet, or smartphone. The research domain was first responders (e.g., fire and police personnel) responding to chemical, biological, radiological, nuclear, or explosive (CBRNE) events (e.g., the 9/11 terrorist attacks). The response structure incorporates a hierarchy of responders with differing responsibilities: the incident commander oversees the entire response and provides direction to lower-level responders (e.g., police cordoning off streets).

A framework was developed to represent the large amounts of data and information provided by the robots in a format appropriate for the different users (e.g., the incident commander, fire personnel). The Cognitive Information Flow Analysis combines results from existing analysis techniques into a representation of how information is passed from the lowest-level users to the highest level, and how information is combined and changed by users at the different levels. This analysis is crucial when robots are tasked with providing the same information as human responders. Eight response tasks incorporating robots were identified.

The novel General Visualization and Abstraction algorithm dynamically minimizes visual screen clutter by reducing the sizes of icons representing information, based on the information's age or its relevance to current tasks (a sketch of this scaling-and-grouping heuristic appears below). The algorithm also groups icons representing the same type of information (e.g., victims with similar injuries), further minimizing clutter. The algorithm supports multiple users by displaying the current user's information according to the algorithm and other users' information in the reduced visualization, thus making the most task-relevant information salient. The results demonstrate that the algorithm lowers users' cognitive demands and improves overall situation awareness. One limitation is that the grouping is specific to information type, representing only one element of a semantic context; another is that the grouping does not consider temporal and spatial contexts. Finally, the algorithm does not reduce visual clutter sufficiently for hand-held devices.

The novel Feature Sets visualization method reduces visual clutter based on geospatial, temporal, and semantic contexts (see the second sketch below). Feature Sets combine related information into a single chronologically ordered component, while providing access to the information's details. Feature Sets present dynamic changes more readily than Points of Interest (e.g., location pins on Google Maps), allowing quick identification of new information. The results demonstrate that Feature Sets provide better task performance than Points of Interest at high information densities and allow faster identification of dynamically updated information.

The initial desktop interface allows task specification, interaction with the robots, and access to robot-provided information; the associated tablet interface supports basic voice interaction. The current interface incorporates hardware- and operating-system-independent components usable across desktops, tablets, and smartphones.
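The report describes the General Visualization and Abstraction algorithm's behavior but not its exact scoring rules, so the following is a minimal sketch of the two ideas it names, assuming an exponential age decay on icon size and a greedy same-type grouping pass. All class names, parameters, and thresholds are illustrative assumptions, not the published algorithm:

```python
import math
from dataclasses import dataclass

@dataclass
class InfoIcon:
    info_type: str         # semantic type, e.g., "victim" or "chemical hazard"
    x: float               # map position (meters)
    y: float
    created_at: float      # arrival time of the information (seconds)
    task_relevance: float  # 0.0 (irrelevant) to 1.0 (central to current task)

def icon_scale(icon: InfoIcon, now: float,
               half_life: float = 300.0, min_scale: float = 0.25) -> float:
    """Shrink an icon as its information ages; task relevance resists decay."""
    age = now - icon.created_at
    decay = 0.5 ** (age / half_life)  # size halves every `half_life` seconds
    return max(min_scale, decay, icon.task_relevance)

def group_same_type(icons: list[InfoIcon], radius: float = 25.0):
    """Greedily merge icons of the same information type within `radius`
    meters, so one group icon replaces many individual ones."""
    groups: list[list[InfoIcon]] = []
    for icon in icons:
        for group in groups:
            anchor = group[0]
            if (anchor.info_type == icon.info_type and
                    math.hypot(anchor.x - icon.x, anchor.y - icon.y) <= radius):
                group.append(icon)
                break
        else:
            groups.append([icon])
    return groups
```

Under this formulation, an old but task-critical icon stays full size because relevance bounds the decay from below, which matches the report's claim that the most task-relevant information remains salient.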
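Similarly, a Feature Set can be thought of as a container that admits related items across the three contexts (semantic type, proximity in space, recency in time) and keeps them in chronological order. The membership thresholds and field names below are assumed for illustration rather than taken from the published method:

```python
from dataclasses import dataclass, field

@dataclass
class InfoItem:
    info_type: str    # semantic context, e.g., "victim report"
    x: float          # geospatial context (map meters)
    y: float
    timestamp: float  # temporal context (seconds)
    detail: str       # full detail, available on demand in the interface

@dataclass
class FeatureSet:
    items: list = field(default_factory=list)  # kept in chronological order

    def accepts(self, item: InfoItem,
                radius: float = 30.0, window: float = 600.0) -> bool:
        """Admit an item only if it matches the set's semantic type and is
        near the newest member in both space and time."""
        if not self.items:
            return True
        newest = self.items[-1]
        return (newest.info_type == item.info_type
                and abs(newest.x - item.x) <= radius
                and abs(newest.y - item.y) <= radius
                and item.timestamp - newest.timestamp <= window)

def build_feature_sets(items: list[InfoItem]) -> list[FeatureSet]:
    """Fold a stream of items into feature sets, one map component each."""
    sets: list[FeatureSet] = []
    for item in sorted(items, key=lambda i: i.timestamp):
        for fs in sets:
            if fs.accepts(item):
                fs.items.append(item)  # stays chronological; input is sorted
                break
        else:
            sets.append(FeatureSet(items=[item]))
    return sets
```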
Supporting hand-held devices requires effective interaction with large amounts of information on smaller displays, such as specifying and selecting information on the device. LocalSwipes, a multi-touch interaction method for hand-held devices, provides standard widgets (e.g., buttons, drop-down boxes) without requiring the widgets to be oversized for direct touch interaction. LocalSwipes performs better than direct touch when specifying robot tasks, whether the user is seated or walking. A study in which users walked continuously while specifying robot tasks with LocalSwipes, receiving the tasking information via a visual display, an auditory presentation, or combined visual and auditory modalities, found worse performance with the auditory-only presentation; walking speed, however, was unaffected by the task specification modality.

Hand-held devices rely on direct touch interaction, but small target selections are difficult when walking, and larger targets require more screen space and limit the amount of displayable information. Cursor selection methods instead combine an on-screen cursor with movement of the cursor or of the background display to place the cursor for a target selection (a sketch of one such method appears below). Compared to direct touch, the cursor selection methods are significantly more accurate when selecting 1 mm, 2 mm, and 4 mm targets.

The research results are applicable to robotic systems deployed in other domains; to displaying and managing very large amounts of information on different-sized computing devices; and to applications requiring reduced visual clutter and interaction on hand-held devices. The outreach activities involved the general public, primarily pre-K through undergraduate students, and directly involved seven undergraduate students and two high school students in research. A newly developed Introduction to Mobile Robotics course prepares students to be competitive for industrial robotics careers.
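The report does not detail the evaluated cursor placement mechanics, so the sketch below shows one plausible variant under stated assumptions: drag gestures move a crosshair cursor with a damping gain, and a tap commits selection of the nearest target within a tolerance. The class, the parameters (e.g., `px_per_mm`, `gain`), and the tolerance are hypothetical:

```python
import math

class CursorSelector:
    """One plausible cursor selection scheme: drags anywhere on the screen
    move a crosshair instead of selecting directly, and a tap selects the
    target nearest the crosshair. The finger never has to land on the
    target itself, so targets need not be enlarged for direct touch."""

    def __init__(self, targets, tolerance_mm: float = 3.0,
                 px_per_mm: float = 6.3):
        self.targets = targets  # list of (x_px, y_px, target_id) tuples
        self.tolerance_px = tolerance_mm * px_per_mm
        self.cursor = (0.0, 0.0)

    def on_drag(self, dx_px: float, dy_px: float, gain: float = 0.5) -> None:
        """Move the cursor by a damped fraction of the finger's motion;
        a gain below 1.0 trades speed for precision while walking."""
        x, y = self.cursor
        self.cursor = (x + gain * dx_px, y + gain * dy_px)

    def on_tap(self):
        """Return the id of the target closest to the cursor if it lies
        within the selection tolerance; otherwise return None."""
        cx, cy = self.cursor
        best = min(self.targets,
                   key=lambda t: math.hypot(t[0] - cx, t[1] - cy),
                   default=None)
        if best is not None and \
                math.hypot(best[0] - cx, best[1] - cy) <= self.tolerance_px:
            return best[2]
        return None
```

The damping gain is one way such a method could decouple selection precision from finger jitter while walking, which is consistent with the reported accuracy advantage on 1 mm to 4 mm targets.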