Wilderness search and rescue (WiSAR) is the task of finding and giving assistance to people who are lost or injured in mountain, desert, lake, river, or other remote settings. Rapid coverage of large search areas and difficult terrain is critical; as the search radius increases, the probability of finding and successfully aiding the missing person decreases. Prior NSF-sponsored research established the hypothesis that mini (2-8 foot wingspan), fixed-wing Unmanned Aerial Vehicles (UAVs) equipped with video cameras can support WiSAR personnel. In this project the PIs plan to extend that work by addressing key human-factors and technology obstacles that must be overcome to make such support practical and efficient. Through enhanced WiSAR-oriented UAV operator interfaces and visualization, the PIs will improve both the UAV's coverage of the search area when it is operated by people without piloting skills and the searcher's detection of the missing person or other signs in the video. The design of these WiSAR systems will integrate the PIs' expertise in human-robot interaction, computer vision, visualization, and artificial intelligence. The project adopts a human-centered evaluation approach that uses both laboratory and field tests. Innovative aspects of the work include the integration of video mapping, missing-person modeling, and human interaction to create prioritized search maps that are dynamically updated as information is acquired and then used for search planning, which requires real-time planning algorithms that can adapt to uncertainty and new information. The prioritized search maps and resulting plans can be used directly by WiSAR personnel, integrated into the UAV operator's interface, or used to let WiSAR personnel outline a plan and grant the UAV sufficient autonomy to optimize coverage within that outline. The PIs will also integrate multiple video sources (including infrared imaging), anomaly detection and tracking, and geo-registered video annotation in order to further assist WiSAR personnel in detecting, identifying, and communicating information about objects of potential interest in the video sources. These capabilities will be presented to WiSAR personnel through interfaces designed for their separate roles: UAV operator, video searcher, incident commander, and field searcher, as well as integrated interfaces for individuals acting in multiple roles simultaneously. Finally, the PIs will extend this work to support video-equipped manned aircraft and ground search teams.
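To make the idea of a dynamically updated, prioritized search map concrete, the following sketch shows one common way such an update could work: represent the search area as a grid of cells holding the probability that the missing person is in each cell, and apply a Bayesian update after a cell is searched without success. This is a minimal illustration under assumed representations (the grid, the `update_after_unsuccessful_search` function, and the detection probability `p_detect` are all illustrative), not the project's actual implementation.

```python
import numpy as np

def update_after_unsuccessful_search(prob_map, cell, p_detect):
    """Bayesian update of a prioritized search map (illustrative sketch).

    prob_map : 2-D array of prior probabilities that the missing person
               is in each grid cell (entries sum to 1).
    cell     : (row, col) index of the cell just searched without success.
    p_detect : probability the search would have spotted the person had
               they been in that cell (depends on sensor, terrain, cover).
    """
    prior = prob_map[cell]
    # Probability of observing "not found" given the prior map.
    p_not_found = 1.0 - prior * p_detect
    posterior = prob_map.copy()
    # The searched cell becomes less likely; everything is renormalized.
    posterior[cell] = prior * (1.0 - p_detect)
    return posterior / p_not_found

# Example: start from a uniform 4x4 prior, then search cell (1, 2) with a
# 70% chance of detection and find nothing.
prior = np.full((4, 4), 1.0 / 16)
posterior = update_after_unsuccessful_search(prior, (1, 2), 0.7)
print(round(posterior.sum(), 6))        # still 1.0
print(posterior[1, 2] < prior[1, 2])    # searched cell is now less likely: True
```

Under this kind of update, probability mass flows away from well-searched cells toward unsearched ones, which is what allows a prioritized map to be reprioritized as search information arrives.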
Broader Impacts: Each year, many people are lost or find themselves in jeopardy while hiking, boating/kayaking, skiing, fishing, etc. WiSAR consumes thousands of man-hours and hundreds of thousands of dollars each year in Utah alone. Creating appropriate algorithms and user interfaces to support planning, visualization, and UAV control should decrease the time required to locate and offer assistance to missing persons, increasing the likelihood of successful rescue. The PIs plan to include in this project undergraduate students from nearby Utah Valley State College (UVSC), a school that does not yet have a graduate program and consequently offers few opportunities for socially relevant, interdisciplinary undergraduate research.
As unmanned aerial vehicles (UAVs) become smaller, more portable, and less expensive, they are beginning to be used for a variety of purposes. This project has focused on ways to use camera-equipped UAVs to assist personnel conducting search-and-rescue operations in remote wilderness areas. A basic UAV deployment can be as simple as the plane, an onboard video camera, a video transmitter to the ground, a receiving antenna and display, and two-way command/control links. But simply being able to fly the plane and watch the transmitted video doesn't always make for effective searching. Through field trials over eight years (seven of which were funded by the NSF), this line of work has identified problems users have in working efficiently with these systems and has sought to develop more usable systems. The human operators, not the technology, are at the center of the project. The project has made contributions in three main areas: (1) enhanced imagery for video-based searchers, (2) more efficient and intuitive control of the UAV and its search paths, and (3) data-driven and probabilistic modeling of terrain, missing-person behavior, and searchers.

Our methods for video analysis and enhancement are designed to make it easier for video-based searchers to identify the missing person or other items of interest. These methods include dynamically stitching together multiple video frames to create views with a larger field of view and more persistent viewing, automatically and adaptively identifying objects that look "out of place" in the setting (a simple illustration of this idea appears below), fusing visible-spectrum and infrared video, providing geographically based indexing into the search video, and aligning and synchronizing low-resolution video with higher-resolution still photographs.

To allow UAV-search operators to control the aircraft more easily and effectively, we have developed more intuitive piloting interfaces, including showing the UAV's position and planned path in 3-D using terrain maps and pre-acquired satellite imagery. To allow operators to better assess the quality and coverage of a search flight or series of flights, we have also developed coverage quality maps as a way to see not only what was seen but how well it was seen.

To further assist planning for UAV-based search deployment, we have also developed models of missing-person behavior. These models incorporate properties of the terrain and vegetation, input from the incident commander or other experienced searchers, and data on how missing persons move and where they are eventually found, in order to prioritize areas for searching. The resulting prioritized maps can then be combined with optimization methods to suggest search paths for the UAV to fly. Users can also give loose directions such as "fly over here for this long," and the system automatically guides the UAV through an optimized search of that area within the time allotted (a simplified sketch of this kind of time-budgeted planning appears below).

The results of this work have been disseminated in peer-reviewed academic papers, presentations at academic conferences, presentations at conferences for search-and-rescue personnel, presentations at universities and to other groups, and articles in online, print, and broadcast media. The work has involved collaborators from Utah County Search and Rescue as well as colleagues from Utah Valley University and George Mason University. We are in the process of preparing our various systems for open-source distribution to the growing UAV hobbyist community. Our hope has always been that someday these ideas will find their way into systems that help searchers save lives.
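As an illustration of what "out of place" detection might look like in its simplest form, the sketch below scores each pixel by how far its color lies from the frame's overall color distribution (a Mahalanobis distance). This is only a generic, assumed approach given for illustration; the project's adaptive anomaly detection methods are not specified in detail here and may work quite differently.

```python
import numpy as np

def anomaly_scores(frame):
    """Score each pixel by how "out of place" its color is (illustrative).

    frame : H x W x 3 array of RGB values in [0, 1].
    Returns an H x W array of Mahalanobis distances from the frame's
    overall color distribution; high values suggest unusual objects.
    """
    pixels = frame.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularized
    inv_cov = np.linalg.inv(cov)
    diff = pixels - mean
    # Quadratic form diff^T * inv_cov * diff for every pixel at once.
    dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))
    return dist.reshape(frame.shape[:2])

# Example: mostly green terrain with a small bright red patch.
rng = np.random.default_rng(0)
frame = np.zeros((64, 64, 3))
frame[..., 1] = 0.6 + 0.05 * rng.random((64, 64))   # green background
frame[30:34, 40:44] = [0.9, 0.1, 0.1]               # small "out of place" object
scores = anomaly_scores(frame)
print(scores[31, 41], scores.mean())  # the patch scores far above the background average
```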
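The following sketch illustrates the "fly over here for this long" idea with a simple greedy planner: given a prioritized probability map and a time budget, it repeatedly flies to the reachable cell that offers the most probability per unit of travel time. The grid model, the travel-time model, and the greedy strategy are assumptions made for illustration; the project's planners could use different representations and optimization methods.

```python
import numpy as np

def greedy_search_path(prob_map, start, speed, time_budget, cell_size=1.0):
    """Greedy sketch of time-budgeted search planning (illustrative).

    Repeatedly flies to the reachable, not-yet-searched cell that offers
    the highest remaining probability per unit of travel time, until the
    operator's time budget is used up.
    """
    remaining = prob_map.astype(float).copy()
    pos = np.array(start, dtype=float)
    path = [tuple(start)]
    time_left = float(time_budget)
    remaining[start] = 0.0           # the starting cell counts as searched

    while True:
        cells = np.argwhere(remaining > 0)
        if cells.size == 0:
            break                    # everything has been covered
        # Straight-line travel time from the current position to each cell.
        times = np.linalg.norm(cells - pos, axis=1) * cell_size / speed
        reachable = times <= time_left
        if not reachable.any():
            break                    # nothing else fits in the budget
        # Probability gained per unit travel time, restricted to reachable cells.
        scores = remaining[cells[:, 0], cells[:, 1]] / np.maximum(times, 1e-9)
        scores[~reachable] = -np.inf
        best = int(np.argmax(scores))
        time_left -= times[best]
        pos = cells[best].astype(float)
        path.append((int(cells[best][0]), int(cells[best][1])))
        remaining[tuple(cells[best])] = 0.0
    return path

# Example: plan a short flight over a small prioritized probability map.
prior = np.array([[0.05, 0.10, 0.05],
                  [0.10, 0.30, 0.10],
                  [0.05, 0.20, 0.05]])
print(greedy_search_path(prior, start=(0, 0), speed=1.0, time_budget=6.0))
```

A real planner would also need to account for the camera footprint, turning constraints, and re-searching cells with imperfect detection, but the sketch captures the core tradeoff between probability gained and time spent.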