This project will study algorithms for finding and following trails, both in the context of autonomous mobile robots and assistive devices that may be mounted on vehicles or carried by people. Paths along the ground are ubiquitous features of man-made and natural outdoor environments, "showing the way" to those who can recognize them and "smoothing the way" to ease passage. These two functions place each path along a spectrum of distinctiveness and traversability, which bear, respectively, on the difficulty of the perceptual and control tasks that following it poses. Trails occupy the more tenuous ends of both axes, comprising dirt and other unimproved roads as well as true hiking trails. A unique characteristic exhibited by some trails is discontinuity, in which visual markers such as cairns, blazes, footprints, and other "tells" indicate a sequence of waypoints. The research will focus on computer vision and robotics problems stemming from three core trail-following tasks: (1) keeping, or discriminating and staying on continuous and discontinuous trails; (2) negotiation, or avoiding within-trail obstacles and setting control policies appropriate to changing terrain conditions; and (3) finding trails and mapping unknown trail networks, including detecting branches, dead-ends, and discontinuities. Using stereo color cameras, GPS, static aerial imagery, and topographical data, the PI will investigate (1) texture-based methods for robust segmentation that incorporate rich models of natural image statistics, (2) on-line visual tracking and activity analysis of other mobile agents for efficiently learning control policies, and (3) integration of directed search, recognition, and footprint structure estimation algorithms for discontinuous trails. The benefits of robust trail-following skills will extend to wheeled, walking, and low-and-slow-flying robots, with applications including resupply of difficult-to-reach camps and research stations, inspection and maintenance of trails, and patrolling and reconnaissance operations as part of a border security or military force. Assistive applications include augmenting driver awareness on dangerous roads, guiding visually impaired hikers, and supporting wildlife study through animal tracking and search-and-rescue efforts through person tracking. Educational impacts will derive from extensive involvement in this work by students from the graduate level down through high school. The PI will start an undergraduate team to compete in national robot competitions in order to encourage participation in vision and robotics research, run a program of summer internships for high school and undergraduate students to help program and test aspects of the trail-following system, and introduce a novel web-based system to allow a wider group of students to contribute to the research through image segmentation and video annotation that provide data for robot learning.
URL: http://vision.cis.udel.edu/trails
This project's overarching goal has been to visually find "trails": linear, navigationally useful features, whether man-made or natural, such as hiking paths, roads, and rivers. It began with the development of an algorithm that searches images top-down, hypothesizing feasible trail shapes in image space and ranking them by a likelihood function based on color region contrast, with no a priori information about the trail's material or appearance. The basic approach was validated by robust performance on a wide variety of trail scenes using a dataset of monocular, uncalibrated color images. The algorithm was then optimized for speed and extended to successfully track trails over uncalibrated image sequences from both ground and low-flying aerial vehicles. To demonstrate the utility of the approach for autonomous robot navigation, an omnidirectional camera was mounted on a Segway RMP 400 platform called "Warthog," and after intrinsic and extrinsic camera calibration the trail hypothesis space was converted to vehicle coordinates. This enabled the estimated trail parameters to be used for motion planning and real-time control of the robot, leading to successful "trail following."

One of the broader impacts of the project involved entering Warthog in a student-centered international contest, the Intelligent Ground Vehicle Competition (IGVC). A major part of IGVC was a path-following challenge in which a winding course was laid out on grass with painted parallel lines, and obstacles such as barrels and sawhorses were scattered along the course inside the path itself. We generalized our trail-following algorithm by (a) modifying the trail likelihood function to use edges instead of regions, and (b) augmenting the motion planner to avoid ladar-detected obstacles within the trail region. Over our five years at IGVC we were very successful at this challenge and won it several times.

Lessons learned at IGVC proved useful for following hiking trails. However, in our testing we sometimes encountered areas where the trail's color contrast was weak. This led to a new phase in which a variety of additional trail likelihood functions were developed and integrated with color contrast using linear weighting: one was based on the ladar obstacle map, since the edges of the trail were often delineated by obstacles, and another on a heightmap derived from stereo depth. These cues helped the robot "see" the trail clearly in a wider range of visual situations. Furthermore, a measure of confidence in the trail estimate was used to slow the robot when it needed more time to search for the trail, and even to stop it so that the ladar could perform a 3-D scan and reacquire the trail from a detailed heightmap. These improvements steadily increased the mean distance the robot could travel autonomously between human interventions, and by the end of the project's term Warthog could capably complete a multi-kilometer hiking circuit through a variety of terrains, crossing bridges and handling forks in the path, with only occasional issues where delicate maneuvering over tree roots was required.

Additional features investigated in the project's final year included the use of Kinect depth cameras to allow Warthog to "see in the dark" for night-time trail following, and algorithms to recognize and avoid specific hazards such as thin tree trunks and low rocks.
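To make the cue-fusion and speed-modulation ideas above concrete, the following Python sketch shows one minimal way such a scheme could be written. It is illustrative only and not the project's actual implementation: the per-cue scores, the weights, the function names (fused_trail_likelihood, speed_command), and the thresholds are all hypothetical, and it assumes that the color-contrast, ladar-obstacle, and stereo-heightmap likelihoods have already been computed for each candidate trail hypothesis.

    import numpy as np

    def fused_trail_likelihood(color_scores, ladar_scores, height_scores,
                               weights=(0.5, 0.3, 0.2)):
        """Linearly weight the per-cue likelihoods (illustrative weights only)."""
        w_color, w_ladar, w_height = weights
        return (w_color * np.asarray(color_scores) +
                w_ladar * np.asarray(ladar_scores) +
                w_height * np.asarray(height_scores))

    def speed_command(confidence, v_max=1.5, stop_threshold=0.2):
        """Scale forward speed by trail confidence; stop (e.g., to trigger a
        detailed 3-D ladar scan) when confidence falls below a threshold."""
        if confidence < stop_threshold:
            return 0.0
        return v_max * confidence

    # Example: three candidate trail hypotheses with hypothetical cue scores.
    color = [0.8, 0.4, 0.6]    # color-contrast likelihoods
    ladar = [0.7, 0.5, 0.2]    # agreement with the ladar obstacle map
    height = [0.9, 0.3, 0.4]   # agreement with the stereo-derived heightmap
    scores = fused_trail_likelihood(color, ladar, height)
    best = int(np.argmax(scores))
    confidence = float(scores[best])   # crude confidence proxy, for illustration
    v = speed_command(confidence)      # commanded forward speed in m/s

In this toy example the fused score of the best hypothesis simply doubles as the confidence value; in practice any separate confidence estimate could drive the same slow-down and stop-to-scan behavior described above.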
With the capabilities of the trail-following system fairly mature at the close of this project, we are poised to explore many more applications enabled by this technology. One such application is forest health monitoring: Warthog can carry additional scientific sensors as it follows trails to carry out wildlife censuses; measure soil, sunlight, and atmospheric variables; and deploy a small UAV to make observations in the canopy. Further demonstrating the approach's generality, we are also using this technology in the head and "brains" of a humanoid robot that must drive a vehicle along a road as part of a disaster-response scenario.