Explosive ordnance disposal is among the more hazardous occupations, whether in the military and law enforcement, where technicians confront improvised explosive devices (IEDs), or after natural disasters, when exposed explosive materials must be disposed of quickly and safely. Explosive ordnance disposal (EOD) technicians respond to these dangerous situations wearing only protective suits, which may reduce the impact of a blast but do not provide complete protection. Moreover, EOD technicians are subject to fatigue, heat stress, and reduced dexterity, all of which can diminish their ability to deal with threats effectively and safely.
Robots can greatly mitigate many of these risks: they reduce a technician's exposure and time-on-target and keep the technician out of harm's way. Technicians are still needed, albeit from afar, to teleoperate the robots. A chronic problem in EOD, however, is the limited perceptual information that such robotic systems can convey. Currently available robots depend heavily on vision and video telemetry, which can be drastically impaired when IEDs are buried or concealed.
This research addresses this scientific challenge by developing a multimodal perceptual image composed of tactile, optical, and force information, using a telerobot. Such a telerobot will: 1) deliver a multimodal image in a format that is easily interpretable by the operator; 2) learn from limited observations using principles of machine learning; 3) recognize objects from texture and weight signatures; and 4) recommend best strategies to approach and explore the objects. Beyond its main application, the theories and technologies involved in this proposal impact other fields in which dexterity and tactile feedback are key to successful task completion, such as tele-surgery. A unique feature of this project is a set of bimanual tool tips, equipped with multi-sensory devices for collecting tactile, force, and chemical composition information for target characterization and action. As the exploration takes place, object profiles (based on sensed information) are built from discrete observations, which are accumulated and fitted to machine learning models based on transfer learning and zero/one-shot learning.
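Purely as an illustration of the one-shot recognition idea described above, and not the project's actual pipeline, the following Python sketch shows how discrete tactile and force observations might be accumulated into an object profile and matched against single-example class prototypes. All function names, feature choices, class labels, and dimensions here are hypothetical assumptions.

```python
import numpy as np

def extract_features(tactile, force):
    """Summarize one tactile/force observation as a small feature vector.
    These statistics stand in for texture and weight signatures."""
    return np.concatenate([
        [tactile.mean(), tactile.std()],   # crude texture signature
        [force.mean(), force.max()],       # crude weight/stiffness signature
    ])

class ObjectProfile:
    """Running accumulation of feature vectors from discrete observations
    of a single explored object."""
    def __init__(self):
        self.total = None
        self.count = 0

    def add_observation(self, tactile, force):
        f = extract_features(tactile, force)
        self.total = f if self.total is None else self.total + f
        self.count += 1

    @property
    def embedding(self):
        # Mean feature vector over all observations so far.
        return self.total / self.count

def one_shot_classify(profile, prototypes):
    """Nearest-prototype match: each known class is represented by a
    single example embedding (the one-shot setting)."""
    return min(prototypes,
               key=lambda label: np.linalg.norm(profile.embedding - prototypes[label]))

# Example with made-up prototypes: classify a new object after three probes.
rng = np.random.default_rng(0)
prototypes = {
    "metal_casing": np.array([0.8, 0.1, 5.0, 9.0]),
    "plastic_shell": np.array([0.3, 0.05, 1.0, 2.0]),
}
profile = ObjectProfile()
for _ in range(3):  # three discrete exploratory touches
    profile.add_observation(rng.normal(0.8, 0.1, 100), rng.normal(5.0, 1.0, 100))
print(one_shot_classify(profile, prototypes))
```

In a real system the hand-crafted statistics would likely be replaced by an embedding network pretrained on related sensing tasks, which is where the transfer-learning component the abstract mentions would enter; the accumulate-then-match structure is what the sketch is meant to convey.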
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.