As robots branch out into unstructured and dynamic human environments such as homes, offices, and hospitals, they require a new design methodology. These robots need to be safe to operate next to humans; they are expected to handle the frequent changes and uncertainties inherent in human environments; and they should be as inexpensive as possible to enable widespread dissemination. Such criteria have led to the emergence of compliant/soft robots, 3D-printed robots, and inexpensive consumer-grade hardware, all of which constitute a major shift from the heavy, rigid, tightly toleranced robots used in industry. Traditional measurement devices that are suitable for sensing and controlling the motion of rigid robots, i.e., joint encoders, are incompatible or impractical for many of these new types of robots. Alternative approaches that do not rely on encoders are largely missing from robotics technology and must be developed for these novel designs. This project investigates ways of using only cameras for sensing and controlling a robot's motion. Vision-based algorithms for robotic walking, object grasping, and manipulation will be derived. Such algorithms will not only enable the use of these new robots in unstructured environments but will also significantly lower the cost of traditional robotic systems and thereby broaden their adoption for industrial and educational purposes.

The project will focus on utilizing vision-based estimation schemes and learning methods to acquire both robot configuration information and task models within a framework in which modeling inaccuracies and environment uncertainties are handled by robust visual servoing approaches. Visual observations will be used to model the relationship between actuator inputs, manipulator configuration, and task states, and they will be combined with adaptive vision-based control schemes that are robust to modeling uncertainties and disturbances. The framework will fundamentally rely on convolutional neural networks (CNNs) to build the models from observation alone, both for a low-dimensional representation of the configuration and for an image segmentation of the manipulator. Reinforcement learning methods will also be applied to assess the practicality of combining them modularly with the offline-learned representations to perform complex positioning and control tasks. These approaches will be evaluated in the context of within-hand manipulation, compliant surgical tool control, locomotion of a 3D-printed multi-legged robot, and force-controlled grasping and peg insertion using a soft continuum manipulator. The contributions of the proposed work are threefold: no prior model of a robot's configuration is needed, because the configuration is explicitly observed and inferred up-front (system identification); uncertainty affecting task performance is addressed by adapting the robot dynamics on the fly (model-through-confirmation); and the broad applicability of the methods will be demonstrated through application to a wide variety of platforms. Work done on this project will help to enable lower-cost robotic and mechatronic hardware across a range of domains and will particularly impact the ability to control compliant and under-actuated structures.
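To make the learned-representation-plus-servoing idea concrete, the following is a minimal sketch, assuming a PyTorch environment, of a CNN that regresses a low-dimensional configuration estimate from a camera image and a simple proportional visual-servoing update computed on that estimate. The class and function names, layer sizes, image resolution, and gain are hypothetical placeholders for illustration, not the project's actual implementation.

```python
# Hypothetical sketch: CNN-based configuration estimation driving a
# proportional visual-servoing step. Not the project's implementation.
import torch
import torch.nn as nn

class ConfigEstimator(nn.Module):
    """CNN that regresses an n-dimensional configuration vector from an RGB image."""
    def __init__(self, config_dim: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, config_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))

def servo_step(estimator: nn.Module, image: torch.Tensor,
               target_config: torch.Tensor, gain: float = 0.5) -> torch.Tensor:
    """One proportional update on the estimated configuration error."""
    with torch.no_grad():
        current = estimator(image)
    # Commanded configuration-space velocity, proportional to the error.
    return gain * (target_config - current)

# Example: one control step on a dummy 128x128 camera frame.
estimator = ConfigEstimator(config_dim=4)
frame = torch.randn(1, 3, 128, 128)   # stand-in for a camera image
target = torch.zeros(1, 4)            # desired configuration
command = servo_step(estimator, frame, target)
```

In the framework described above, an estimator of this kind would be trained offline from visual observations alone, while the servoing gain and model would be adapted online to compensate for modeling uncertainties and disturbances.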

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2019-08-15
Budget End: 2022-07-31
Fiscal Year: 2019
Total Cost: $424,911
Organization: Johns Hopkins University, Baltimore, MD 21218, United States