Planning to achieve a goal requires knowledge of objects, actions, preconditions, and consequences. These abstract concepts are at a much higher level than the "pixel-level" sensory and motor interfaces between an embodied robot and the continuous world. Our goal is to show how high-level concepts of object and action can be learned autonomously from low-level sensorimotor experience.

We hypothesize that these concepts are part of a larger package of foundational concepts that can be learned in approximately the following sequence: using motion to discriminate objects from background; detecting tight, reliable control loops to distinguish self from non-self objects; learning preconditions and consequences of actions applied to objects; identifying "grasp" actions that temporarily transform a non-self object into a self object; and learning actions and effects that are achievable only with such an object (a tool!).

The learning process depends on representing sensorimotor interaction with the world as a stochastic dynamical system. A "curiosity" drive rewards improvements in prediction reliability. Evaluation uses a simulated robot child with two arms, stereo vision, and a tray of blocks and other objects. This research will help robots learn their own high-level concepts, and could provide insights into human learning disabilities.
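To make the curiosity drive concrete, the sketch below is our own illustration under stated assumptions, not the project's implementation; the CuriosityDrive class, its window parameter, and the per-context bookkeeping are hypothetical. It computes an intrinsic reward as the recent decrease in a forward model's prediction error, i.e., learning progress rather than raw novelty.

```python
from collections import deque


class CuriosityDrive:
    """Hypothetical sketch: intrinsic reward is the recent improvement
    in prediction reliability of a learned forward model."""

    def __init__(self, window=20):
        self.window = window
        self.errors = {}  # prediction-error history per sensorimotor context

    def reward(self, context, predicted_obs, actual_obs):
        # Squared prediction error of the forward model on this step.
        err = sum((p - a) ** 2 for p, a in zip(predicted_obs, actual_obs))
        hist = self.errors.setdefault(context, deque(maxlen=self.window))
        hist.append(err)
        if len(hist) < self.window:
            return 0.0  # too little history to estimate a learning trend
        half = self.window // 2
        older = sum(list(hist)[:half]) / half
        recent = sum(list(hist)[half:]) / (len(hist) - half)
        # Positive reward only while average error is still dropping, so
        # attention shifts away from regions that are mastered or remain noise.
        return max(0.0, older - recent)
```

An agent maximizing this signal concentrates practice where its predictions are currently improving and loses interest both in contexts it has mastered and in contexts that stay unpredictable, which is the usual argument for rewarding learning progress rather than raw prediction error.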

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0713150
Program Officer: Todd Leen
Budget Start: 2007-09-15
Budget End: 2011-08-31
Fiscal Year: 2007
Total Cost: $449,999
Name: University of Texas Austin
City: Austin
State: TX
Country: United States
Zip Code: 78712