Discovering the fundamental vision-motion primitives that people use to navigate and manipulate objects would lead to a natural human-machine interface for programming robotic systems, especially in unstructured environments. In this proposal, we begin exploring the definition of a set of natural robotic interface commands used for navigation. A more natural navigational interface would allow robots to be more easily integrated into applications such as material handling for flexible manufacturing, planetary or underwater exploration, or automated wheelchairs. The programmer issues a command such as "go there" to the robot by specifying "there" as a location on a video screen. The robot then navigates to the desired location, avoiding any obstacles along the way. As a tool for discovering a complete set of navigational commands, we will implement and experiment with an initial set of commands. This experimentation should show where our initial command set is redundant or lacking, and is essential for ensuring that the commands are natural for the user. As part of this research, we will be developing a vision algorithm that can extract visually distinctive features from a wide variety of objects. The command set and the algorithms for feature extraction and tracking will be the largest contribution of this work. Future work involves further testing of the navigational commands on a convenient test platform and extending the methodology derived for discovering the primitive navigation commands to deriving manipulation commands.
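The proposal does not specify how the "go there" command would be realized; as a minimal sketch, assuming the clicked screen location has already been mapped onto an occupancy grid of the environment, the command could reduce to a path search that avoids marked obstacles. The function names and the grid representation here are illustrative, not part of the proposed system.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns a list of cells
    from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk back through predecessors to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

def go_there(grid, robot_cell, clicked_cell):
    """Hypothetical "go there" handler: the user's clicked location,
    projected onto the grid, becomes the navigation goal."""
    return plan_path(grid, robot_cell, clicked_cell)
```

In a full system the grid would be built from the tracked visual features, and the returned path would be handed to a motion controller; here it simply demonstrates that obstacle-avoiding navigation to a pointed-at goal is a well-defined primitive.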