This award in the Joint NSF/DARPA Initiative on Image Understanding and Speech Recognition is for a study of the visual cues that humans need to guide vehicles and manipulate objects remotely. Telerobotic operators may be hampered by response delays, limited visual bandwidth, and insufficient time to attend to critical details. This study will identify the essential visual cues that must be extracted or preserved to enable competent navigation, obstacle avoidance, landing, docking, and manipulation. Theories will be tested against human performance in synthesized dynamic visual environments.