Teleoperated assistive robots in home environments have the potential to dramatically improve quality of life for older adults and people who experience disabling circumstances due to chronic or acute health conditions. Such robots could similarly aid clinicians and other healthcare professionals providing treatment. The success of these applications, however, will depend critically on the ease with which a robot can be commanded to perform common manipulation tasks within a home environment. The proposed research addresses this key challenge in two significant ways. First, by learning from teleoperated manipulation (i.e., teleoperation-based instruction), robots can acquire the ability to perform elements of common tasks with greater autonomy and reliability. Second, by automatically mapping new modalities (e.g., voice and gesture commands) to the robot's user interface, a wider variety of people will be able to use the robot more easily. The resulting multimodal interfaces may be especially important for people who have difficulty using a single modality, such as vision. These two fundamental research components form the basis of our approach to enabling manipulation of everyday objects in an unstructured human environment.