The long-term goal of this project is to enable robot manipulation in the unstructured and uncertain environments of the home and the workplace. To operate robustly and effectively under uncertainty, a robot must be able to reason explicitly about how uncertain it is and, when necessary, choose actions that yield sensory information to reduce that uncertainty. This project addresses the problem of reasoning about uncertainty by representing the robot's current ``belief'' as a probability distribution over possible underlying states of the world, and by choosing actions according to the effects they will have on that belief. Robot manipulation problems will be formulated as partially observable Markov decision processes (POMDPs) to generate effective strategies for manipulation under uncertainty.
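For reference, one standard way to make this notion of belief precise is the POMDP belief update (the notation below is illustrative and not taken from the project's own formulation): after taking action $a$ in belief $b$ and receiving observation $o$, the new belief assigned to state $s'$ is
\[
  b'(s') \;=\; \frac{O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s)}{\Pr(o \mid b, a)},
\]
where $S$ is the set of possible world states, $T$ is the state-transition model, $O$ is the observation model, and the denominator normalizes $b'$ to a probability distribution. Action selection can then be guided by the beliefs an action is expected to produce, for example favoring actions whose likely observations concentrate $b'$ on a small set of states.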