The purpose of this investigation was to determine how a human-like robot, with two cameras functioning as eyes, can learn to improve its estimates of how far away an object is, without the aid of a human teacher to measure the distance. Our hypothesis was that a robot system can improve the accuracy of its estimates by moving its head, arms, or the rest of its body to find out how different actions modify what the cameras see. It is well known that the distance to an object can be estimated from two images taken from slightly different vantage points (such as the left and right eye, separated by a known distance). However, to estimate distance from two images, the robot system needs to know several properties of the cameras used to capture them. Our study investigated how a robot can learn these properties of its cameras by performing its own experiments. We found that a robot system can learn some of these camera properties by performing simple actions (such as rotating its neck while watching the same object). Actions that do not alter the physical distance to an object, but do cause a perceptual difference, reveal errors in the assumptions made about the cameras used for depth estimation. These errors can then be used to learn the true camera properties.

An important and unique benefit of the NSF East Asia & Pacific Summer Institutes (EAPSI) program is that it enabled extended on-site research collaboration with the Artificial Brain Research (ABR) Laboratory at Kyungpook National University in South Korea, led by Professor Minho Lee. The ABR laboratory has developed a detailed model of visual attention that allowed us to bypass several technical issues and work directly on evaluating our hypothesis. Without support from the EAPSI program, this level of collaboration would not have been possible.
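The idea that a wrong camera assumption produces a detectable depth error can be illustrated with the standard pinhole-stereo triangulation formula. The sketch below is not the project's actual method; it is a minimal illustration with hypothetical camera values (a 700-pixel focal length and a 6 cm baseline are assumed for the example). For a rectified stereo pair, depth Z follows from the disparity d, the focal length f (in pixels), and the baseline b: Z = f * b / d.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulate depth (meters) for a rectified stereo pair.

    f_px: focal length in pixels; baseline_m: distance between the
    two cameras in meters; disparity_px: horizontal pixel shift of
    the object between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Hypothetical camera: 700 px focal length, 6 cm baseline.
f_true, baseline = 700.0, 0.06
z_true = 1.5                          # object is actually 1.5 m away
d = f_true * baseline / z_true        # disparity the cameras would measure

# With the correct focal length, triangulation recovers the true depth.
z_correct = depth_from_disparity(f_true, baseline, d)     # 1.5 m

# With a wrong focal-length assumption, the estimate is biased. Because
# an action like a neck rotation does not change the physical distance,
# any depth estimate that shifts under such an action exposes this kind
# of error, giving the robot a signal for correcting its camera model.
z_biased = depth_from_disparity(900.0, baseline, d)       # not 1.5 m
```

The point of the sketch is only the error signal: the biased estimate disagrees with the true, unchanged distance, and that disagreement is what self-experimentation can exploit.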