The goal of this project is to investigate how high-level concepts about objects and actions develop from low-level sensor input. The project takes a developmental robotics approach: it uses AI/robot models to investigate how concepts can be learned from an agent's own sensorimotor experience of its world. While some of this knowledge can be built by hand, hand-built knowledge tends to be incomplete and short-lived. For an autonomous agent to cope robustly with the complexity and diversity of real-world situations, and to do so beyond short-lived robotic experiments, it must be able to use its own understanding of rich sensory input and motor actions to build its own models and concepts. In this paradigm, learning begins with basic developmental learning, which acquires and grounds high-level concepts, and continues with life-long learning, which adapts to changes in the world and in the robot's own capabilities.

The initial thrust of the project will be to create and evaluate a simulated baby-robot model. If that model is successful, the project will investigate whether the same methods can produce sufficiently high-fidelity simulated models of apes or corvids (crows).
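To make the idea of grounding concepts in sensorimotor experience concrete, the following is a minimal sketch, not the project's actual architecture: a toy agent collects (action, effect) pairs from its own interactions and groups them by sensory effect, a crude stand-in for proto-concept formation. All names, actions, and the trivial world model here are invented for illustration.

```python
# Hypothetical sketch: an agent "grounds" proto-concepts by grouping the
# sensory effects of its own motor actions. The world model is deliberately
# trivial; the project text above does not specify any implementation.

def act(action):
    """Apply a motor action to a trivial world and return the sensed effect."""
    if action == "push":
        return ("moved", 1.0)      # pushing displaces the object
    elif action == "touch":
        return ("contact", 0.0)    # touching gives contact feedback, no motion
    return ("none", 0.0)

# Gather sensorimotor experience: (action, (effect, magnitude)) pairs
experience = [(a, act(a)) for a in ["push", "touch"] * 10]

# Group experiences by their sensory effect -- a stand-in for concept formation
concepts = {}
for action, (effect, magnitude) in experience:
    concepts.setdefault(effect, []).append(action)

# The agent now associates each proto-concept with the actions that produce it
print(sorted(concepts))  # the distinct sensory-effect categories it discovered
```

In a real developmental system the effects would come from rich, noisy sensors and the grouping would be done by learned clustering rather than exact matching, but the loop structure, act, sense, and organize one's own experience, is the same.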