This project develops systems that use machine learning to construct semantic analyzers for natural language, training only on sentences paired with their perceptual context. The PI's previous research produced systems that acquire semantic parsers from sentences annotated with formal meaning representations; however, the cost of building such annotated corpora limits the scope and accuracy of the resulting systems. This project extends these methods to learn language more like a human child does, from exposure to utterances in context alone.

To circumvent, for now, the limitations of existing computer-vision and robotic systems, the project studies the problem primarily in simulated environments, using the RoboCup soccer simulator as one domain in which to explore language acquisition. Existing methods for abstracting a description from the physical simulator state are used to construct a symbolic representation of the perceptual context. When learning from perceptual context rather than direct supervision, a system must cope with referential uncertainty: a sentence may refer to any of many aspects of the current environment. The project therefore designs, implements, and evaluates algorithms that learn from sentences paired only with such ambiguous supervision. The effectiveness of the resulting techniques is evaluated through experiments in the RoboCup environment and in other applications. These techniques can eventually be ported to real robots, enabling an integration of language and perception in robotics. By increasing our understanding of how language can be acquired from its use in context, the project should also provide insight into human language learning.
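The flavor of learning under referential uncertainty described above can be sketched as a toy cross-situational procedure: each sentence arrives with several candidate meaning representations drawn from the simulated context, and the learner alternates between scoring word/meaning co-occurrence and committing each sentence to its best-supported candidate. The sentences, predicate names, and scoring scheme below are illustrative assumptions, not the project's actual algorithm.

```python
from collections import defaultdict

# Toy sportscast-style data: each sentence is paired with several
# candidate meaning representations (referential uncertainty); only
# one candidate is correct. All names here are hypothetical.
data = [
    ("purple goalie kicks", ["kick(purple1)", "pass(pink3)"]),
    ("purple goalie kicks", ["kick(purple1)", "block(pink9)"]),
    ("pink three passes", ["pass(pink3)", "kick(purple1)"]),
    ("pink three passes", ["pass(pink3)", "steal(purple2)"]),
]

def disambiguate(pairs, iterations=5):
    """EM-style cross-situational learning: alternate between scoring
    word/meaning co-occurrence and picking, for each sentence, the
    candidate meaning its words best support."""
    # Initialization: spread each sentence's credit uniformly over
    # its candidate meanings.
    score = defaultdict(float)
    for words, meanings in pairs:
        for m in meanings:
            for w in words.split():
                score[(w, m)] += 1.0 / len(meanings)
    choices = []
    for _ in range(iterations):
        new_score = defaultdict(float)
        choices = []
        for words, meanings in pairs:
            # E-step: pick the candidate best supported by current scores.
            best = max(meanings,
                       key=lambda m: sum(score[(w, m)] for w in words.split()))
            choices.append(best)
            # M-step: reassign this sentence's full credit to that choice.
            for w in words.split():
                new_score[(w, best)] += 1.0
        score = new_score
    return choices

print(disambiguate(data))
# → ['kick(purple1)', 'kick(purple1)', 'pass(pink3)', 'pass(pink3)']
```

Because the correct meaning recurs across contexts while each distractor does not, the co-occurrence scores concentrate on the right word/meaning pairings after a single pass; real systems must of course handle noise, structured meanings, and far larger candidate sets.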