This project investigates fundamental questions about how to define and represent concepts, particularly in contexts where robots must identify and use objects. It focuses on representations derived from observations of object usage, under the assumption that usage reveals fundamental information about the nature of objects. The central question to be addressed is: How can concepts be defined in terms of multiple representations for the purpose of effective recognition? The project will investigate how context and usage can enable recognition that might not be possible when an object is observed in isolation, and how the enormous variation in the visual appearance of objects can be accounted for by combining functional definitions with simple visual descriptions.

The work is organized around three tasks: (1) investigation of multi-faceted knowledge representation techniques, including exemplar-based and usage-based representations as well as weighted (probabilistic) representations; (2) multi-representation recognition algorithms that probabilistically balance evidence across weighted representations and combine exemplar-based with non-exemplar-based representations (an illustrative sketch of such evidence combination follows below); and (3) parameter and structure learning that adapts the weights used in concept representations and definitions. The project will use mobile robots equipped with various sensors as a test bed. In addition to its own sensory input, the robot will use observations of humans interacting with objects in its environment. The work builds on the PI's ongoing research on object definition and recognition techniques that use visual features and functional representations. The project has potential Broader Impact not only on robotics and artificial intelligence, but also on cognitive science and psychology in general.
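To make the idea of probabilistically balancing evidence across multiple representations concrete, the following is a minimal illustrative sketch, not the project's actual algorithm. It assumes a simple log-linear fusion of per-category scores from two hypothetical representations (an exemplar-based visual score and a usage-based functional score) and a multiplicative weight update standing in for the parameter learning of task (3). All names, scores, and the update rule are assumptions introduced for illustration.

```python
# Illustrative sketch only: weighted (log-linear) combination of evidence from
# two hypothetical representations -- an exemplar-based visual score and a
# usage-based functional score -- for recognizing an object category.
# The fusion rule and weight update are assumptions, not the project's methods.

import math

def combine_evidence(scores, weights):
    """Log-linear fusion: P(category) is proportional to exp(sum_i w_i * log s_i(category))."""
    categories = scores[0].keys()
    unnorm = {}
    for c in categories:
        log_p = sum(w * math.log(max(s[c], 1e-9)) for s, w in zip(scores, weights))
        unnorm[c] = math.exp(log_p)
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

def update_weights(weights, scores, true_category, lr=0.5):
    """Multiplicative update: representations whose evidence favored the true category gain weight."""
    new = [w * math.exp(lr * math.log(max(s[true_category], 1e-9)))
           for w, s in zip(weights, scores)]
    z = sum(new)
    return [w / z for w in new]

# Hypothetical per-category scores from two representations of the same scene.
visual_exemplar = {"cup": 0.4, "bowl": 0.6}    # appearance alone is ambiguous
usage_functional = {"cup": 0.9, "bowl": 0.1}   # an observed drinking action favors "cup"

weights = [0.5, 0.5]
posterior = combine_evidence([visual_exemplar, usage_functional], weights)
print(posterior)   # usage evidence shifts the decision toward "cup"

# If ground truth later becomes available (e.g., from observing human interaction),
# the representation weights can be adapted.
weights = update_weights(weights, [visual_exemplar, usage_functional], "cup")
print(weights)     # the representation that supported the correct category gains weight
```

In this toy setting the visual scores alone would misclassify the object, while the combined evidence does not; the weight update then increases reliance on the representation that proved more reliable, which is the kind of adaptation task (3) envisions, though the actual learning methods investigated in the project may differ substantially.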