People are able to learn new concepts much faster than computers, often requiring only a handful of examples whereas a computer might require hundreds. This remarkable ability is partly a consequence of extensive experience with the world, which results in strong prior knowledge about the kinds of objects that are likely to form categories. This research project bridges the gap between human and machine learning by developing probabilistic models of human category learning, connecting psychological data with the latest theories from computer science and statistics. These mathematical and computational models are used to explore how people learn categories so quickly, to capture the effects of prior knowledge on categorization, and to build a catalogue of human concepts that can be used to test psychological theories and to train machine learning systems. In each case, the research combines the ideas, methods, and sources of data used in psychology and computer science, using hierarchical Bayesian models and Markov chain Monte Carlo algorithms to model human cognition, laboratory experiments to test these models, and large databases as a source of statistical information that guides model predictions. This research program is integrated with an educational plan that incorporates undergraduate and graduate teaching and mentoring, development of a textbook on probabilistic models of cognition, tutorials and workshops aimed at increasing contact between the computer science and psychology communities, and outreach through talks and a website.
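To make the modeling approach named above concrete, the sketch below shows a minimal Bayesian category-learning model fit with a Markov chain Monte Carlo (Metropolis) sampler. The category representation (a Gaussian over a single feature), the priors, and the numbers are illustrative assumptions, not the project's actual models.

```python
# Illustrative sketch only: a minimal Bayesian category-learning model,
# in the spirit of "hierarchical Bayesian models fit with MCMC".
# The Gaussian category representation and priors are assumptions for illustration.
import math
import random

examples = [4.8, 5.1, 5.4]          # a handful of observed category members
prior_mean, prior_sd = 0.0, 10.0    # weak prior belief about the category's center
obs_sd = 0.5                        # assumed within-category variability

def log_posterior(mu):
    """Log prior plus log likelihood of the examples under category center mu."""
    lp = -0.5 * ((mu - prior_mean) / prior_sd) ** 2
    ll = sum(-0.5 * ((x - mu) / obs_sd) ** 2 for x in examples)
    return lp + ll

def metropolis(n_samples=5000, step=0.5):
    """Random-walk Metropolis sampler over the category center."""
    mu = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = mu + random.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio), done in log space.
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
            mu = proposal
        samples.append(mu)
    return samples

samples = metropolis()
print("posterior mean of category center:", sum(samples) / len(samples))
```

Even with only three examples, the posterior concentrates near the observed values, which is the sense in which strong priors and probabilistic inference allow rapid learning from sparse data.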

Project Report

Despite significant progress in artificial intelligence research over the last 60 years, people are still better than machines at solving certain problems: learning language, identifying causal relationships, and learning new concepts from a small number of examples. This project aimed to bridge this gap, starting on the human side: by studying how people learn concepts so well, and by explaining this ability with mathematical models, we can develop new ideas that improve the performance of machines. The goal of this project was to use a mathematical framework that has become increasingly popular in artificial intelligence research, probability theory, to capture human category-learning performance. The focus was on understanding the structure of human categories and on exploring how people are capable of learning new categories easily. These questions were addressed using a combination of methods from cognitive psychology, Bayesian statistics, and computer science. The major results of this research include a new method for exploring the structure of human categories (based on algorithms developed by computer scientists), new techniques that allow machines to learn hierarchically organized categories in a way that is similar to people, new hypotheses about how people might perform probabilistic inference, and two of the largest behavioral experiments ever conducted in the exploration of human concept learning. These experiments paint an unusually clear picture of the structure of human categories and test models of human learning on a scale that goes far beyond the standard laboratory setting. These scientific findings were disseminated in over 30 scientific publications and presented to audiences ranging from neuroscientists to computer scientists. In addition to achieving these scientific goals, this project resulted in the creation of a new class on the latest ideas in probabilistic models of cognition that allowed over 50 students to work on independent research projects in this area, and has supported the development of a book and website on this topic.
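The report does not spell out the "new method for exploring the structure of human categories"; one published approach in this area is a "Markov chain Monte Carlo with people" procedure, in which a participant's forced choices between stimuli serve as the acceptance step of a sampler. The sketch below illustrates that idea under that assumption; the stimulus representation, proposal step, and the simulated participant are placeholders.

```python
# Illustrative sketch, not necessarily the project's actual procedure:
# an "MCMC with people"-style loop in which two-alternative choices act as
# a Barker acceptance rule, so the chain of accepted stimuli converges on
# the chooser's category distribution.
import math
import random

def propose(stimulus, step=0.3):
    """Perturb the current stimulus to get a candidate (placeholder proposal)."""
    return stimulus + random.gauss(0.0, step)

def participant_chooses(current, candidate):
    """Stand-in for a real participant: a simulated preference for values near 1.0.
    In an experiment this would be a forced choice between the two stimuli."""
    def pref(x):
        return math.exp(-0.5 * ((x - 1.0) / 0.2) ** 2)
    p_candidate = pref(candidate) / (pref(candidate) + pref(current))
    return random.random() < p_candidate

def mcmc_with_people(start=0.0, n_trials=200):
    """Run the choice-driven chain; accepted stimuli approximate the category."""
    chain = [start]
    for _ in range(n_trials):
        candidate = propose(chain[-1])
        # The choice implements Barker's acceptance rule, which yields a valid
        # Markov chain over the preference distribution for a symmetric proposal.
        chain.append(candidate if participant_chooses(chain[-1], candidate) else chain[-1])
    return chain

chain = mcmc_with_people()
print("estimated category center:", sum(chain[100:]) / len(chain[100:]))
```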

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0845410
Program Officer: James Donlon
Budget Start: 2009-03-01
Budget End: 2014-02-28
Fiscal Year: 2008
Total Cost: $556,847
Name: University of California Berkeley
City: Berkeley
State: CA
Country: United States
Zip Code: 94704