This project uses machine learning to accelerate the execution of a class of computer programs central to AI. Given a program and a class of inputs, the new methods automatically search for execution strategies that are fast while remaining highly accurate.

The project focuses on the main inference algorithms that underlie statistical AI: dynamic programming, belief propagation, Markov chain Monte Carlo, and backtracking search. Each of these algorithms faces an enormous search space and iteratively extends or refines its picture of that space, so each must continually choose which computational step to take next.

The opportunity is to learn a strategy for making these choices. Some choices are on the "critical path" and help the system find an accurate output, while others lead mainly to wasted work. The learned strategy for evaluating choices in context may itself be computationally intensive, so the method learns to speed that up as well, within the same framework.
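To make the idea concrete, here is a minimal sketch (not the project's actual system) of agenda-based inference in which a priority function stands in for the learned strategy: it scores each pending computational step, and a step budget bounds the work, so a good scorer keeps the limited steps on the critical path. All names here (`agenda_inference`, `expand`, `score`, `budget`) are illustrative.

```python
import heapq

def agenda_inference(initial_items, expand, score, budget):
    """Best-first inference: `score` (a stand-in for a learned policy)
    decides which pending step to take next; `expand` maps a derived
    item to its consequences; `budget` caps the number of steps,
    trading accuracy for speed."""
    # Min-heap keyed on -score, so higher-scoring items pop first.
    agenda = [(-score(it), it) for it in initial_items]
    heapq.heapify(agenda)
    derived = set()
    steps = 0
    while agenda and steps < budget:
        _, item = heapq.heappop(agenda)
        if item in derived:
            continue  # stale duplicate; skip without spending budget
        derived.add(item)
        steps += 1
        for new in expand(item):
            if new not in derived:
                heapq.heappush(agenda, (-score(new), new))
    return derived

# Toy example: derive reachable integers, preferring small ones.
reachable = agenda_inference(
    initial_items=[0],
    expand=lambda n: [n + 1, n + 2] if n < 10 else [],
    score=lambda n: -n,   # hand-written stand-in for a learned scorer
    budget=5,
)
```

With this scorer the five budgeted steps derive the small integers 0 through 4; a worse scorer would spend the same budget on less useful items, which is exactly the gap a learned policy aims to close.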

The project will disseminate software and will have broader impact on several fields. The targeted algorithms are central to natural language processing, speech processing, machine vision, computational biology, health informatics, and music processing. Their ability to form a coherent global analysis of a set of observations is a hallmark of intelligence, and will enable artificial systems that aid human understanding and performance. Speeding them up is critical as researchers develop increasingly sophisticated statistical models. Furthermore, the learning methodologies developed will be useful in other settings that attempt to learn computational or behavioral strategies.

Project Report

Building more intelligent computer systems means asking our computers to do deeper reasoning over more data. Unfortunately, the more we ask of our systems, the slower they run. The good news is that we can turn AI's own learning techniques back on the problem of speeding up the system. Our project devised ways for computers to discover such shortcuts.

Humans are not perfect reasoners, but they make good decisions most of the time. A skilled human such as a chess master has acquired a mental playbook that covers what to do in a large number of common situations. In reasoning about the world, a human can make reasonable guesses from relevant and easily available evidence, without considering every datum or working through every possible consequence. In the same way, our systems learn cheap tricks that bypass or approximate portions of the reasoning process, or that rely on superficial cues, so long as the results remain sufficiently accurate.

To build such a system, we first define a space of possible "policies" that a system can follow to do approximate reasoning or decision-making. We then explicitly train the system to optimize a user-specified combination of speed and accuracy, rather than accuracy alone as is traditional. In other words, we search the space of possible policies for one that achieves a good tradeoff. In general, this is a difficult and computationally intensive search problem.
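The policy search described above can be illustrated with a deliberately tiny sketch (assumptions, not the project's method): the policy family is a single stopping threshold `k`, a hypothetical `evaluate` function returns (accuracy, cost) for each candidate, and the user-specified tradeoff is the linear objective accuracy − λ·cost.

```python
def select_policy(policies, evaluate, lam):
    """Exhaustively score a small policy family and return the policy
    maximizing accuracy - lam * cost. `evaluate` runs a policy (e.g.,
    on held-out data) and returns (accuracy, cost); `lam` is the
    user-specified weight on speed."""
    best, best_obj = None, float("-inf")
    for p in policies:
        acc, cost = evaluate(p)
        obj = acc - lam * cost
        if obj > best_obj:
            best, best_obj = p, obj
    return best

# Hypothetical policy family: stop after k inference steps, where
# extra steps cost more but give diminishing accuracy gains.
def evaluate(k):
    accuracy = 1.0 - 0.5 ** k   # toy diminishing-returns curve
    cost = 0.01 * k             # toy linear cost in steps
    return accuracy, cost

best_k = select_policy(range(1, 20), evaluate, lam=2.0)
```

Enumeration works only for a one-parameter family; over a realistically large policy space this becomes the difficult, computationally intensive search problem the report describes, which is where learning methods come in.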

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0964681
Program Officer: Tatiana D. Korelsky
Budget Start: 2010-08-15
Budget End: 2014-07-31
Fiscal Year: 2009
Total Cost: $899,976
Name: Johns Hopkins University
City: Baltimore
State: MD
Country: United States
Zip Code: 21218