"CAREER: Penalty Logic for Structured Machine Learning" PI: Alan Fern Oregon State University

This research will study penalty logic as a knowledge representation technique for structured machine learning. Such learning problems involve inducing complex mappings between structured data types; examples include learning to map American football video to play descriptions, and mapping the state of multi-agent planning problems to joint agent actions. These problems often contain many "nearly sound" logical constraints, which are generally true but sometimes violated. Such constraints can be explicitly represented using penalty-logic models: sets of weighted logical formulas, where each weight represents the cost of violating its formula. Penalty-logic models allow robust training methods for linear cost functions to be combined synergistically with years of work on logic-based representations. The project will study leveraging penalty logic in four directions: (1) learning model structure, (2) achieving practically efficient inference, (3) incorporating human-provided knowledge, and (4) reducing labeling effort via active learning. The broader impact of this work will be to advance the applicability of structured machine learning to a wide range of interpretation and decision-making problems, including those above. Planned educational activities include initiating an annual competition for Oregon high school students aimed at increasing CS enrollment and interest in AI.
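To make the penalty-logic representation concrete, the following is a minimal sketch of inference with a fixed set of weighted formulas. The formulas, weights, and candidate labelings are hypothetical illustrations only, not part of the proposed system.

```python
# Minimal sketch of a penalty-logic cost function (illustrative only; not the
# project's actual formulation). A model is a set of (formula, weight) pairs,
# where each formula is a Boolean test over a structured output and the weight
# is the penalty paid whenever the formula is violated.

def penalty_cost(output, weighted_formulas):
    """Sum the weights of all formulas the candidate output violates."""
    return sum(w for formula, w in weighted_formulas if not formula(output))

def predict(candidates, weighted_formulas):
    """Pick the candidate structured output with the lowest total penalty."""
    return min(candidates, key=lambda y: penalty_cost(y, weighted_formulas))

# Hypothetical "nearly sound" constraints on a football play description.
formulas = [
    (lambda y: y["num_receivers"] <= 5, 10.0),              # usually at most 5 receivers
    (lambda y: y["qb_throws"] or not y["pass_play"], 3.0),  # pass plays usually include a throw
]
candidates = [
    {"num_receivers": 4, "qb_throws": True,  "pass_play": True},
    {"num_receivers": 6, "qb_throws": False, "pass_play": True},
]
print(predict(candidates, formulas))  # selects the first, lower-penalty labeling
```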

Project Report

The grant has supported research in the Artificial Intelligence areas of automated planning and semantic video interpretation, with an emphasis on using weighted-logic-based representations. The grant also supported efforts in running the First Learning Track of the International Planning Competition, which has led to a resurgence of interest in the area and significant recent progress.

Contributions to Automated Planning: The investigators studied new algorithms for allowing a computer to learn to perform better at automated planning tasks given experience in similar tasks. Examples of automated planning tasks include logistics planning, games like Solitaire, and tactical military planning. The main idea behind our approach is to analyze previously found plans for problems in a domain and to induce "control knowledge" that speeds up the planner on future problems in that domain. The control knowledge was represented in a weighted logic, and learning algorithms were developed to learn both the logical formulas and their weights. The work produced new theory on the fundamental learning problems and also produced algorithms that achieved state-of-the-art results. Another avenue of planning work supported by the grant was a study of a recent Monte-Carlo planning approach called UCT. Work in this area resulted in the first application of UCT to real-time strategy games, which are an extremely challenging testbed for AI algorithms. State-of-the-art results were achieved in the area of high-level tactical planning. The work also studied ensemble methods for UCT, showing that the performance of UCT can be improved many-fold by combining the results of multiple independent runs of the algorithm (a conceptual sketch of this ensemble idea follows this report). Finally, the grant supported undergraduates in the development of an infrastructure for real-time strategy game research in AI. The result is a game engine with a simple interface for AI agents.

Contributions to Video Interpretation: This grant also supported the Digital Scout Project at Oregon State University, where the aim is to create semantic interpretations of American football plays from raw video. This domain is intended to be a challenge problem for driving our work in video interpretation. The project resulted in fundamental contributions in a number of areas: video rectification, multi-part object recognition, video panorama construction, and multi-object tracking. The work also created a data set of football plays with hand-tracked player trajectories and semantic labels describing the activity of each player. The most significant accomplishment was the development of a trainable player-tracking approach, which achieved state-of-the-art performance on this extremely challenging multi-object tracking problem.

Education: Throughout the span of the project, two Ph.D. students were supported through to their degrees, and six undergraduates were supported, giving them valuable research experience.
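To illustrate the ensemble idea mentioned above, here is a minimal sketch of combining independent search runs. It uses a simplified depth-one UCB1 sampler as a stand-in for full UCT, and the action set and simulate() rollout are hypothetical placeholders rather than anything from the project's actual system.

```python
import math
import random

# Sketch of the ensemble idea: run several independent Monte-Carlo searches
# from the same root state and average their root-action value estimates.
# single_run() below is a depth-one UCB1 sampler, a simplified stand-in for
# full UCT; ACTIONS and simulate() are hypothetical placeholders.

ACTIONS = ["attack", "defend", "expand"]  # hypothetical root action set

def simulate(action, rng):
    """Hypothetical noisy rollout return for taking `action` at the root."""
    base = {"attack": 0.55, "defend": 0.45, "expand": 0.60}[action]
    return base + rng.gauss(0.0, 0.3)

def single_run(num_sims, seed):
    """One independent search: UCB1 over root actions, returns mean values."""
    rng = random.Random(seed)
    counts = {a: 0 for a in ACTIONS}
    totals = {a: 0.0 for a in ACTIONS}
    for t in range(1, num_sims + 1):
        def ucb(a):
            # Prefer actions with high mean value plus an exploration bonus.
            if counts[a] == 0:
                return float("inf")
            return totals[a] / counts[a] + math.sqrt(2.0 * math.log(t) / counts[a])
        a = max(ACTIONS, key=ucb)
        totals[a] += simulate(a, rng)
        counts[a] += 1
    return {a: totals[a] / max(counts[a], 1) for a in ACTIONS}

def ensemble_choice(num_runs, sims_per_run):
    """Average root-action values across independent runs, then pick the best."""
    runs = [single_run(sims_per_run, seed) for seed in range(num_runs)]
    avg = {a: sum(r[a] for r in runs) / num_runs for a in ACTIONS}
    return max(avg, key=avg.get), avg

if __name__ == "__main__":
    best, values = ensemble_choice(num_runs=8, sims_per_run=200)
    print(best, values)
```

Because each run explores independently, averaging their root statistics reduces the variance of the final action choice, which is the intuition behind the many-fold improvements reported for ensemble UCT.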

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0546867
Program Officer: Edwina L. Rissland
Budget Start: 2006-02-01
Budget End: 2012-01-31
Fiscal Year: 2005
Total Cost: $559,675
Name: Oregon State University
City: Corvallis
State: OR
Country: United States
Zip Code: 97331