It can be difficult to tell which learning algorithm is best for a particular situation. The bias of many on-line learning algorithms can be encoded with a "distance function" on the hypothesis space, and (for simple problems) an amortized analysis can be used to provide relative performance guarantees (analogous to the competitive ratios for k-server problems). These guarantees can be used to indicate when one bias is more appropriate than another, allowing an intelligent choice of learning algorithm.
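As an illustration (the notation here is introduced for this sketch and does not come from the abstract), relative performance guarantees of this kind typically bound the on-line algorithm's cumulative loss by the loss of an arbitrary comparison hypothesis $u$ plus a term measuring how far $u$ lies from the algorithm's start vector $w_1$ under the chosen distance function $d$:

\[
\sum_{t=1}^{T} L_t(w_t) \;\le\; a \sum_{t=1}^{T} L_t(u) \;+\; b\, d(u, w_1),
\]

where $L_t$ is the loss on trial $t$ and $a, b$ are constants depending on the algorithm and the loss function. The distance term plays a role analogous to the adversary's advantage in a competitive analysis: an algorithm whose distance function is small for the hypotheses likely to be good on a given problem will enjoy a tighter bound there.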

This project explores the connection between the distance functions over hypotheses and the Bregman divergences used in convex optimization. By adapting the tools developed for Bregman divergences, the investigators will improve and generalize the performance guarantees. This will result in more accurate predictions of how well the various learning algorithms will perform on a wider variety of problems. Furthermore, the improved understanding will enable the derivation of new learning algorithms tailored to specific situations.
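As a point of reference (again, notation introduced here rather than taken from the abstract), the Bregman divergence induced by a differentiable strictly convex function $F$ is

\[
\Delta_F(u, w) \;=\; F(u) - F(w) - \nabla F(w) \cdot (u - w).
\]

Choosing $F(w) = \tfrac{1}{2}\lVert w \rVert_2^2$ yields the squared Euclidean distance, the divergence underlying gradient-descent-style updates, while $F(w) = \sum_i w_i \ln w_i$ yields the relative entropy, the divergence underlying exponentiated-gradient-style updates. Matching the divergence to the structure of the target hypothesis is one concrete way in which one bias can be more appropriate than another.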

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Application #: 9821087
Program Officer: David Du
Budget Start: 1999-09-01
Budget End: 2003-08-31
Fiscal Year: 1998
Total Cost: $300,237
Name: University of California Santa Cruz
City: Santa Cruz
State: CA
Country: United States
Zip Code: 95064