It can be difficult to tell which learning algorithm is best suited to a particular situation. The bias of many on-line learning algorithms can be encoded with a "distance function" on the hypothesis space, and (for simple problems) an amortized analysis can be used to provide relative performance guarantees, which bound the algorithm's total loss in terms of the loss of the best hypothesis in hindsight (analogous to the competitive ratios for k-server problems). These guarantees can be used to indicate when one bias is more appropriate than another, allowing an intelligent choice of learning algorithm.
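For concreteness, such a relative guarantee typically takes the following schematic form; the symbols here are illustrative and not drawn from this summary:

\[
\sum_{t=1}^{T} L(y_t, \hat{y}_t) \;\le\; a \sum_{t=1}^{T} L(y_t, u \cdot x_t) \;+\; b \, d(u, w_1)
\]

The left-hand side is the algorithm's total loss over T trials, u is any comparator hypothesis, d is the distance function encoding the algorithm's bias, w_1 is the initial hypothesis, and a and b are constants produced by the amortized analysis. When d(u, w_1) is small for the hypotheses that actually fit the data, the bias, and hence the algorithm, is well matched to the problem.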
This project explores the connection between these distance functions on the hypothesis space and the Bregman divergences used in convex optimization. By adapting the tools developed for Bregman divergences, the investigators will improve and generalize these performance guarantees. This will result in more accurate predictions of how well various learning algorithms will perform on a wider variety of problems. Furthermore, the improved understanding will enable the derivation of new learning algorithms tailored to specific situations.
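As a concrete illustration (standard background, not a description of the project's own methods): for a strictly convex, differentiable function F, the Bregman divergence is \Delta_F(u, w) = F(u) - F(w) - \nabla F(w) \cdot (u - w). Taking F to be half the squared Euclidean norm yields gradient-descent (Widrow-Hoff) style updates, while taking F to be the negative entropy yields exponentiated-gradient style updates. The sketch below, in Python with NumPy, shows the two resulting update rules side by side; the square-loss setting, the learning rate eta, and the function names are assumptions made for this example.

# A minimal sketch (not from the project summary) of how the choice of
# divergence shapes an on-line learning algorithm.  Two classical special
# cases: the squared Euclidean distance, which leads to gradient-descent
# (Widrow-Hoff) updates, and the relative entropy, which leads to
# exponentiated-gradient updates.  Square loss and eta are illustrative.

import numpy as np

def gd_update(w, x, y, eta=0.1):
    """Gradient-descent update: trades off squared Euclidean distance to
    the old hypothesis against the square loss on the current example."""
    y_hat = w @ x
    return w - eta * (y_hat - y) * x

def eg_update(w, x, y, eta=0.1):
    """Exponentiated-gradient update: trades off relative entropy to the
    old hypothesis against the loss; keeps w a probability vector."""
    y_hat = w @ x
    w_new = w * np.exp(-eta * (y_hat - y) * x)
    return w_new / w_new.sum()

# Example: one on-line trial with a 3-dimensional instance.
w_gd = np.zeros(3)
w_eg = np.full(3, 1.0 / 3)            # EG starts from the uniform distribution
x, y = np.array([1.0, 0.5, -0.5]), 1.0
w_gd = gd_update(w_gd, x, y)
w_eg = eg_update(w_eg, x, y)

Which update performs better depends on the target: roughly, the Euclidean bias favors dense targets while the entropic bias favors sparse ones, and this is exactly the kind of distinction the improved guarantees are intended to quantify.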