Ensemble methods are general machine learning techniques for combining several hypotheses into a more accurate predictor. In the batch learning setting, bagging, boosting, stacking, error-correcting output codes, Bayesian averaging, and other averaging schemes are common instances of these methods. They often significantly improve performance in practice and frequently benefit from favorable learning guarantees, typically expressed in terms of the margins of the training samples.
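The basic idea of combining hypotheses can be sketched with a simple majority-vote ensemble. This is a minimal illustration only, not one of the proposal's algorithms; the base hypotheses `h1`–`h3` are toy examples invented here:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Predict the label chosen by most of the base classifiers."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Toy base hypotheses (assumed for illustration): simple threshold
# classifiers on integer inputs, each weak on its own.
h1 = lambda x: 1 if x > 0 else -1
h2 = lambda x: 1 if x > -2 else -1
h3 = lambda x: 1 if x > 2 else -1

print(majority_vote([h1, h2, h3], 1))  # two of the three vote 1
```

Bagging, boosting, and the other schemes above differ in how the base hypotheses are trained and weighted, but all reduce to some such combination rule at prediction time.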

However, ensemble methods and their theory have been developed primarily for binary classification, or for standard regression tasks where the target labels are real numbers and thus carry no structure. These techniques do not readily apply to structured prediction problems such as pronunciation modeling, speech recognition, parsing, machine translation, or image processing. The objective of this proposal is to develop the theoretical foundations, large-scale algorithms, and practical techniques for devising effective ensembles of structured prediction models. The benefits of these algorithms are likely to be at least as significant as those achieved by ensemble techniques in binary classification.

Our solutions will be crucial to a broad set of applications and will be made widely accessible through open-source software, making our learning algorithms available to a broad community of researchers and engineers. More broadly, our techniques will benefit society through significantly more accurate solutions to a variety of important problems, including speech recognition, speech synthesis, and machine translation.

New York University, New York, United States