The goal of this proposal is to extend ongoing work on learning with structured output spaces in the support-vector-machine (SVM) framework. Such structured output spaces arise in problems where the prediction is not a univariate response (e.g., yes/no), but a structured object (e.g., a sequence, tree, or alignment). While recent work has shown how to discriminatively learn prediction rules for simple structures with limited interdependencies, research is needed to extend these methods to the complex structures required by many applications (e.g., machine translation). This project aims to extend the structural SVM framework to such complex structures. Specifically, it focuses on achieving the required gains in computational efficiency, supporting broader classes of loss functions, and using unlabeled data to improve statistical efficiency. As in prior work, the project plans to make software implementations of the methods it develops available. These will be made sufficiently robust and efficient to be suitable for real-world applications outside the machine learning research community, as well as for classroom teaching. The project will apply its results to two high-impact areas: protein structure prediction and machine translation.
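For reference, the structural SVM framework that this proposal extends is commonly stated as the following (margin-rescaling) quadratic program; this is a minimal sketch using the conventional notation of a joint feature map Psi, a structured loss Delta, and a regularization constant C, none of which are defined in the summary itself:

\begin{align*}
\min_{\mathbf{w},\, \boldsymbol{\xi} \ge 0} \quad & \frac{1}{2}\|\mathbf{w}\|^2 + \frac{C}{n}\sum_{i=1}^{n} \xi_i \\
\text{s.t.} \quad & \mathbf{w}^\top \Psi(x_i, y_i) - \mathbf{w}^\top \Psi(x_i, y) \;\ge\; \Delta(y_i, y) - \xi_i
\qquad \forall i,\ \forall y \in \mathcal{Y} \setminus \{y_i\}
\end{align*}

The number of constraints grows with the size of the output space, which is why the computational-efficiency and loss-function questions raised above become central for complex structures.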