The aim of this research is to advance the ability of a search engine to understand a user's query for multimedia data, to speed up the machine-learning algorithms that comprehend a query, and to index high-dimensional imagery data so that found data can be matched quickly to a query concept. This study comprises three thrusts: multimodal active learning, scalable kernel machines, and kernel indexing. The first thrust explores ways to profile the complexity of a query concept and ways a concept can be learned from image context, image content, text, and camera parameters. The second thrust investigates approximate factorization algorithms and parallel algorithms to speed up kernel machines such as Support Vector Machines (SVMs) and kernel PCA. The third thrust devises indexing algorithms that work with kernel methods in a potentially infinite-dimensional space. Together, these three integrated research thrusts provide a solid foundation for building large-scale, next-generation multimedia information retrieval systems. Speeding up kernel methods in both training and indexing is critical for making learning feasible in real time and at large scale. The broader impacts of this work are expected to be significant because a variety of applications depend on high-performance kernel methods to scale to larger databases. The expected results of this research include a faster version of SVMs, a kernel-indexing algorithm, and a large-scale image-sharing and image-search engine. These results will be disseminated as open-source software and Web services through the project Web site (www.mmdb.ece.ucsb.edu/~echang/IIS-0535085.html).
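To make the scalability bottleneck concrete: kernel machines such as SVMs and kernel PCA operate on an n-by-n Gram matrix of pairwise kernel values, so exact training requires on the order of n² kernel evaluations. This is the cost that approximate factorization and parallelization aim to reduce. The sketch below is purely illustrative and not taken from the proposal; the `rbf` and `gram_matrix` helpers are hypothetical names.

```python
import math

def rbf(x, y, gamma=0.5):
    # RBF (Gaussian) kernel: exp(-gamma * ||x - y||^2).
    # Implicitly maps points into an infinite-dimensional feature space.
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def gram_matrix(points, gamma=0.5):
    # Full Gram matrix: n^2 kernel evaluations for n points.
    # For large image databases this quadratic cost is prohibitive,
    # motivating low-rank approximate factorizations of K.
    return [[rbf(p, q, gamma) for q in points] for p in points]

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
K = gram_matrix(points)
```

The Gram matrix is symmetric with a unit diagonal, so an approximate factorization K ≈ G Gᵀ with a tall, skinny G can cut both storage and training cost while preserving the kernel geometry.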