Automatic annotation that incorporates expert input remains a challenging problem. The team plans to use a mixture of methods from visual perception research, user-centered design, and content-based image analysis. The initial application area will be the categorization of biomedical images, which have potential for research and teaching as well as clinical diagnosis. A closely integrated set of visual data from users, who will also provide verbal data for the production of annotations, should inform both image classification schemes and database organization. The project has two major aims: to combine visual perception research and user-centered design to identify features and develop appropriate metadata; and to design and build a usable, multimodal, gaze-aware content-based image retrieval system for interactive search and retrieval of multiply-classified images. Evaluation will include both retrieval performance and system usability metrics, measured for experts and novices alike. Graduate students will participate in the research throughout. Outcomes will be disseminated through journals and conferences, and data will be made available on a shared website.
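As a point of reference for the second aim, the core step of a content-based image retrieval system can be sketched as nearest-neighbor search over image feature vectors. The Python sketch below is purely illustrative and assumes nothing about the project's actual design: the feature extractor is stubbed with random vectors, and the collection, labels, and parameters are hypothetical stand-ins.

```python
import numpy as np

# Minimal sketch of content-based retrieval: images are represented as
# feature vectors, and a query is answered by ranking the collection by
# cosine similarity. The feature extractor is a placeholder; a real system
# would use perceptually motivated descriptors informed by user data.

rng = np.random.default_rng(0)

def extract_features(image_id: int, dim: int = 128) -> np.ndarray:
    """Placeholder (hypothetical) feature extractor; returns a unit vector."""
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Index a small collection in which each image carries multiple class
# labels, mirroring the multiply-classified images the abstract describes.
# The labels here are invented for illustration only.
collection = {
    image_id: {
        "features": extract_features(image_id),
        "labels": {"histology", "teaching"} if image_id % 2 else {"radiology"},
    }
    for image_id in range(100)
}

def retrieve(query: np.ndarray, k: int = 5) -> list[tuple[int, float]]:
    """Return the k most similar images by cosine similarity (unit vectors)."""
    scores = [
        (image_id, float(query @ entry["features"]))
        for image_id, entry in collection.items()
    ]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:k]

# Example query: reuse one indexed image's features as the query vector.
for image_id, score in retrieve(collection[7]["features"]):
    print(image_id, round(score, 3), sorted(collection[image_id]["labels"]))
```

In a system of the kind proposed, the query vector could be derived not only from image content but also from multimodal signals such as gaze data, and the returned labels would feed the retrieval-performance metrics used in evaluation.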