Automatic annotation that uses expert input remains a challenging problem. The team plans to use a mixture of methods from visual perception research, user-centered design, and image-content-based analysis. The initial application field will be the categorization of biomedical images, which have potential for research and teaching as well as clinical diagnosis. A closely integrated set of visual data from users, who also provide verbal data for the production of annotations, should inform both image classification schemes and database organization. The project has two major aims: to combine visual perception research and user-centered design for feature identification and the development of appropriate metadata; and to design and build a usable, multimodal, gaze-aware content-based image retrieval system for interactive search and retrieval of multiply-classified images. Evaluation will include both retrieval performance and system usability metrics for both experts and novices. Graduate students will be an ongoing part of the research. Outcomes will be disseminated through journals and conferences, and data will be made available on a shared website.
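For readers unfamiliar with content-based image retrieval, the following is a minimal illustrative sketch of the basic idea (feature extraction plus similarity search), not the proposed system: it assumes simple color-histogram features and cosine similarity, and does not model the gaze data, verbal annotations, or expert-derived metadata that the project would integrate.

```python
# Minimal CBIR sketch: describe each image by a feature vector, then rank a
# database by similarity to a query image. All names here are illustrative.
import numpy as np

def color_histogram(image, bins=8):
    """Flattened per-channel histogram as an (assumed) feature vector."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
            for c in range(image.shape[-1])]
    vec = np.concatenate(hist).astype(float)
    return vec / (vec.sum() + 1e-9)   # normalize so differently sized images compare

def retrieve(query_vec, database_vecs, k=3):
    """Return indices of the k database images most similar to the query (cosine)."""
    db = np.asarray(database_vecs)
    sims = db @ query_vec / (np.linalg.norm(db, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(-sims)[:k]

# Toy usage with random arrays standing in for a biomedical image collection.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(10)]
features = [color_histogram(img) for img in images]
query = color_histogram(images[4])
print(retrieve(query, features, k=3))  # image 4 should rank first as its own best match
```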

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0941452
Program Officer: Sylvia J. Spengler
Project Start:
Project End:
Budget Start: 2009-09-15
Budget End: 2013-08-31
Support Year:
Fiscal Year: 2009
Total Cost: $520,356
Indirect Cost:
Name: Rochester Institute of Tech
Department:
Type:
DUNS #:
City: Rochester
State: NY
Country: United States
Zip Code: 14623