The search for relevant and actionable information is key to achieving clinical and research goals in biomedicine. Biomedical information exists in different forms: as text and illustrations in journal articles and other documents, as images stored in databases, and as patient cases in electronic health records (EHRs). In the context of this work, an image includes not only biomedical images but also illustrations, charts, graphs, and other visual material appearing in biomedical journals, electronic health records, and other relevant databases. The project objectives may be formulated as seeking better ways to retrieve information from these sources by moving beyond conventional text-based searching to combining text and visual features in search queries. The approaches to meeting these objectives draw on techniques and tools from the fields of Information Retrieval (IR), Content-Based Image Retrieval (CBIR), and Natural Language Processing (NLP).

The first objective is to improve the retrieval of biomedical literature by targeting the visual content in articles, a rich source of information not typically exploited by conventional bibliographic or full-text databases. We index these figures (including illustrations and images) using (i) the text of captions and of the passages in the article body where the figures are discussed (mentions); (ii) image features such as color, shape, and size; and, if available, (iii) annotation markers within figures, such as arrows, letters, or symbols, which are extracted from the image and correlated with concepts in the caption. These annotation markers can help isolate regions of interest (ROIs) in images, and the ROIs in turn improve the relevance of the figures retrieved. We hypothesize that augmenting conventional search results with relevant images offers a richer search experience. Taking the retrieval of biomedical literature a step further, within the first objective our goal is to find information relevant to a patient's case in the literature and then link it to the patient's health record. The case is first represented in structured form using both text and image features, and then literature and EHR databases are searched for similar cases.

A second objective is to find semantically similar images in image databases, an important step in differential diagnosis. We explore approaches that automatically combine image and text features, in contrast to visual decision-support systems (for example, VisualDx) that rely on text-driven menus alone. Such menu-driven systems guide a physician through describing a patient, then present a set of images from which the clinician can select those most similar to the patient's and access relevant information manually linked to them.

To achieve these objectives, our methods use text and image features extracted from the relevant components of a document, database, or case description. For the document retrieval task, we rely on a search engine developed at the U.S. National Library of Medicine (NLM): a phrase-based engine with term- and concept-level query expansion based on NLM's Unified Medical Language System (UMLS) and probabilistic relevance ranking that exploits document structure. To take advantage of these capabilities, we create structured representations of every full-text document and all of its figures.
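To make this concrete, the sketch below shows one way a figure record in such a structured representation might look. It is a minimal illustration of the three kinds of indexing evidence described above; the field names and example values are assumptions for illustration, not the project's actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FigureRecord:
    """Hypothetical structured record for one figure in a full-text article."""
    figure_id: str
    caption: str                                           # (i) caption text
    mentions: List[str] = field(default_factory=list)      # (i) body-text sentences discussing the figure
    visual_features: Dict[str, object] = field(default_factory=dict)  # (ii) e.g., color/shape descriptors
    roi_markers: List[str] = field(default_factory=list)   # (iii) arrows/letters tied to caption concepts

record = FigureRecord(
    figure_id="PMC123456-F2",                              # illustrative identifier
    caption="Axial CT shows a lesion (arrow) in the right lobe.",
    mentions=["Figure 2 demonstrates the lesion before treatment."],
    visual_features={"dominant_color": "gray", "edge_histogram": [0.12, 0.30, 0.08]},
    roi_markers=["arrow -> lesion"],
)
```

Records of this kind can be indexed field by field, so that a query may weight caption matches, mention matches, and visual-feature matches differently.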
These structured documents, presented to the user as search results, include the typical fields found in MEDLINE citations (e.g., titles, abstracts, and MeSH terms), the figures from the original documents, and image-specific fields extracted from the original documents (such as captions segmented into the parts pertaining to each pane of a multi-panel image, the ROIs described in each caption, and the modality of each image). In addition, patient-oriented outcomes extracted from the abstracts are provided to the user.

Automatic image annotation and retrieval can be approached in three ways: (i) using image analysis alone; (ii) indexing the text assigned to images; and (iii) combining image and text analysis. One approach is to compute image similarity, the traditional CBIR task of finding images that are overall visually similar to a query image, using machine-learning classifiers (e.g., Support Vector Machines) and fusion of class probabilities, as illustrated in the sketch below. These classifiers are trained on a variety of image features, such as wavelets, edge histograms, and those recommended by the MPEG-7 committee. Additional steps include describing an image by automatically detecting its modality (for example, CT, MR, X-ray, or ultrasound) and generating a visual ontology, i.e., concepts assigned to image patches. Elements of the visual ontology are called visual keywords and are used to find images with similar concepts.

To evaluate and demonstrate our techniques, we have developed the Image and Text Search Engine (ITSE), a hybrid system combining phrase-based searching with CEB's image similarity engine. Using this framework, we explore alternative approaches to searching for information with a combination of visual and text features: (i) starting with a text-based search of an image database and refining the search using image features; (ii) starting with a visual search using the (clinical) image of a given patient and then linking the image to relevant information found using visual and text features; and (iii) merging the results of independent text and image searches. In an international evaluation, our approaches ranked first in two of three categories (image retrieval using only visual features, and medical case retrieval) and in the top four for ad hoc biomedical information retrieval, among more than a dozen teams from around the world, including several from industry.
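As a concrete illustration of the classifier-fusion idea, the sketch below trains one probabilistic SVM per feature type (stand-ins for edge-histogram and wavelet features), averages their class probabilities, and ranks database images by how close their fused probability vectors lie to the query's. Everything here (the synthetic data, feature dimensions, and simple averaging rule) is an assumption for illustration, not the system's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_classes = 200, 4                    # e.g., CT, MR, X-ray, ultrasound
edge_feats = rng.random((n_train, 80))         # stand-in for edge-histogram features
wavelet_feats = rng.random((n_train, 64))      # stand-in for wavelet features
labels = rng.integers(0, n_classes, n_train)   # stand-in modality labels

# One probabilistic SVM per feature type.
svm_edge = SVC(probability=True).fit(edge_feats, labels)
svm_wave = SVC(probability=True).fit(wavelet_feats, labels)

def fused_probabilities(edge_vec, wave_vec):
    """Late fusion: average the class-probability vectors of the two SVMs."""
    p_edge = svm_edge.predict_proba(edge_vec.reshape(1, -1))
    p_wave = svm_wave.predict_proba(wave_vec.reshape(1, -1))
    return (p_edge + p_wave) / 2.0

# Rank database images by distance to the query in class-probability space.
query_p = fused_probabilities(rng.random(80), rng.random(64))
db_p = np.vstack([fused_probabilities(rng.random(80), rng.random(64))
                  for _ in range(10)])
ranking = np.argsort(np.linalg.norm(db_p - query_p, axis=1))
print("Database images, most similar first:", ranking)
```

The same fused probability vectors could also serve as the image-side score when merging the results of independent text and image searches.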
Simpson, Matthew S; You, Daekeun; Rahman, Md Mahmudur et al. (2015) Literature-based biomedical image classification and retrieval. Comput Med Imaging Graph 39:3-13
Demner-Fushman, Dina; Antani, Sameer; Kalpathy-Cramer, Jayashree et al. (2015) A decade of community-wide efforts in advancing medical image understanding and retrieval. Comput Med Imaging Graph 39:1-2
Demner-Fushman, Dina; Kohli, Marc D; Rosenman, Marc B et al. (2015) Preparing a collection of radiology examinations for distribution and retrieval. J Am Med Inform Assoc
Kalpathy-Cramer, Jayashree; de Herrera, Alba García Seco; Demner-Fushman, Dina et al. (2015) Evaluating performance of biomedical image retrieval systems--an overview of the medical image retrieval task at ImageCLEF 2004-2013. Comput Med Imaging Graph 39:55-61
Rahman, Md Mahmudur; Antani, Sameer K; Demner-Fushman, Dina et al. (2015) Biomedical image representation approach using visualness and spatial information in a concept feature space for interactive region-of-interest-based retrieval. J Med Imaging (Bellingham) 2:046502
Rahman, Md Mahmudur; Antani, Sameer K; Thoma, George R (2011) A learning-based similarity fusion and filtering approach for biomedical image retrieval using SVM classification and relevance feedback. IEEE Trans Inf Technol Biomed 15:640-6
Stanley, R Joe; De, Soumya; Demner-Fushman, Dina et al. (2011) An image feature-based approach to automatically find images for application to clinical decision support. Comput Med Imaging Graph 35:365-72
Simpson, Matthew S; Demner-Fushman, Dina; Thoma, George R (2010) Evaluating the Importance of Image-related Text for Ad-hoc and Case-based Biomedical Article Retrieval. AMIA Annu Symp Proc 2010:752-6
Demner-Fushman, Dina; Antani, Sameer; Simpson, Matthew et al. (2009) Annotation and retrieval of clinically relevant images. Int J Med Inform 78:e59-67