Much of the clinical information required for accurate clinical research, active decision support, and broad-coverage surveillance is locked in free-text documents in electronic medical records (EMRs). The only feasible way to leverage this information for translational science is to extract and encode it using natural language processing (NLP). Over the last two decades, several research groups have developed NLP tools for clinical notes, but a major bottleneck preventing progress in clinical NLP is the lack of standard, annotated data sets for training and evaluating NLP applications. Without these standards, individual NLP applications proliferate with no way to train different algorithms on standard annotations, share and integrate NLP modules, or compare performance. We propose to develop standards and infrastructure that will enable technology to extract scientific information from textual medical records, and we propose the research as a collaborative effort involving NLP experts across the U.S. To accomplish this goal, we will address three specific aims:
Aim 1: Extend existing standards and develop new consensus standards for annotating clinical text in a way that is interoperable, extensible, and usable.
Aim 2: Apply existing methods and tools, and develop new methods and tools where necessary, for manually annotating a set of publicly available clinical texts in a way that is efficient and accurate.
Aim 3: Develop a publicly available toolkit for automatically annotating clinical text, and assess the toolkit in a shared evaluation with the community, using evaluation metrics that are multidimensional and flexible (a sketch of such span-level metrics follows the aims).
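To make the evaluation criterion in Aim 3 concrete, the following is a minimal sketch of how span-level precision, recall, and F1 might be computed under both exact-match and partial-match criteria. The Span representation, function names, and matching rules here are illustrative assumptions for exposition, not the project's actual toolkit or metrics.

```python
# Hypothetical sketch of span-level evaluation, assuming annotations are
# represented as (start, end, label) tuples over de-identified report text.
# This is not the project's toolkit; it only illustrates how
# "multidimensional" metrics (exact vs. partial match) might be computed.

from typing import List, Tuple

Span = Tuple[int, int, str]  # (start offset, end offset, semantic label)

def overlaps(a: Span, b: Span) -> bool:
    """True if two spans carry the same label and have overlapping offsets."""
    return a[2] == b[2] and a[0] < b[1] and b[0] < a[1]

def prf(gold: List[Span], predicted: List[Span], exact: bool = True):
    """Precision, recall, and F1 under an exact- or partial-match criterion."""
    if exact:
        tp = len(set(gold) & set(predicted))
    else:
        tp = sum(any(overlaps(p, g) for g in gold) for p in predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: one exact match and one partially overlapping match.
gold = [(0, 9, "problem"), (15, 27, "treatment")]
pred = [(0, 9, "problem"), (17, 27, "treatment")]
print(prf(gold, pred, exact=True))   # (0.5, 0.5, 0.5)
print(prf(gold, pred, exact=False))  # (1.0, 1.0, 1.0)
```

Reporting both criteria is one way a single evaluation can remain flexible: exact match rewards precise boundary detection, while partial match credits systems that find the right concept with imperfect boundaries.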
In this project, we will develop a publicly available corpus of annotated clinical texts for NLP research. We will experiment with methods for increasing the efficiency of annotation and will annotate de-identified reports of nine types for linguistic and clinical information. In addition, we will create a shareable NLP toolkit and will evaluate it against other NLP systems in a shared-task evaluation with the community.
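As an illustration of what annotating de-identified reports for linguistic and clinical information might look like in practice, here is a hypothetical stand-off annotation record, in which annotations are stored separately from the report text so that multiple annotation layers can coexist over the same document. The schema, field names, and concept code are assumptions for exposition, not the consensus standard the project will define.

```python
# Illustrative stand-off annotation record. Keeping annotations separate
# from the de-identified report text lets linguistic layers (e.g.,
# part-of-speech tags) and clinical layers (e.g., concept codes) coexist.
# This schema is hypothetical, not the project's actual standard.

from dataclasses import dataclass

@dataclass
class Annotation:
    doc_id: str   # identifier of the de-identified report
    start: int    # character offset where the span begins
    end: int      # character offset where the span ends (exclusive)
    layer: str    # e.g., "part-of-speech" or "clinical-concept"
    value: str    # the assigned tag or concept code

report = "Patient denies chest pain."
ann = Annotation("report-001", 15, 25, "clinical-concept", "C0008031")
assert report[ann.start:ann.end] == "chest pain"
```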