Background: Failure to follow up abnormal test results is a significant safety concern in outpatient settings and often leads to patient harm and malpractice claims. Electronic health records (EHRs) can help ensure reliable delivery of abnormal test results, but they do not guarantee appropriate follow-up action. Our work in the Veterans Health Administration (VA) reveals that almost 8% of abnormal outpatient test results transmitted as EHR-based alerts lacked follow-up at 4 weeks. We subsequently found that follow-up of abnormal tests is influenced by a multitude of technological factors (software/hardware) and non-technological factors (user behaviors, workflow, information load, policies and procedures, training, and other organizational factors). Improving test result follow-up will require a better understanding of how follow-up processes fit within the complex "socio-technical" context of EHR-enabled health care. It is especially important to clarify how these contextual features influence the cognitive processes necessary to perceive, comprehend, and act on abnormal findings in a timely manner. Given that laboratory test result reporting is a component of Stage 2 meaningful use, further exploration of vulnerabilities in EHR-based test result follow-up is imperative.

Objectives/Methods: We propose to apply human factors-based frameworks to understand the system and cognitive vulnerabilities that affect EHR-based outpatient test result follow-up. To better define the context of clinical work that affects decision-making in this area, we will use a conceptual model that posits a set of eight socio-technical dimensions that must be considered in the real-world use of IT. Building on our prior work in the VA, our study settings include clinics affiliated with three non-VA institutions in order to improve generalizability.
In Aim 1, we will identify the cognitive factors that affect test result follow-up processes in EHR-based health systems. We will conduct record reviews to identify recent abnormal test results with and without timely follow-up and conduct cognitive task analysis interviews with the providers who ordered the tests. We will also assess the cognitive load of EHR-based alerts related to test results.
In Aim 2, we will characterize the nature of clinical work required for individuals and teams to respond appropriately to abnormal test results in EHR-enabled outpatient settings. To map these processes at each site, we will collect qualitative data using rapid assessment techniques (structured observations, brief surveys, and key informant interviews). Our interpretation of these data will include consideration of how different socio-technical factors (e.g. EHR design, workflow, and organizational factors) interact and affect the cognitive work of test result follow-up.
In Aim 3, we will conduct prospective risk assessments to characterize the particular work processes and features of the socio-technical context that are most vulnerable to failure within and across our study sites. This foundational work will lead to a better understanding of the "basic science" of missed test results and will clarify targets for future interventions to improve follow-up of abnormal test results in EHR-enabled outpatient settings.

Public Health Relevance

A significant number of patients with abnormal test results fall through the cracks of the health care system and experience delays in diagnosis and treatment. Although electronic health records enhance the communication of abnormal test results, they do not guarantee the prompt follow-up that is required for timely care. We propose to study test result follow-up practices across healthcare institutions that use various electronic health record systems to understand why abnormal test results are missed.

National Institutes of Health (NIH)
Agency for Healthcare Research and Quality (AHRQ)
Research Project (R01)
Study Section: Special Emphasis Panel (HSQR)
Program Officer: Chaney, Kevin J
Baylor College of Medicine
Internal Medicine/Medicine
Schools of Medicine
United States
Barbieri, Andrea Lynne; Fadare, Oluwole; Fan, Linda et al. (2018) Challenges in Communication from Referring Clinicians to Pathologists in the Electronic Health Record Era. J Pathol Inform 9:8
Bhise, Viraj; Sittig, Dean F; Vaghani, Viralkumar et al. (2018) An electronic trigger based on care escalation to identify preventable adverse events in hospitalised patients. BMJ Qual Saf 27:241-246
Rinke, Michael L; Singh, Hardeep; Heo, Moonseong et al. (2018) Diagnostic Errors in Primary Care Pediatrics: Project RedDE. Acad Pediatr 18:220-227
Millenson, Michael L; Baldwin, Jessica L; Zipperer, Lorri et al. (2018) Beyond Dr. Google: the evidence on consumer-facing digital tools for diagnosis. Diagnosis (Berl) 5:95-105
Kwan, Janice L; Singh, Hardeep (2017) Assigning responsibility to close the loop on radiology test results. Diagnosis (Berl) 4:173-177
Singh, Hardeep; Schiff, Gordon D; Graber, Mark L et al. (2017) The global burden of diagnostic errors in primary care. BMJ Qual Saf 26:484-494
Bhise, Viraj; Meyer, Ashley N D; Singh, Hardeep et al. (2017) Errors in Diagnosis of Spinal Epidural Abscesses in the Era of Electronic Health Records. Am J Med 130:975-981
Murphy, Daniel R; Meyer, Ashley N D; Vaghani, Viralkumar et al. (2017) Application of Electronic Algorithms to Improve Diagnostic Evaluation for Bladder Cancer. Appl Clin Inform 8:279-290
Meyer, Ashley N D; Singh, Hardeep (2017) Calibrating how doctors think and seek information to minimise errors in diagnosis. BMJ Qual Saf 26:436-438
Singh, Hardeep; Graber, Mark L; Hofer, Timothy P (2016) Measures to Improve Diagnostic Safety in Clinical Practice. J Patient Saf :

Showing the most recent 10 out of 39 publications