Background: VA performance monitoring makes extensive use of diagnosis-based quality measures that track delivery of care only among patients who have qualifying ICD-9 diagnosis codes. Diagnosis-based measures can be calculated using existing VA data, allowing for low-cost, near real-time performance monitoring. However, diagnosis-based measures can have critical validity problems if the targeted condition is under- or over-diagnosed to differing degrees across facilities. When variation in diagnosing and coding occurs, facility rankings on measured performance can be misleading: high-performing facilities can score poorly, low-performing facilities can score well, and facilities with the same true performance can fall at opposite ends of the facility rank distribution. Use of diagnosis-based process measures can therefore undermine one of the primary purposes of quality measurement: the comparison of facilities and systems. In addition, diagnosis-based measures cannot be used to detect gaps in access to care for patients who have a targeted condition but no qualifying diagnosis code. Finally, when diagnosis rates vary across patient subgroups, diagnosis-based measures cannot be used to detect and act on healthcare disparities. These problems could be remedied if true prevalence data were available: comparisons of performance based on diagnosis- versus prevalence-based measures would identify facilities with anomalous diagnosis rates and distinguish variation in true performance from variation in case-finding. However, for many conditions, the electronic health record (EHR) does not contain data on true prevalence.

Objectives: The goal of the proposed project is to develop a general method for improving diagnosis-based measures when valid prevalence data are not readily available. We propose to build a model for predicting prevalence using multiple sources of existing data and to validate it through a one-time collection of gold-standard outcome data (survey-based substance use disorder [SUD] prevalence). Leveraging existing data with targeted collection of model development and validation data is a cost-effective strategy to improve diagnosis-based measures without requiring ongoing, expensive disease surveillance. Focusing on SUD care as an example, the objectives of this study are to: (a) assess the degree of SUD under- or over-diagnosis by comparing the proportion of patients with coded SUD diagnoses in VA administrative data to SUD prevalence estimates obtained using a validated measure in a patient survey conducted at 30 VA healthcare systems; (b) refine and validate a model for predicting SUD prevalence among VA patients using multiple existing data sources; and (c) assess disparities in SUD diagnosis by comparing diagnosis rates to survey-based SUD prevalence estimates across patient age, sex, and racial/ethnic groups.

Methods: We will collect data on DSM-IV- and DSM-5-concordant SUD among VA patients using a validated instrument. We will conduct telephone interviews with patients at 30 VA healthcare systems selected based on geographic region and expected differences between observed SUD diagnosis rates and true SUD prevalence. We will compare observed diagnosis rates to survey-based prevalence estimates.
We will refine a prototype SUD prediction model using as inputs population SUD surveillance data for Veterans from the National Surveys on Drug Use and Health, EHR data from the VA Corporate Data Warehouse, and organizational survey data from the VA Drug and Alcohol Program Survey. The model will be developed and validated using survey-based SUD prevalence as the outcome. We will fit the model using both traditional statistical methods and more recent machine learning algorithms, and will select a final model based on established criteria for predictive validity. We will compute facility performance rankings using diagnosis rates versus predicted prevalence to assess the extent to which variation in measured performance may reflect variation in diagnosis or coding. Finally, we will assess possible disparities in diagnosis by comparing the gap between diagnosis rates and estimated prevalence across patient groups.
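To illustrate the ranking comparison described above, the following is a minimal sketch in Python. It assumes per-facility summary data in a pandas DataFrame; the column names, the example values, and the use of a Spearman rank correlation are illustrative assumptions for exposition, not the project's specified data schema or analysis plan.

```python
# Hypothetical sketch: compare facility rankings based on coded SUD diagnosis
# rates versus survey-based or model-predicted SUD prevalence.
# All names and values below are illustrative, not actual project data.
import pandas as pd
from scipy.stats import spearmanr

facilities = pd.DataFrame({
    "facility": ["A", "B", "C", "D"],
    "diagnosis_rate": [0.06, 0.09, 0.05, 0.11],        # share of patients with coded SUD
    "predicted_prevalence": [0.10, 0.09, 0.08, 0.12],  # model- or survey-based estimate
})

# Gap between coded diagnosis and estimated true prevalence
# (a negative gap suggests under-diagnosis at that facility).
facilities["diagnosis_gap"] = (
    facilities["diagnosis_rate"] - facilities["predicted_prevalence"]
)

# Rank facilities under each definition and compare the two rankings.
rank_by_dx = facilities["diagnosis_rate"].rank(ascending=False)
rank_by_prev = facilities["predicted_prevalence"].rank(ascending=False)
rho, p_value = spearmanr(rank_by_dx, rank_by_prev)

print(facilities)
print(f"Spearman correlation between rankings: {rho:.2f} (p = {p_value:.2f})")
```

A low rank correlation, or large facility-level diagnosis gaps, would suggest that variation in measured performance partly reflects variation in case-finding rather than in true care delivery.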

Public Health Relevance

Monitoring healthcare delivery in a system as large and complex as the VA is a challenge. VA performance monitoring makes extensive use of healthcare measures intended to track the delivery of care to all eligible patients who might benefit from it. However, many measures instead track care only to those patients who have qualifying ICD-9 codes in VA administrative data. When there are systematic errors in coded diagnoses, performance monitoring that relies on diagnosis-based measures may fail to identify facilities that need remediation or, conversely, may lead to misallocation of resources to remediate care in facilities that are actually performing well. Diagnosis-based measures also have a critical blind spot, making patients who have the targeted condition but no coded diagnosis essentially invisible to the performance monitoring system. We propose a method to improve existing diagnosis-based measures, which will in turn support VA efforts to ensure delivery of guideline-concordant care to all patients who may benefit from it.

Agency
National Institutes of Health (NIH)
Institute
Veterans Affairs (VA)
Type
Non-HHS Research Projects (I01)
Project #
1I01HX002128-01A1
Application #
9189567
Study Section
Healthcare Informatics Special Emphasis Panel (HS3A)
Project Start
2017-01-01
Project End
2019-12-31
Budget Start
2017-01-01
Budget End
2017-12-31
Support Year
1
Fiscal Year
2018
Total Cost
Indirect Cost
Name
VA Greater Los Angeles Healthcare System
Department
Type
DUNS #
066689118
City
Los Angeles
State
CA
Country
United States
Zip Code
90073