Prompted by surging interest in improving the quality of inpatient care, public reporting of hospital outcomes has grown explosively in recent years. A crucial element that has gone largely unexamined is the myriad of methodological variations and reporting metrics that underlie the reported outcomes. Our study objectives are to a) draw attention to this important issue by performing side-by-side comparisons of the most commonly used statistical methods and metrics, and b) conduct a scientifically rigorous evaluation of the alternative methods and metrics using simulated data that mimic real-world data. Because we will use actual administrative discharge data - from Massachusetts (MA) and California (CA) - to examine outcomes of current public interest - inpatient/30-day mortality and 30-day readmissions for acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN) - the results of this study will be of direct and immediate relevance to a number of important ongoing public reporting initiatives. These include the Inpatient Quality Indicators (IQI) from the Agency for Healthcare Research & Quality (AHRQ) and Hospital Compare reporting from the Centers for Medicare & Medicaid Services (CMS) and the Hospital Quality Alliance (HQA). To demonstrate the potential differences in hospital profiling arising from different methods and metrics, we applied the methods used to obtain the IQIs and Hospital Compare results to a common data set (all AMI discharges from MA, 2004-2008) and a common outcome (inpatient mortality). Of the 14 hospitals ranked in the bottom quartile (highest risk-adjusted inpatient mortality), only 7 were common to both methods. Grouping all 57 hospitals into three categories - top quartile, bottom quartile, and interquartile - the overall concordance rate was a modest 58 percent (kappa = 0.33; p < 0.001).
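The concordance analysis above can be sketched in code. The following is a minimal illustration, not the study's actual analysis: the risk-adjusted mortality estimates below are hypothetical, and only the category scheme (top quartile, bottom quartile, interquartile) and the use of unweighted Cohen's kappa follow the text.

```python
# Sketch: comparing hospital quartile classifications from two profiling
# methods using Cohen's kappa. All numbers below are hypothetical.
from collections import Counter


def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)


def quartile_labels(scores):
    """Label each hospital top/interquartile/bottom by its score quartile."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    q = len(scores) // 4
    labels = ["interquartile"] * len(scores)
    for i in ranked[:q]:
        labels[i] = "top"      # lowest risk-adjusted mortality
    for i in ranked[-q:]:
        labels[i] = "bottom"   # highest risk-adjusted mortality
    return labels


# Hypothetical risk-adjusted mortality estimates from two methods
method_a = [0.05, 0.08, 0.03, 0.12, 0.07, 0.09, 0.04, 0.11]
method_b = [0.06, 0.07, 0.04, 0.10, 0.12, 0.08, 0.05, 0.09]
kappa = cohens_kappa(quartile_labels(method_a), quartile_labels(method_b))
```

A kappa near 0.33, as observed in the real comparison, indicates only fair agreement beyond chance between the two classification schemes.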
In the absence of a reference "gold standard" it is impossible to determine which set of results is more accurate or reliable. In the proposed study we will develop simulated data sets that mimic real-world data and perform side-by-side comparisons of the different methods and metrics commonly used today. With simulated data, "true" hospital quality is predetermined by design, and therefore serves as the reference standard for estimating the accuracy and reliability of the different methods. Because the performance of the methods may vary with data characteristics, we will also develop a series of simulated data sets, each aimed at isolating an important feature of real-world data, including hospital discharge volume (varying hospital volume while keeping all other characteristics constant), outcome event rate (5% vs. 25% incidence of the outcome), number of risk factors, and magnitude of the unobserved hospital effect.
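A data set of this kind can be sketched as follows. This is an illustrative assumption about the simulation design, not the study's actual generator: it assumes a logistic model with a Gaussian hospital random effect as the predetermined "true" quality, and the parameter names, coefficient values, and hospital counts are hypothetical.

```python
# Sketch: one simulated data set with known "true" hospital quality,
# assuming a logistic outcome model with hospital random effects.
# Parameter values are illustrative, not the study's design.
import math
import random

random.seed(0)


def simulate(n_hospitals=50, volume=200, base_rate=0.05, hosp_sd=0.3, n_risk=3):
    """Return per-patient records whose outcome is driven by a known hospital effect."""
    base_logit = math.log(base_rate / (1 - base_rate))
    # "True" hospital quality, fixed by design and usable as the reference standard
    true_effects = {h: random.gauss(0, hosp_sd) for h in range(n_hospitals)}
    records = []
    for h in range(n_hospitals):
        for _ in range(volume):
            x = [random.gauss(0, 1) for _ in range(n_risk)]  # patient risk factors
            logit = base_logit + true_effects[h] + 0.5 * sum(x)
            p = 1 / (1 + math.exp(-logit))
            records.append({"hospital": h, "risk": x,
                            "died": random.random() < p})
    return records, true_effects


data, truth = simulate(base_rate=0.05)    # low event-rate scenario
data25, _ = simulate(base_rate=0.25)      # high event-rate scenario
```

Varying one argument at a time (volume, base_rate, n_risk, hosp_sd) while holding the others fixed isolates each data characteristic, mirroring the design described above.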
Our specific aims are to 1) evaluate the accuracy and reliability of alternative statistical methods and metrics of hospital performance in simulated data, and 2) apply the higher-performing methods and metrics to obtain hospital profiles for the selected outcomes (inpatient mortality, 30-day mortality, and 30-day readmission) for the three admission cohorts (AMI, HF, and PN) from MA and CA.