Common but often overlooked threats to the validity of comparative effectiveness research (CER) studies include the misclassification or missingness of binary variables that are crucial to the ultimate analysis of the data. These variables potentially include the outcome of interest in standard or repeated-measures logistic regression models, the factor (exposure) of interest, or an important confounder of the association under study. This proposal seeks to facilitate the investigation of the resulting biases to which a given CER analysis may be subject, and to provide study design-based remedial measures through which validity can be restored.

The focus is on statistical methods for conducting sensitivity analyses, as well as methods designed to make efficient use of supplemental data sources. The latter include validation data (in the case of misclassification) and so-called reassessment data (in the case of potentially informative missingness). A primary consideration throughout is the incorporation of subject-specific covariates into the model of interest, as well as into models for the underlying misclassification or missingness process. A further goal is to establish a consistent likelihood-based framework for all proposed analyses incorporating supplemental data, and to provide user-friendly programs built on common statistical software so that the methods are broadly and readily accessible to those conducting CER.

While not limited to specific applications, the proposed research draws motivation from, and is illustrated through, two real-world studies. The first is the HIV Epidemiology Research Study (HERS), an observational cohort study in which the binary diagnosis of bacterial vaginosis was made at repeated visits via both error-prone and more sophisticated assay techniques. The second is an emergency department-based ophthalmologic study in which non-dilated ocular fundus photography will be used to diagnose serious ocular conditions and will be compared against existing standard diagnostic methods. Both studies involve internal validation data to facilitate corrections for misclassification based on a fallible diagnostic method, and both are subject to missing outcome and/or predictor data.
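To make the likelihood-based framework concrete, the sketch below fits a logistic regression model when the binary outcome is observed only through an error-prone surrogate in the main study, with an internal validation subsample in which the true outcome is also measured. This is a minimal illustration under assumptions not stated in the abstract, namely nondifferential outcome misclassification (sensitivity and specificity that do not depend on covariates) and a single binary exposure; all names (neg_loglik, se_true, and so on) are hypothetical, and the data are simulated rather than drawn from HERS or the ophthalmologic study.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Main study: only the error-prone outcome Y* is seen;
# validation subsample: both the true Y and Y* are seen.
n_main, n_val = 2000, 300
beta_true = np.array([-1.0, 0.8])   # intercept and exposure log odds ratio
se_true, sp_true = 0.85, 0.90      # sensitivity and specificity of the fallible test

def simulate(n):
    X = np.column_stack([np.ones(n), rng.binomial(1, 0.5, n)])  # intercept, binary exposure
    y = rng.binomial(1, expit(X @ beta_true))                   # true outcome
    ystar = np.where(y == 1, rng.binomial(1, se_true, n),       # Y* = 1 w.p. Se when Y = 1
                     rng.binomial(1, 1 - sp_true, n))           # Y* = 1 w.p. 1 - Sp when Y = 0
    return X, y, ystar

X_m, _, ystar_m = simulate(n_main)   # true Y discarded: unobserved in the main study
X_v, y_v, ystar_v = simulate(n_val)  # validation data retain both Y and Y*

def neg_loglik(theta):
    beta, se, sp = theta[:2], expit(theta[2]), expit(theta[3])  # logit scale keeps Se, Sp in (0, 1)
    # Main-study contribution: P(Y* = 1 | x) = Se * p(x) + (1 - Sp) * (1 - p(x)),
    # assuming the misclassification rates do not depend on covariates.
    p = expit(X_m @ beta)
    pstar = se * p + (1 - sp) * (1 - p)
    ll = np.sum(ystar_m * np.log(pstar) + (1 - ystar_m) * np.log1p(-pstar))
    # Validation contribution factors as P(Y* | Y) * P(Y | x).
    p_pos = np.where(y_v == 1, se, 1 - sp)  # P(Y* = 1 | Y)
    ll += np.sum(ystar_v * np.log(p_pos) + (1 - ystar_v) * np.log1p(-p_pos))
    pv = expit(X_v @ beta)
    ll += np.sum(y_v * np.log(pv) + (1 - y_v) * np.log1p(-pv))
    return -ll

fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("beta:", fit.x[:2], "Se:", expit(fit.x[2]), "Sp:", expit(fit.x[3]))

The same likelihood supports the proposed sensitivity analyses: when no validation data are available, the sensitivity and specificity terms can be fixed at a grid of assumed values rather than estimated, and the stability of the exposure effect examined across that grid.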
The goal of this project is to provide statistical methods that aid comparative effectiveness research (CER) investigators with common problems encountered in data analysis. These problems arise when binary ("yes/no") data are incorrectly measured (misclassified), or when they are sometimes unobserved (missing) for reasons that may be related to information about the subjects in the study. The intention is to provide CER investigators with methods that are relatively easy to use, yet effective and powerful, for combating these challenges to valid data analysis.