Medical diagnostic testing and disease screening are costly, both financially and in human terms. It is important that the statistical properties of a diagnostic or screening test be well characterized by research studies before the test is adopted for routine practice. Current statistical methods for evaluating diagnostic tests are limited. This proposal seeks to develop statistical methods that accommodate a wider range of study designs and address a wider range of research questions than is possible at present. We consider diagnostic tests whose results may be on a dichotomous (positive versus negative), ordinal, or continuous scale. Our proposal has four aims.
In Aim 1, we will develop methods for combining the results of several diagnostic tests in order to define a new, more accurate test.
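One standard approach to the problem in Aim 1 is to form a linear combination of several test results and choose the weights to maximize the empirical AUC (area under the ROC curve). The sketch below is purely illustrative, not the proposal's method: it uses simulated bivariate marker data and a simple grid search over combination weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_auc(scores, labels):
    """Mann-Whitney estimate of P(score_case > score_control); ties count one half."""
    cases = scores[labels == 1]
    controls = scores[labels == 0]
    diff = cases[:, None] - controls[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# simulated data: two markers, each weakly informative on its own
n = 500
labels = rng.integers(0, 2, n)
y1 = rng.normal(loc=0.8 * labels, scale=1.0)
y2 = rng.normal(loc=0.6 * labels, scale=1.0)

# grid search over the direction of the linear combination w1*y1 + w2*y2
best_auc, best_w = 0.0, None
for theta in np.linspace(0, np.pi, 181):
    w = np.array([np.cos(theta), np.sin(theta)])
    auc = empirical_auc(w[0] * y1 + w[1] * y2, labels)
    if auc > best_auc:
        best_auc, best_w = auc, w

print(f"AUC(marker 1 alone)   = {empirical_auc(y1, labels):.3f}")
print(f"AUC(best combination) = {best_auc:.3f}")
```

Because the grid includes theta = 0 (marker 1 alone), the combined score can never do worse than the better single marker on the training data; an honest evaluation would of course assess the combination on independent data.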
Aim 2 is concerned with the problem of evaluating diagnostic tests when no definitive gold-standard test exists against which a new test can be compared. In Aim 3, commonly used clinically relevant measures of test accuracy that are currently defined only for binary tests will be extended for use with continuous or ordinal tests.
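To make the extension in Aim 3 concrete: a clinically relevant measure such as the positive predictive value, PPV = P(D = 1 | test positive), can be viewed as a function of threshold for a continuous test, PPV(c) = P(D = 1 | Y >= c). The sketch below is an illustrative empirical version on simulated data, not the estimators the proposal will develop.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated continuous test: higher values are more common among cases
n = 1000
disease = rng.integers(0, 2, n)
test = rng.normal(loc=1.0 * disease, scale=1.0)

def ppv_curve(y, d, thresholds):
    """Empirical PPV(c) = proportion diseased among subjects with Y >= c."""
    return np.array([d[y >= c].mean() if (y >= c).any() else np.nan
                     for c in thresholds])

thresholds = np.quantile(test, [0.1, 0.25, 0.5, 0.75, 0.9])
ppv = ppv_curve(test, disease, thresholds)
for c, p in zip(thresholds, ppv):
    print(f"threshold {c:+.2f}: PPV = {p:.2f}")
```

Raising the threshold trades fewer test-positive subjects for a higher predictive value, which is exactly the trade-off a threshold-indexed accuracy curve makes visible for a continuous test.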
Our final aim is to further develop the regression framework for ROC (receiver operating characteristic) curves that was formulated during the past grant cycle of this project. This project has access to a wide variety of real datasets that will guide development of new statistical methodology. These include, for example, (a) data from a multicenter newborn hearing screening project; (b) longitudinal data on serum levels of prostate-specific antigen (PSA); (c) data from a mammography reading study; and (d) data on multiple laboratory tests for Chlamydia trachomatis.
In most cases, our aims will require development of large-sample distribution theory, small-sample simulation studies, and application to real data. Software to implement the analyses will use standard statistical packages when possible and will be fully documented so that it can be shared with colleagues.
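As a point of reference for the ROC framework underlying the final aim, the empirical ROC curve of a continuous test plots the true-positive rate against the false-positive rate over all observed thresholds, and the area under it estimates the test's discriminatory accuracy. The following minimal sketch uses simulated data and is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

def empirical_roc(y, d):
    """Empirical (FPR, TPR) pairs over all observed thresholds, high to low."""
    thresholds = np.unique(y)[::-1]
    tpr = np.array([(y[d == 1] >= c).mean() for c in thresholds])
    fpr = np.array([(y[d == 0] >= c).mean() for c in thresholds])
    # anchor the curve at the (0, 0) corner
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

# simulated continuous test, shifted upward among cases
n = 400
d = rng.integers(0, 2, n)
y = rng.normal(loc=1.2 * d, scale=1.0)

fpr, tpr = empirical_roc(y, d)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoid rule
print(f"empirical AUC = {auc:.3f}")
```

An ROC regression framework goes beyond this single curve by modeling how the curve varies with covariates such as disease severity or test-operator characteristics.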