Statistical methods for evaluating diagnostic tests and biomarkers lag far behind existing methods for evaluating therapeutic drugs and etiologic epidemiology studies. This is a revised competitive renewal of a grant that in previous cycles has forged new fundamental methodology aimed at bringing biomarker evaluation to a level comparable with other areas of research. The 5-phase paradigm for biomarker development (Pepe et al., JNCI 2001) and the ROC-GLM regression framework (Pepe, Biometrika 1997) are examples. In the next grant cycle we propose to again tackle basic issues in biomarker study design and analysis that have never before been addressed. These include:
in Aim 2 (a), evaluating the implications and advantages of matching cases to controls with regard to covariates, and selecting the optimal case-control ratio in designing a matched biomarker study;
in Aim 2 (b), developing methods for estimating the ROC derivative, with implications for making inference in data analysis and for sample size and case-control ratio calculations in study design. We propose a conceptually new and intuitive approach to biomarker evaluation in Aim 3 that uses the controls simply as a reference distribution for standardizing marker values. Traditionally difficult problems in biomarker evaluation, such as covariate adjustment, assessing equivalence of markers, and handling event-time outcome data, are easily handled in this conceptual framework. A graphical display that describes the distribution of risk in the population is proposed in Aim 4 for the evaluation of risk prediction markers and models. This provides clinically meaningful descriptions of predictiveness that have not been attributes of standard predictiveness measures. Applications in cancer biomarker development provide a context for our research. Data from the Early Detection Research Network and from several large cohort studies, including the Physicians' Health Study and the Prostate Cancer Prevention Trial, will be analyzed. Relative to our previous submission, Aims 1 and 5 have been eliminated while Aims 2, 3, and 4 have been elaborated upon and expanded.

Agency
National Institutes of Health (NIH)
Institute
National Institute of General Medical Sciences (NIGMS)
Type
Research Project (R01)
Project #
2R01GM054438-11A1
Application #
7265055
Study Section
Biostatistical Methods and Research Design Study Section (BMRD)
Program Officer
Remington, Karin A
Project Start
1996-05-01
Project End
2011-04-30
Budget Start
2007-05-10
Budget End
2008-04-30
Support Year
11
Fiscal Year
2007
Total Cost
$430,628
Indirect Cost
Name
Fred Hutchinson Cancer Research Center
Department
Type
DUNS #
078200995
City
Seattle
State
WA
Country
United States
Zip Code
98109
Kerr, Kathleen F; Brown, Marshall; Janes, Holly (2017) Reply to A.J. Vickers et al. J Clin Oncol 35:473-475
Kim, Soyoung; Huang, Ying (2017) Combining biomarkers for classification with covariate adjustment. Stat Med 36:2347-2362
Kerr, Kathleen F; Brown, Marshall D; Zhu, Kehao et al. (2016) Assessing the Clinical Impact of Risk Prediction Models With Decision Curves: Guidance for Correct Interpretation and Appropriate Use. J Clin Oncol 34:2534-40
Fong, Youyi; Yin, Shuxin; Huang, Ying (2016) Combining biomarkers linearly and nonlinearly for classification using the area under the ROC curve. Stat Med 35:3792-809
Pepe, Margaret S; Janes, Holly; Li, Christopher I et al. (2016) Early-Phase Studies of Biomarkers: What Target Sensitivity and Specificity Values Might Confer Clinical Utility? Clin Chem 62:737-42
Huang, Ying (2016) Evaluating and comparing biomarkers with respect to the area under the receiver operating characteristics curve in two-phase case-control studies. Biostatistics 17:499-522
Huang, Ying; Laber, Eric (2016) Personalized Evaluation of Biomarker Value: A Cost-Benefit Perspective. Stat Biosci 8:43-65
Huang, Ying; Laber, Eric B; Janes, Holly (2015) Characterizing expected benefits of biomarkers in treatment selection. Biostatistics 16:383-99
Pepe, Margaret Sullivan (2015) Response. J Natl Cancer Inst 107:356
Pepe, Margaret S; Fan, Jing; Feng, Ziding et al. (2015) The Net Reclassification Index (NRI): a Misleading Measure of Prediction Improvement Even with Independent Test Data Sets. Stat Biosci 7:282-295

Showing the most recent 10 out of 74 publications