Biomarker development is a high-priority area of research for NIH. However, standards for the statistical design and evaluation of biomarker studies are not well developed, especially compared with those in other medical fields such as therapeutics and epidemiology. This grant proposes to develop statistical methods for several important problems in biomarker research, focusing on the use of biomarkers for screening, diagnosis, prognosis, and risk prediction.
Aim 1 concerns improvement in prediction (or detection) of disease when a novel biomarker is added to an existing set of clinical predictors/markers. We will investigate: (i) the statistical characteristics of a new marker, relative to existing predictors, that lead to large versus small improvements in performance; practical implications will be developed for selecting marker panels for validation from large sets of candidate biomarkers in discovery research; (ii) techniques for making statistical inference about measures that quantify improvement in performance, improving upon existing methods that are flawed; and (iii) designs for studies to estimate improvement in performance, with a focus on the practice of choosing controls to match cases with regard to clinical predictors or existing markers. We will develop methods for estimation with a matched design and investigate whether the design leads to more efficient use of data.

The goal of Aim 2 is to develop a framework for simultaneously estimating the relative risks associated with a biomarker and its performance for classification. These two tasks are currently done separately, leading to disjointed, and sometimes inconsistent, results. In addition to providing a more coherent approach to analysis, we believe that the wide availability of, and familiarity with, methods for relative risk regression will enable researchers to perform more sophisticated analyses of biomarker performance than is currently done in practice. In particular, we propose to develop methods in this framework for comparing biomarkers, for evaluating factors that affect biomarker performance, and for meta-analysis that combines data from different studies or subpopulations. Methods will be evaluated using cancer biomarker datasets from collaborations with investigators in the Early Detection Research Network, as well as collaborations on HIV research, emergency medicine research, and fertility research.
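To illustrate the kind of performance-improvement measure Aim 1 addresses, the following Python sketch (a hypothetical simulation for illustration only, not the project's software, which will be distributed in Stata and R) compares the empirical AUC of a baseline clinical predictor with the AUC of a combined risk score after a simulated novel marker is added; the effect sizes and the equal-weight combination are assumptions of the example:

```python
import numpy as np

def auc(scores, labels):
    """Empirical AUC: probability that a randomly chosen case scores higher
    than a randomly chosen control (Mann-Whitney form, ties counted as 1/2)."""
    cases = scores[labels == 1]
    controls = scores[labels == 0]
    diff = cases[:, None] - controls[None, :]  # all case-control pairs
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

rng = np.random.default_rng(0)
n = 5000
y = rng.binomial(1, 0.3, n)          # disease status (30% prevalence, assumed)
x = rng.normal(0.0, 1.0, n) + 0.8 * y  # existing clinical predictor
m = rng.normal(0.0, 1.0, n) + 0.8 * y  # novel marker, independent of x given y

auc_base = auc(x, y)        # performance of the clinical predictor alone
auc_new = auc(x + m, y)     # performance after adding the marker (equal weights assumed)
delta_auc = auc_new - auc_base
```

Under these assumptions the increment `delta_auc` is positive; Aim 1 concerns when such increments are large versus small and how to make valid inference about them.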
Mathematical theory and simulation models will be used to evaluate and compare statistical techniques. Software will be written in the Stata and R statistical packages and made available via our Diagnostics and Biomarker Statistical Center website, http://labs.fhcrc.org/pepe/dabs/.

Public Health Relevance

Biomarker development is a high-priority area of research. However, standards for the statistical design and evaluation of biomarker studies are not well developed, especially compared with those in other medical fields such as therapeutics and epidemiology. This project seeks to develop statistical methods for several important problems in biomarker research, focusing on the use of biomarkers for disease screening, diagnosis, prognosis, and risk prediction. Rigorous and efficient evaluation of biomarkers will enable patients and their healthcare providers to make better use of biomarker information when making medical decisions.

Agency
National Institutes of Health (NIH)
Institute
National Institute of General Medical Sciences (NIGMS)
Type
Research Project (R01)
Project #
5R01GM054438-17
Application #
8649047
Study Section
Biostatistical Methods and Research Design Study Section (BMRD)
Program Officer
Sheeley, Douglas
Project Start
1996-05-01
Project End
2016-03-31
Budget Start
2014-04-01
Budget End
2015-03-31
Support Year
17
Fiscal Year
2014
Total Cost
$333,535
Indirect Cost
$138,535
Name
Fred Hutchinson Cancer Research Center
Department
Type
DUNS #
078200995
City
Seattle
State
WA
Country
United States
Zip Code
98109
Pepe, Margaret S; Janes, Holly; Li, Christopher I (2014) Net risk reclassification p values: valid or misleading? J Natl Cancer Inst 106:dju041
Kerr, Kathleen F; Wang, Zheyu; Janes, Holly et al. (2014) Net reclassification indices for evaluating risk prediction instruments: a critical review. Epidemiology 25:114-21
Janes, Holly; Pepe, Margaret S; Huang, Ying (2014) A framework for evaluating markers used to select patient treatment. Med Decis Making 34:159-67
Bansal, Aasthaa; Pepe, Margaret Sullivan (2013) Estimating improvement in prediction with matched case-control designs. Lifetime Data Anal 19:170-201
Zheng, Yingye; Cai, Tianxi; Pepe, Margaret S (2013) Adopting nested case-control quota sampling designs for the evaluation of risk markers. Lifetime Data Anal 19:568-88
Bansal, Aasthaa; Pepe, Margaret Sullivan (2013) When does combining markers improve classification performance and what are implications for practice? Stat Med 32:1877-92
Seymour, Christopher W; Cooke, Colin R; Wang, Zheyu et al. (2013) Improving risk classification of critical illness with biomarkers: a simulation study. J Crit Care 28:541-8
Huang, Ying; Pepe, Margaret S; Feng, Ziding (2013) Logistic regression analysis with standardized markers. Ann Appl Stat 7:
Pepe, Margaret Sullivan; Fan, Jing; Seymour, Christopher W (2013) Estimating the receiver operating characteristic curve in studies that match controls to cases on covariates. Acad Radiol 20:863-73
Pepe, Margaret Sullivan; Kerr, Kathleen F; Longton, Gary et al. (2013) Testing for improvement in prediction model performance. Stat Med 32:1467-82

Showing the most recent 10 out of 43 publications