Markers for treatment selection have the potential to improve patient outcomes and decrease medical costs. When a treatment benefits only a subset of patients, a marker that identifies these subjects could be used to spare others unnecessary treatment. If a therapy is particularly harmful to certain individuals, a useful marker would identify these subjects to avoid treating them. New technologies are producing an abundance of candidate markers. However, the standards for their evaluation, which are essential for making decisions regarding marker advancement and regulatory approval, are sorely lacking. This application proposes to contribute to the development of these standards.
Aim 1 ("Measures of Performance") demonstrates the inadequacy of the current approach to evaluating treatment selection markers, and develops three novel statistical measures of marker performance: (i) marker-by-treatment predictiveness curves display the treatment effect at each marker value; (ii) treatment selection ROC curves show the accuracy with which the marker discriminates between individuals who do and do not benefit from treatment; and (iii) the selection impact curve describes the population impact of using the marker to select treatment.
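To make the first of these measures concrete, the following is a minimal Python sketch on simulated trial data; the data-generating model, function names, and all parameter values are hypothetical illustrations, not methods from the proposal. It estimates the risk of an adverse outcome at each marker percentile separately by treatment arm, so the vertical gap between the two curves is the estimated treatment effect at each marker value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomized trial: binary adverse outcome, continuous baseline
# marker; treatment helps only at high marker values (hypothetical model).
n = 4000
marker = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
logit = -0.5 + 0.8 * marker - treat * (0.2 + 1.2 * marker)
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def predictiveness_by_arm(marker, treat, outcome, n_bins=10):
    """Event rate in each marker-quantile bin, separately per treatment arm.

    Returns (bin midpoints as marker percentiles, risk if untreated, risk
    if treated); the gap between the two risk curves at a given percentile
    is the estimated treatment effect at that marker value.
    """
    edges = np.quantile(marker, np.linspace(0, 1, n_bins + 1))
    edges[-1] += 1e-9                      # keep the maximum in the top bin
    idx = np.digitize(marker, edges) - 1
    mids = (np.arange(n_bins) + 0.5) / n_bins * 100
    risk0 = np.array([outcome[(idx == b) & (treat == 0)].mean()
                      for b in range(n_bins)])
    risk1 = np.array([outcome[(idx == b) & (treat == 1)].mean()
                      for b in range(n_bins)])
    return mids, risk0, risk1

mids, risk0, risk1 = predictiveness_by_arm(marker, treat, outcome)
effect = risk0 - risk1  # estimated benefit of treatment at each percentile
```

Plotting `risk0` and `risk1` against `mids` gives the pair of marker-by-treatment predictiveness curves; where the curves cross, the marker value separates patients who appear to benefit from treatment from those who do not.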
Aim 2 ("Comparing Markers") builds on this approach to develop methods for comparing the performance of two candidate markers. Comparisons at fixed and optimized marker thresholds, as well as global summaries of marker performance, are proposed.
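As an illustration of a fixed-threshold comparison (a sketch on simulated data; the scenario, names, and threshold choice are hypothetical, not taken from the proposal), two markers can be compared by the population event rate that would result from using each marker, at its median, to assign treatment. This is estimable from a randomized trial because each arm provides outcomes for the patients the rule would and would not treat:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated randomized trial with two candidate markers: marker A determines
# who benefits from treatment, marker B is pure noise (hypothetical scenario).
n = 4000
a = rng.normal(size=n)
b = rng.normal(size=n)
t = rng.integers(0, 2, size=n)
# Treatment lowers risk when a < 1/3 and raises it otherwise.
logit = -0.4 + t * (1.5 * a - 0.5)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def event_rate_under_rule(marker, threshold, t, y):
    """Estimated population event rate if only patients with
    marker < threshold receive treatment.

    Randomization lets us estimate each piece: outcomes of the treated arm
    among those the rule would treat, and of the control arm among the rest.
    """
    rule = marker < threshold
    rate_treated = y[rule & (t == 1)].mean()
    rate_untreated = y[~rule & (t == 0)].mean()
    return rule.mean() * rate_treated + (1 - rule.mean()) * rate_untreated

# Fixed-threshold comparison of the two markers at their medians.
rate_a = event_rate_under_rule(a, np.median(a), t, y)
rate_b = event_rate_under_rule(b, np.median(b), t, y)
```

A lower event rate under marker A's rule reflects its genuine ability to direct treatment to the patients who benefit; the uninformative marker B yields an event rate close to that of treating a random half of the population.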
Aim 3 ("Covariate-Specific Performance") develops an approach to evaluating how marker performance varies with factors such as patient characteristics or aspects of the marker measurement procedure. Because marker combinations are commonly sought in the hope of improving performance, Aim 4 ("Combining Markers") develops an approach to combining multiple markers and evaluating the performance of the combination. This also leads to a method for assessing the increment in performance gained by adding a new marker to existing markers or clinical information.
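One simple way such a combination could be formed is sketched below on simulated data; this is an illustration under an assumed model, not the proposal's method, and every name and coefficient is hypothetical. A risk model with treatment-by-marker interactions is fit to trial data, and treatment is recommended whenever the fitted combination predicts a risk reduction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial with two baseline markers; who benefits from treatment
# depends on both markers jointly (hypothetical data-generating model).
n = 3000
X = rng.normal(size=(n, 2))
T = rng.integers(0, 2, size=n)
logit = -0.3 + 0.5 * X[:, 0] - T * (1.5 * X[:, 0] + 1.0 * X[:, 1])
Y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit_logistic(Z, y, iters=25):
    """Plain Newton-Raphson logistic regression (no regularization)."""
    beta = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ beta))
        hess = Z.T @ (Z * (p * (1 - p))[:, None]) + 1e-8 * np.eye(Z.shape[1])
        beta += np.linalg.solve(hess, Z.T @ (y - p))
    return beta

# Risk model including treatment-by-marker interaction terms.
Z = np.column_stack([np.ones(n), X, T, T * X[:, 0], T * X[:, 1]])
beta = fit_logistic(Z, Y)

def predicted_benefit(x, beta=beta):
    """Estimated risk reduction from treatment for markers x = (x0, x1)."""
    z0 = np.array([1.0, x[0], x[1], 0.0, 0.0, 0.0])    # untreated
    z1 = np.array([1.0, x[0], x[1], 1.0, x[0], x[1]])  # treated
    risk = lambda z: 1 / (1 + np.exp(-z @ beta))
    return risk(z0) - risk(z1)

# Recommend treatment only when the fitted combination predicts benefit.
recommend = np.array([predicted_benefit(x) > 0 for x in X])
```

The same machinery suggests how the increment from a new marker could be assessed: compare the performance of the treatment rule based on the full fitted combination with the rule based on the model that omits the new marker.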
Aim 5 ("Study Design") considers the implications of these new methods for study design. While the ideal design is a blinded and randomized trial where the marker is measured at baseline on all participants, careful selection of a subset of trial subjects in which to measure the marker (e.g., a nested case-control design) may yield similar efficiency. Approaches to the design of both of these types of studies, including power calculations and recommendations regarding matching and stratification, will be developed. Methods for evaluating markers in designs that measure the marker on a subset of trial participants will also be provided. This research will be conducted in collaboration with international leaders in the fields of marker evaluation and clinical trial design and analysis. The methods will be applied to several important intervention trials where markers have been measured for treatment selection.
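Power calculations of this kind can be approximated by simulation. The sketch below is illustrative only, under an assumed logistic model with hypothetical effect sizes; it estimates the power of a two-sided Wald test for a marker-by-treatment interaction when the marker is measured on all trial participants:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(n, interaction):
    """One randomized trial with the marker measured at baseline on everyone
    (hypothetical data-generating model, for illustration only)."""
    x = rng.normal(size=n)
    t = rng.integers(0, 2, size=n)
    logit = -0.5 + 0.4 * x - t * (0.3 + interaction * x)
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return x, t, y

def interaction_wald_z(x, t, y, iters=25):
    """Wald z-statistic for the marker-by-treatment interaction in a
    logistic model, fit by plain Newton-Raphson."""
    Z = np.column_stack([np.ones_like(x), x, t, t * x])
    beta = np.zeros(4)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ beta))
        info = Z.T @ (Z * (p * (1 - p))[:, None]) + 1e-8 * np.eye(4)
        beta += np.linalg.solve(info, Z.T @ (y - p))
    p = 1 / (1 + np.exp(-Z @ beta))
    cov = np.linalg.inv(Z.T @ (Z * (p * (1 - p))[:, None]))
    return beta[3] / np.sqrt(cov[3, 3])

def power(n, interaction, sims=200):
    """Monte Carlo power of the two-sided 5% Wald test of the interaction."""
    rejections = sum(
        abs(interaction_wald_z(*simulate_trial(n, interaction))) > 1.96
        for _ in range(sims)
    )
    return rejections / sims

pwr = power(400, 1.0)  # power at n = 400 for a strong interaction
```

Varying `n` until `power` reaches the desired level gives a simulation-based sample-size calculation; designs that measure the marker on only a subset of participants could be explored by subsampling within `simulate_trial`.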
Interventions for disease treatment and prevention can potentially be made more cost-effective by using markers to identify in advance the individuals most likely to benefit from the treatment, and thus avoid treating those unlikely to benefit. This proposal will develop methods to help realize this potential, by developing standards for evaluating candidate markers. These standards will help distinguish the good markers from the bad, optimize how the markers are used to select treatment, and ensure that research studies are designed so that the markers can be properly evaluated.
Kerr, Kathleen F; Brown, Marshall D; Zhu, Kehao et al. (2016) Assessing the Clinical Impact of Risk Prediction Models With Decision Curves: Guidance for Correct Interpretation and Appropriate Use. J Clin Oncol 34:2534-40
Huang, Ying; Laber, Eric B; Janes, Holly (2015) Characterizing expected benefits of biomarkers in treatment selection. Biostatistics 16:383-99
Janes, Holly; Pepe, Margaret S; McShane, Lisa M et al. (2015) The Fundamental Difficulty With Evaluating the Accuracy of Biomarkers for Guiding Treatment. J Natl Cancer Inst 107:
Pepe, Margaret S; Fan, Jing; Feng, Ziding et al. (2015) The Net Reclassification Index (NRI): a Misleading Measure of Prediction Improvement Even with Independent Test Data Sets. Stat Biosci 7:282-295
Kang, Chaeryon; Huang, Ying; Miller, Christopher J (2015) A discrete-time survival model with random effects for designing and analyzing repeated low-dose challenge experiments. Biostatistics 16:295-310
Janes, Holly; Brown, Marshall D; Pepe, Margaret S (2015) Designing a study to evaluate the benefit of a biomarker for selecting patient treatment. Stat Med 34:3503-15
Kang, Chaeryon; Janes, Holly; Huang, Ying (2014) Combining biomarkers to optimize patient treatment recommendations. Biometrics 70:695-707
Kerr, Kathleen F; Wang, Zheyu; Janes, Holly et al. (2014) Net reclassification indices for evaluating risk prediction instruments: a critical review. Epidemiology 25:114-21
Janes, Holly; Brown, Marshall D; Huang, Ying et al. (2014) An approach to evaluating and comparing biomarkers for patient treatment selection. Int J Biostat 10:99-121
Janes, Holly; Pepe, Margaret S; Huang, Ying (2014) A framework for evaluating markers used to select patient treatment. Med Decis Making 34:159-67
Showing the most recent 10 out of 24 publications