Markers for treatment selection have the potential to improve patient outcomes and decrease medical costs. When a treatment benefits only a subset of patients, a marker that identifies these subjects could be used to spare the others unnecessary treatment. If a therapy is particularly harmful to certain individuals, a useful marker would identify these subjects so that treatment can be withheld from them. New technologies are producing an abundance of candidate markers. However, the standards for their evaluation, which are essential for making decisions regarding marker advancement and regulatory approval, are sorely lacking. This application proposes to contribute to the development of these standards.
Aim 1 ("Measures of Performance") demonstrates the inadequacy of the current approach to evaluating treatment selection markers and develops three novel statistical measures of marker performance: (i) marker-by-treatment predictiveness curves display the treatment effect at each marker value; (ii) treatment selection ROC curves show the accuracy with which the marker discriminates between individuals who do and do not benefit from treatment; and (iii) the selection impact curve describes the population impact of using the marker to select treatment.
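To make the first and third measures concrete, the sketch below simulates a marker under an assumed risk model (all variable names and parameter values are hypothetical, not the proposal's estimators) and traces out arm-specific predictiveness curves plus a selection impact curve: the population event rate under the policy "treat patients whose marker falls below the q-th quantile."

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y = rng.uniform(size=n)                  # hypothetical marker, Uniform(0,1)

# Assumed true risk models (illustrative only):
risk_untreated = np.full(n, 0.30)        # constant 30% event risk untreated
risk_treated = 0.10 + 0.40 * y           # treatment helps only when y < 0.5

# Marker-by-treatment predictiveness curves: event risk in each arm,
# plotted against the marker quantile
quantiles = np.linspace(0.01, 0.99, 99)
cutpoints = np.quantile(y, quantiles)
curve_untreated = np.full_like(cutpoints, 0.30)
curve_treated = 0.10 + 0.40 * cutpoints

# Selection impact curve: population event rate if only patients with
# marker below the q-th quantile are treated (others go untreated)
def selection_impact(q):
    c = np.quantile(y, q)
    treat = y < c
    return np.mean(np.where(treat, risk_treated, risk_untreated))

si = np.array([selection_impact(q) for q in quantiles])
best_q = quantiles[np.argmin(si)]        # event-rate-minimizing policy, ~0.5 here
```

Under this assumed model the curve is minimized near the quantile where the two predictiveness curves cross, which is the point at which treatment stops being beneficial.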
Aim 2 ("Comparing Markers") builds on this approach to develop methods for comparing the performance of two candidate markers. Comparisons at fixed and optimized marker thresholds, as well as global summaries of marker performance, are proposed.
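One global summary of the kind described could be the lowest population event rate achievable over all threshold-based policies. The sketch below (hypothetical markers and risk model, not the proposal's methods) compares a marker to a noisier copy of itself on that summary:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
m1 = rng.uniform(size=n)                         # hypothetical candidate marker
m2 = np.clip(m1 + rng.normal(0, 0.3, n), 0, 1)   # noisier competitor

risk_untreated = np.full(n, 0.30)
risk_treated = 0.10 + 0.40 * m1                  # assumed truth driven by m1

def min_event_rate(marker):
    """Global summary: lowest population event rate over all
    treat-if-marker-below-threshold policies on a quantile grid."""
    best = np.mean(risk_untreated)               # baseline policy: treat no one
    for c in np.quantile(marker, np.linspace(0.05, 0.95, 19)):
        treat = marker < c
        best = min(best, np.mean(np.where(treat, risk_treated, risk_untreated)))
    return best

r1, r2 = min_event_rate(m1), min_event_rate(m2)  # expect r1 < r2
```

Because m2 is a garbled version of m1, no threshold policy based on it can beat the best policy based on m1, and the summary ranks the markers accordingly.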
Aim 3 ("Covariate-Specific Performance") develops an approach to evaluating how marker performance varies with factors such as patient characteristics or aspects of the marker measurement procedure. Because marker combinations are commonly sought in the hope of improving performance, Aim 4 ("Combining Markers") develops an approach to combining multiple markers and evaluating the performance of the combination. This also yields a method for assessing the increment in performance gained by adding a new marker to existing markers or clinical information.
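One standard way to combine markers for treatment selection, shown in the sketch below, is to fit a risk model with treatment-by-marker interactions and use the model-based risk difference as the combined marker. All names, coefficients, and the simulated data are hypothetical; this is a sketch of the general technique, not the proposal's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
y1, y2 = rng.normal(size=(2, n))          # two hypothetical markers
t = rng.integers(0, 2, size=n)            # randomized treatment assignment

# Assumed true model: treatment benefit depends on both markers
logit = -1.0 + 0.5 * y1 + t * (-1.0 * y1 - 0.8 * y2)
d = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a logistic model with treatment-by-marker interactions (Newton-Raphson)
X = np.column_stack([np.ones(n), t, y1, y2, t * y1, t * y2])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (d - p)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    beta = beta + np.linalg.solve(hess, grad)

# Combined marker: model-based risk difference (untreated minus treated risk)
def risk(treat, m1, m2, b=beta):
    x = np.column_stack([np.ones_like(m1), np.full_like(m1, treat),
                         m1, m2, treat * m1, treat * m2])
    return 1 / (1 + np.exp(-x @ b))

benefit = risk(0, y1, y2) - risk(1, y1, y2)   # positive => treatment lowers risk
```

The same fit supports the incremental-value question: refitting without y2 and comparing the performance of the two benefit scores measures what the added marker contributes.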
Aim 5 ("Study Design") considers the implications of these new methods for study design. While the ideal design is a blinded, randomized trial in which the marker is measured at baseline on all participants, careful selection of a subset of trial subjects in whom to measure the marker (e.g., a nested case-control design) may yield similar efficiency. Approaches to the design of both types of studies, including power calculations and recommendations regarding matching and stratification, will be developed. Methods for evaluating markers in designs that measure the marker on only a subset of trial participants will also be provided. This research will be conducted in collaboration with international leaders in marker evaluation and clinical trial design and analysis. The methods will be applied to several important intervention trials in which markers have been measured for treatment selection.
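A simple version of the power calculations mentioned above, for detecting a marker-by-treatment interaction in a randomized trial, can be sketched with the expected Fisher information; the sample size, coefficients, and model here are hypothetical placeholders, not the proposal's design recommendations.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical design: randomized trial, marker measured on all participants
rng = np.random.default_rng(2)
n = 1000
y = rng.normal(size=n)                 # baseline marker
t = rng.integers(0, 2, size=n)         # 1:1 randomization
X = np.column_stack([np.ones(n), t, y, t * y])

# Assumed coefficients under the alternative: interaction beta3 = -0.4
beta = np.array([-1.0, 0.0, 0.3, -0.4])
p = 1 / (1 + np.exp(-X @ beta))

# Expected Fisher information gives the asymptotic SE of the interaction term
info = (X * (p * (1 - p))[:, None]).T @ X
se = np.sqrt(np.linalg.inv(info)[3, 3])

z_alpha = 1.96                         # two-sided 5% Wald test
power = norm_cdf(abs(beta[3]) / se - z_alpha)
```

The same calculation can be repeated over candidate sample sizes to find the smallest n reaching a target power; subsampled designs such as a nested case-control study would shrink the effective information and require a larger trial.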
Interventions for disease treatment and prevention can potentially be made more cost-effective by using markers to identify in advance the individuals most likely to benefit from the treatment, and thus avoid treating those unlikely to benefit. This proposal will develop methods to help realize this potential, by developing standards for evaluating candidate markers. These standards will help distinguish the good markers from the bad, optimize how the markers are used to select treatment, and ensure that research studies are designed so that the markers can be properly evaluated.
Janes, Holly; Brown, Marshall D; Crager, Michael R et al. (2017) Adjusting for covariates in evaluating markers for selecting treatment, with application to guiding chemotherapy for treating estrogen-receptor-positive, node-positive breast cancer. Contemp Clin Trials 63:30-39
Kerr, Kathleen F; LeBlanc, Michael; Janes, Holly (2017) Comparisons of cancer staging systems should be based on overall performance in the population. Clin Trials 14:659-660
Kerr, Kathleen F; Janes, Holly (2017) First things first: risk model performance metrics should reflect the clinical application. Stat Med 36:4503-4508
Kerr, Kathleen F; Brown, Marshall; Janes, Holly (2017) Reply to A.J. Vickers et al. J Clin Oncol 35:473-475
Dai, James Y; Liang, C Jason; LeBlanc, Michael et al. (2017) Case-only approach to identifying markers predicting treatment effects on the relative risk scale. Biometrics :
Pepe, Margaret S; Janes, Holly; Li, Christopher I et al. (2016) Early-Phase Studies of Biomarkers: What Target Sensitivity and Specificity Values Might Confer Clinical Utility? Clin Chem 62:737-42
Kerr, Kathleen F; Brown, Marshall D; Zhu, Kehao et al. (2016) Assessing the Clinical Impact of Risk Prediction Models With Decision Curves: Guidance for Correct Interpretation and Appropriate Use. J Clin Oncol 34:2534-40
Kang, Chaeryon; Huang, Ying; Miller, Christopher J (2015) A discrete-time survival model with random effects for designing and analyzing repeated low-dose challenge experiments. Biostatistics 16:295-310
Pepe, Margaret S; Fan, Jing; Feng, Ziding et al. (2015) The Net Reclassification Index (NRI): a Misleading Measure of Prediction Improvement Even with Independent Test Data Sets. Stat Biosci 7:282-295
Janes, Holly; Pepe, Margaret S; McShane, Lisa M et al. (2015) The Fundamental Difficulty With Evaluating the Accuracy of Biomarkers for Guiding Treatment. J Natl Cancer Inst 107:
Showing the most recent 10 out of 30 publications