The potential for artificial intelligence applications, specifically machine learning, to prevent, predict, and help manage disease sparks immense hope not only for the individuals affected, but also for the overall health of populations. Particularly exciting examples of these novel computing strategies are increasingly found in the development of deep learning algorithms for medical use. Already embedded in our daily lives, algorithms have begun to shape human decision-making, from the recruitment and hiring of employees to criminal sentencing. Outside of medicine, recognition of the ways algorithms may reflect, reproduce, and perpetuate bias has led to an explosion of theoretical and empirical research on the subject. There is growing awareness of potential algorithmic weaknesses, including some that raise concerns about fundamental issues of fairness, justice, and bias. The need to anticipate and address emerging ethical issues in algorithmic medicine is time-sensitive. As health care systems increasingly use algorithms for patient identification, diagnosis, and treatment direction, algorithmic bias carries real and significant costs. Numerous stakeholders are responsible for the development, application, and interpretation of algorithms in medicine, and yet there has been very little engagement of the stakeholders most affected by these learning systems and tools. The overarching goal of this empirical, hypothesis-driven project is to articulate the landscape of ethical concerns and issues emerging in the development, refinement, and application of machine learning in algorithmic medicine. First, we will determine the distinct ethical issues and problems encountered in the development, refinement, and application of machine learning by querying the perspectives of a diverse array of stakeholders involved: machine learning researchers, clinicians, ethicists, and patients. Using the insights generated in this first phase, we will conduct an evidence-based, information-sharing vignette survey to understand how the contexts of algorithms affect the ethically salient perspectives of physicians: those poised to implement such innovation in their own decision-making for the care of patients. Building on our established record of expertise in empirical ethics investigations, this sequence of projects leverages access to the exceptional machine learning research conducted at Stanford University, including work by NIH-funded investigators, and provides extensive, systematically collected data on ethical issues encountered and anticipated throughout the development and implementation of algorithms. Finally, the project develops and refines an evidence-informed, information-sharing survey to better understand how physicians respond to intelligent systems.
Machine learning-driven algorithmic medicine faces an urgent need to anticipate and address emerging ethical issues. Failure to examine these issues from the perspectives of stakeholders will limit the ecological validity and utility of the algorithms and threaten society's future embrace of these innovations. A hypothesis-driven, empirical study is needed to anticipate and address these concerns and to provide clinicians, machine learning researchers, policymakers, and the public with evidence that better enables the ethical application and translation of algorithms in medicine.