Development of augmented intelligence (AI) models for predicting clinical outcomes is growing rapidly. Automated clinical surveillance to assist in early detection of in-hospital deterioration, such as sepsis and acute kidney injury (AKI), is a promising AI application. As many as 300,000 US hospital patients die each year from conditions such as sepsis and AKI, and 5% or more of these deaths are preventable. Many more patients suffer harm or incur additional costs as sequelae of delayed response. Compared with traditional rule-based risk predictions, advanced AI models using methods such as machine learning demonstrate improved reliability in predicting sepsis and AKI. The effectiveness of these systems in practice will likely depend on how AI risk information is integrated into clinical workflow and technologies, yet we are not aware of research designing or evaluating effective in-hospital AI risk information presentation and user interaction. Explainable AI is widely recognized as desirable, but what needs to be explained, and how to explain it effectively and efficiently, is not known. There is also a need to understand end-user perspectives on the value of AI for specific clinical contexts. We will draw on our team's recent research on effective clinical display design, theoretical models of human-AI performance, and application of human-AI design principles and human-centered design methods to design and evaluate effective approaches that support timely response to sepsis and AKI risk. Our primary objectives are to: identify factors that influence clinicians' perceptions of AI usefulness, generate design principles for effective human-AI interaction in health risk surveillance, and design human-AI user interfaces that meaningfully improve human-AI performance in responding to sepsis and AKI.
In Aim 1, we will develop a temporal reasoning AI model for predicting in-hospital development of sepsis and AKI. We will apply this model to retrospective patient data to provide context for subsequent research activities. Using chart review, we will quantify realistic metrics of human-AI system performance that account for whether the AI model would have predicted deterioration before the clinical team suspected the event or acted in response to it.
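To illustrate the kind of human-AI performance metric Aim 1 describes, the sketch below (Python) computes, from chart-review timestamps, the fraction of deterioration events in which the model's first alert preceded the first documented clinical suspicion or action, along with the median lead time. The data structure and field names are illustrative assumptions for this sketch, not a description of our implemented pipeline.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class DeteriorationEvent:
    """One chart-reviewed sepsis or AKI event (hypothetical structure)."""
    first_alert_time: Optional[datetime]   # earliest time the AI risk score crossed its alert threshold
    first_action_time: Optional[datetime]  # earliest documented clinical suspicion or response


def early_detection_rate(events: List[DeteriorationEvent]) -> float:
    """Fraction of events in which the AI alert preceded the first clinical action.

    Events with no alert count as misses; events with an alert but no documented
    clinical action are counted as early detections.
    """
    early = 0
    for e in events:
        if e.first_alert_time is None:
            continue  # model never alerted for this event
        if e.first_action_time is None or e.first_alert_time < e.first_action_time:
            early += 1
    return early / len(events) if events else 0.0


def median_lead_time_hours(events: List[DeteriorationEvent]) -> Optional[float]:
    """Median hours between the AI alert and the first clinical action,
    over events where both timestamps exist and the alert came first."""
    leads = sorted(
        (e.first_action_time - e.first_alert_time).total_seconds() / 3600.0
        for e in events
        if e.first_alert_time is not None
        and e.first_action_time is not None
        and e.first_alert_time < e.first_action_time
    )
    if not leads:
        return None
    mid = len(leads) // 2
    return leads[mid] if len(leads) % 2 else (leads[mid - 1] + leads[mid]) / 2.0
```

Metrics of this form, rather than alert-level sensitivity alone, reflect whether the model would have added information beyond what the clinical team had already recognized.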
In Aim 2, we will interview clinicians as they review the temporal progression of a patient's change in condition over the hospital stay, including AI-generated risk information. We will gather qualitative data on factors that influence clinicians' perceptions of the usefulness of AI information for early identification of patient problems.
In Aim 3, we will conduct participatory design activities with clinicians to design effective human-centered AI displays and interactions that support early response to in-hospital sepsis and AKI. Finally, in Aim 4, using simulated realistic patient care tasks and comparison with traditional patient information technologies, we will evaluate the impact of human-centered AI designs on human-AI performance. We will generate human-AI interaction design guidance for health risk surveillance. Our findings are expected to advance the design of human-AI interaction in electronic health records (EHRs) and health care monitoring and communication technologies.
As many as 15,000 preventable deaths, and many more preventable invasive procedures, occur each year in US hospitals due to late detection of unexpected deterioration and emergencies such as cardiac arrest, respiratory arrest, sepsis, and bleeding. Advanced approaches to augmented intelligence (AI), using methods such as machine learning, may help clinicians identify and respond to problems early, but little is known about how to effectively present risk information and support interaction between AI and clinicians. The goals of this project are to: identify factors that affect the usefulness of AI, design human-centered AI approaches that reduce harm from late response to sepsis and acute kidney injury, and generate design principles to ensure that AI solutions can be used effectively by clinicians to improve patient care.