Millions of Americans receive behavioral interventions for problematic alcohol use. In 2010, the Substance Abuse and Mental Health Services Administration (SAMHSA) documented over 1.8 million treatment episodes for drug and alcohol problems, many involving group or individual psychotherapy. This tremendous service-delivery need has focused research on optimal training methods to promote the dissemination of evidence-based interventions. A recent meta-analysis of motivational interviewing (MI) shows that post-training supports, such as performance-based feedback or coaching, are critical for maintaining counselor skills after training. However, the practical implementation of performance-based feedback for alcohol use disorders (AUDs) and problematic drinking is currently prohibitive in effort, time, and money. There is a critical need for technology that can scale up performance-based feedback to counselors treating AUDs and problematic drinking.

This competitive renewal builds on interdisciplinary research focused on automating the evaluation of MI fidelity for alcohol and substance use problems. The collaboration brings together speech signal processing experts from electrical engineering and statistical text-mining and natural language processing experts from computer science with MI expert trainers and researchers. Our previous research laid a computational foundation for generating MI fidelity codes from semantic and vocal features, and the current proposal moves this work into direct clinical application. In collaboration with the University of Utah Counseling Center (UCC), we will develop and implement a clinical software support tool, the Counselor Observer Ratings Expert for MI (CORE-MI). The CORE-MI system will provide performance-based feedback on MI fidelity codes for training, supervision, and quality assurance for counselors treating clients struggling with alcohol and substance use problems.

The research will use a hybrid implementation-effectiveness design to pursue three aims: 1) implement and calibrate the CORE-MI system at the UCC clinic to provide automated, performance-based feedback on MI; 2) compare counselor fidelity to MI and client alcohol and substance use outcomes before and after initiation of the CORE-MI system (approximately N = 2,400 sessions); and 3) using machine learning tools, computationally explore mechanisms of MI using semantic and vocal data, MI fidelity codes, and client outcomes from approximately 3,000 sessions. Successful execution of this project will break the reliance on human judgment for providing performance-based feedback on MI and will massively expand the capacity to train, supervise, and provide quality assurance.
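For illustration only, the sketch below shows the general kind of automated fidelity coding the abstract describes: a text-only classifier that assigns MI behavior codes to therapist utterances and could feed session-level feedback. This is not the CORE-MI implementation; the utterances, code labels, and scikit-learn pipeline are assumptions made for demonstration, and the actual system also uses vocal features derived from speech signal processing.

```python
# Minimal sketch (hypothetical): predicting utterance-level MI behavior codes
# from transcript text with a bag-of-words classifier. Labels and example
# utterances are invented for illustration; real training would use thousands
# of human-coded sessions and add speech-signal (vocal) features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: therapist utterances paired with MI-style behavior codes
# (reflections vs. closed questions).
utterances = [
    "It sounds like your drinking has been causing tension at home.",
    "So part of you wants to cut back, and part of you isn't sure.",
    "Did you drink last weekend?",
    "How many drinks do you usually have?",
]
codes = ["reflection", "reflection", "closed_question", "closed_question"]

# TF-IDF features over unigrams and bigrams, then a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, codes)

# Predict a code for a new utterance; aggregating predictions across a session
# (e.g., reflection-to-question ratio) would support performance-based feedback.
print(model.predict(["You're feeling torn about whether to keep drinking."]))
```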

Public Health Relevance

Performance-based feedback is an effective method for training and supervising counselors in approaches for alcohol-related problems. However, current feedback methods rely on human raters and are not feasible in real-world settings because of the time and cost involved. The current study will implement a clinical support software tool that uses speech signal processing and computational models, rather than human judgment, to evaluate motivational interviewing for alcohol and substance use problems.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Alcohol Abuse and Alcoholism (NIAAA)
Type
Research Project (R01)
Project #
5R01AA018673-07
Application #
9334680
Study Section
National Institute on Alcohol Abuse and Alcoholism Initial Review Group (AA)
Program Officer
Hagman, Brett Thomas
Project Start
2010-09-01
Project End
2021-08-31
Budget Start
2017-09-01
Budget End
2018-08-31
Support Year
7
Fiscal Year
2017
Total Cost
Indirect Cost
Name
University of Washington
Department
Psychiatry
Type
Schools of Medicine
DUNS #
605799469
City
Seattle
State
WA
Country
United States
Zip Code
98195
Hirsch, Tad; Soma, Christina; Merced, Kritzia et al. (2018) "It's hard to argue with a computer": Investigating Psychotherapists' Attitudes towards Automated Evaluation. DIS (Des Interact Syst Conf) 2018:559-571
Caperton, Derek D; Atkins, David C; Imel, Zac E (2018) Rating motivational interviewing fidelity from thin slices. Psychol Addict Behav 32:434-441
Hallgren, Kevin A; Dembe, Aaron; Pace, Brian T et al. (2018) Variability in motivational interviewing adherence across sessions, providers, sites, and research contexts. J Subst Abuse Treat 84:30-41
Gupta, Rahul; Audhkhasi, Kartik; Jacokes, Zach et al. (2018) Modeling multiple time series annotations as noisy distortions of the ground truth: An Expectation-Maximization approach. IEEE Trans Affect Comput 9:76-89
Hirsch, Tad; Merced, Kritzia; Narayanan, Shrikanth et al. (2017) Designing Contestability: Interaction Design, Machine Learning, and Mental Health. DIS (Des Interact Syst Conf) 2017:95-99
Imel, Zac E; Caperton, Derek D; Tanana, Michael et al. (2017) Technology-enhanced human interaction in psychotherapy. J Couns Psychol 64:385-393
Pace, Brian T; Dembe, Aaron; Soma, Christina S et al. (2017) A multivariate meta-analysis of motivational interviewing process and outcome. Psychol Addict Behav 31:524-533
Gaut, Garren; Steyvers, Mark; Imel, Zac E et al. (2017) Content Coding of Psychotherapy Transcripts Using Labeled Topic Models. IEEE J Biomed Health Inform 21:476-487
Gupta, Rahul; Audhkhasi, Kartik; Lee, Sungbok et al. (2016) Detecting paralinguistic events in audio stream using context in features and probabilistic decisions. Comput Speech Lang 36:72-92
Xiao, Bo; Huang, Chewei; Imel, Zac E et al. (2016) A technology prototype system for rating therapist empathy from audio recordings in addiction counseling. PeerJ Comput Sci 2:

Showing the most recent 10 out of 31 publications