The proposed research study will identify changes in acoustic speech parameters, using innovative cell-phone-based technology, in order to predict clinically significant mood state transitions in individuals with bipolar disorder. The central hypothesis is that quantitative changes in acoustic speech patterns occur in advance of clinically observed mood changes. These changes in speech patterns can be identified with computational methods through longitudinal monitoring of ecologically gathered voice data that requires minimal input from the individual being observed. The computationally determined changes are imperceptible to human observation but are hypothesized to predict clinically significant mood transitions. To test this hypothesis we will study 50 rapid-cycling individuals with bipolar I and II disorder and 10 healthy controls for 6 months by recording acoustic characteristics of their speech (not lexical content) while they use a mobile smartphone; in this manner we gather data free of observer bias. We will also conduct weekly clinical assessments with standardized instruments (Hamilton Depression Rating Scale and Young Mania Rating Scale), during which we will record voice patterns as well. Bipolar disorder is an ideal condition for an initial study of speech patterns in the assessment of psychopathology. It is an illness with pathological disruptions of emotional, cognitive, and motor capacity. The illness shows a periodicity that oscillates between manic, energized states with charged emotions and pressured, rapid speech, and depressed emotional phases with retarded movements and inhibited quality and quantity of speech. The successful management of patients with bipolar disorder requires ongoing clinical monitoring of mental states, yet few current technologies address the challenge of monitoring individuals long-term in an ecological manner.
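To make the notion of "acoustic characteristics of speech (not lexical content)" concrete, the sketch below computes two classic per-frame descriptors, short-time energy and zero-crossing rate, from a synthetic waveform. These particular features, the frame sizes, and the synthetic signal are illustrative assumptions for exposition, not the study's actual feature set or recording pipeline.

```python
import numpy as np

def frame_features(signal, frame_len=256, hop=128):
    """Per-frame short-time energy and zero-crossing rate --
    basic content-free acoustic descriptors (illustrative choice)."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    energy = np.empty(n_frames)
    zcr = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        energy[i] = np.mean(frame ** 2)                        # loudness proxy
        zcr[i] = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # noisiness proxy
    return energy, zcr

# Synthetic example: 0.5 s of a "voiced" 150 Hz tone, then 0.5 s of near-silence.
sr = 8000
t = np.arange(sr // 2) / sr
voiced = 0.5 * np.sin(2 * np.pi * 150 * t)
silence = 0.005 * np.random.default_rng(0).standard_normal(sr // 2)
energy, zcr = frame_features(np.concatenate([voiced, silence]))
```

In a deployed system, feature vectors like these would be computed over each recorded call and accumulated longitudinally, so that shifts in their distributions, rather than any single value, carry the clinical signal.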
Speech pattern recognition technology would allow unobtrusive monitoring that can be seamlessly integrated into the daily routine of mobile phone usage to predict future changes in illness states. The proposed study tests a highly innovative approach by developing a practical solution to assist in the longitudinal management of bipolar patients. Computational analysis of speech patterns will use static (Gaussian Mixture Models and Support Vector Machines) and dynamic (Hidden Markov Models) modeling. This project has the potential to transform the management of psychiatric disease, as speech patterns, and changes therein, are highly likely to reflect current and emerging psychopathology. If successful, this technology will allow patients to be prioritized for medical and psychiatric care based on computational detection of change patterns in voice and speech before they are clinically observable.
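The static-modeling idea named above (Gaussian Mixture Models) can be sketched in its simplest form: fit one Gaussian per mood class to acoustic feature vectors and classify by likelihood, which is the one-component degenerate case of a GMM. The two-dimensional features, class means, and labels below are synthetic assumptions for illustration; real features and labels would come from the recorded calls and the weekly clinical ratings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D acoustic summaries (e.g., pitch and energy statistics)
# for two mood classes -- synthetic stand-ins for real call data.
euthymic = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
manic = rng.normal([2.0, 1.5], 0.5, size=(200, 2))
X = np.vstack([euthymic, manic])
y = np.array([0] * 200 + [1] * 200)

def fit_gaussian(samples):
    """One Gaussian per class: the single-component case of a GMM."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_likelihood(x, mu, cov):
    d = x - mu
    return -0.5 * (d @ np.linalg.inv(cov) @ d
                   + np.log(np.linalg.det(cov))
                   + len(mu) * np.log(2 * np.pi))

params = [fit_gaussian(X[y == c]) for c in (0, 1)]
pred = np.array([np.argmax([log_likelihood(x, *p) for p in params]) for x in X])
accuracy = (pred == y).mean()
```

The dynamic (Hidden Markov Model) component would extend this by modeling how such per-frame or per-call observations evolve over time, so that the trajectory of features across days, not just their current values, informs the prediction.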
This study detects measurable changes in speech patterns using computer-based analyses and correlates these changes with pathological variation in clinically assessed mood states. Changes in speech patterns are likely to precede and predict clinically significant mood state changes (to mania or depression). The overall goal is to use computational methods for early detection of mood changes, providing the opportunity for early clinical intervention.