There is a sharp and increasing imbalance between the number of children with autism in need of care and the availability of specialists certified to treat the disorder in its multi-faceted manifestations. The autism community faces a dual clinical challenge: how to direct scarce specialist resources to serve a diverse array of phenotypes, and how to monitor and validate best practices in treatment. Clinicians must now look to solutions that scale in a decentralized fashion, placing data capture, remote monitoring, and therapy increasingly into the hands of families. Using artificial intelligence (AI) and large amounts of labeled human emotion computer vision data, we have developed a solution for automatic facial expression recognition that runs on Google Glass and Android smartphones to deliver real-time social cues to individuals with autism in the child's natural environment. We hypothesize that this informatic system can provide real-time therapy in a way that scales to meet the demand of the growing population of autism families, including underserved minorities, while growing data that can be used to measure progress over time and in the development of novel AI.
Our first aim will focus on the development of a deep learning model that enables dynamic emotion recognition in the real world, and on domain adaptation procedures that personalize the model, with minimal manual labeling, for optimal accuracy on the individuals with whom the child will interact most regularly at home.
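The personalization step described above can be illustrated with a minimal sketch. The assumptions here are ours, not the project's actual implementation: a frozen pretrained network is taken to map face crops to embedding vectors, and only a small linear classification head is re-fit on the handful of frames a family labels at home. All names and the toy data below are hypothetical.

```python
# Hypothetical sketch: re-fit a logistic-regression "head" on a few
# labeled embeddings of a family member, leaving the (assumed) pretrained
# feature extractor frozen. Pure-Python for self-containment.
import math
import random

def fit_personal_head(embeddings, labels, lr=0.5, epochs=200):
    """Fit logistic-regression weights (w, b) on a few labeled embeddings."""
    dim = len(embeddings[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(embeddings, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(label = 1)
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Return 1 (e.g. 'smiling') if the head's score is positive, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy "embeddings": pretend the frozen model separates smiling vs. neutral
# frames of one family member along the first embedding dimension.
random.seed(0)
smiling = [[1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(5)]
neutral = [[-1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(5)]
X = smiling + neutral
y = [1] * 5 + [0] * 5

w, b = fit_personal_head(X, y)
acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(X)
print(acc)  # training accuracy on the ten labeled frames
```

The design point this sketch captures is that only a tiny parameter set is adapted per household, so a few manually labeled frames suffice; in practice the project's domain adaptation would operate on real face embeddings rather than these toy vectors.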
Our second aim will focus on the human-computer interface, namely the design of the user experience with the Android application that controls the sessions run on the Google Glass wearable. We will work with our clinical colleagues and with groups of autism families to develop and enhance a set of games and activity modes that create social engagements ideal for emotion therapy, including an emotion-capture mode and a charades game.
The third aim will test our central hypothesis that the Glass system can create a therapy-to-data feedback loop that delivers clinical care while growing data for measurement and model development. We will work with up to 200 children ages 4-8 who have recent autism diagnoses and do not have access to standard behavioral therapy. We will build a community of autism families through crowdsourcing techniques, befitting the mobile paradigm embodied by our work, and through close collaboration with behavioral therapy providers, the autism outreach organization Autism Speaks, and the digital healthcare company Cognoa. The families will work with us on the design and refinement of our "Superpower Glass" system for fit, engagement, and function in both therapy and data capture. Importantly, we will send units home with families to use the device for at least three 20-minute sessions per week for a minimum of 6 weeks. This remote period will generate a large database with which to quantify overall social learning, emotion comprehension, eye contact, and sustained social acuity. In all, our work program will show that mobile wearable AI can bring the social learning process out of the clinic and into the real world for faster and more adaptive intervention.
There is a sharp and growing imbalance between the number of clinical care providers available and the number of children with an autism diagnosis, an imbalance that leaves most children without therapy until after critical periods in development have passed. We intend to address this problem through the creation of a machine learning-enabled wearable that brings effective care to the home and empowers parents, patients, and clinicians with mobile solutions that personalize care delivery to dramatically improve children's outcomes. The Superpower Glass system, which delivers social cues to children during real-time interactions and provides several engagement modes for families, is a promising solution that enables greater access to care for families across the US and, potentially, across the globe.