The increasing demand for in-home elderly care presents challenges that call for innovative solutions. As more older adults choose to remain in their own homes as they age, living alone can pose serious risks to those with age-related conditions such as reduced mobility, dementia, or other chronic diseases. This at-risk population needs regular visits from in-home healthcare services, which in turn places pressure on the geriatric home healthcare industry. Home service robots offer a solution to this societal problem by facilitating smart aging-in-place. This project aims to solve a fundamental research problem critical to the application of service robots in complex home environments: human activity monitoring. By building a bridge between environmental understanding and human behavior understanding, this project offers a new theory for sound-based monitoring of resident behaviors in realistic home environments. Such a human-aware capability allows home service robots to carry out their routine work while caring for the resident more proactively and effectively. Sound-based human behavior understanding will greatly improve the capability and usability of home service robots, thereby accelerating their adoption in daily life. This project also incorporates education and outreach activities to encourage prospective and current college students to pursue degrees and careers in science and engineering, to attract underrepresented minority students to these research activities, and to disseminate new, useful datasets to the research community and the home healthcare industry to promote continued advances in this area.

This project investigates a new theoretical framework for human activity monitoring in home environments that takes advantage of deep learning while accounting for locational context, thereby greatly improving the accuracy of human behavior understanding. The framework is intended to generalize to similar deep learning-based machine perception problems. The project aims to establish a novel visual-acoustic semantic map (VASM) that connects environmental understanding with behavior understanding. Constructed through robotic semantic mapping and voice-based human-robot interaction, the VASM extends traditional visual semantic maps by incorporating rich acoustic information about the environment. When cloud-connected and scaled up to a large number of robots, this approach is expected to provide an effective, distributed way to construct a large dataset of annotated home event sounds, which will then be used to train deep neural networks for sound event recognition. The project also develops a multi-sensor fusion approach that combines sound data with distributed motion-sensor data to recognize human activities without visual sensors, overcoming the shortcomings of vision sensors and offering a fundamentally different solution to human activity monitoring. Finally, the planned theoretical framework will be verified and evaluated through experiments in a robot-integrated smart home.
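To make the idea concrete, below is a minimal, hypothetical Python sketch of how a VASM-style data structure might re-weight sound-event evidence by location and fuse it with motion-sensor readings. The class VasmEntry, the function fuse_scores, the event labels, and all numeric weights are illustrative assumptions for this sketch, not the project's actual design or results.

```python
# Hypothetical sketch: a visual-acoustic semantic map (VASM) entry and a
# simple late-fusion step. All names, labels, and weights are assumptions
# made for illustration only.
from dataclasses import dataclass, field

@dataclass
class VasmEntry:
    """One mapped region: visual semantics plus sounds observed there."""
    room: str                                         # from robotic semantic mapping
    objects: list[str] = field(default_factory=list)  # visually recognized objects
    sound_events: dict[str, float] = field(default_factory=dict)
    # Prior probability of each sound event in this region, accumulated from
    # robot observations and voice-based human-robot interaction annotations.

# Sound events assumed (for this sketch) to imply physical resident activity.
ACTIVE_EVENTS = {"water_running", "dish_clatter", "footsteps"}

def locate(vasm: list[VasmEntry], room: str) -> VasmEntry:
    """Look up the map entry for the region where the sound was heard."""
    return next(e for e in vasm if e.room == room)

def fuse_scores(sound_posterior: dict[str, float],
                motion_active: bool,
                entry: VasmEntry) -> dict[str, float]:
    """Late fusion: re-weight acoustic sound-event scores by the VASM's
    location prior, then boost activity-linked events when a nearby
    motion sensor fires, and renormalize."""
    fused = {}
    for event, p in sound_posterior.items():
        prior = entry.sound_events.get(event, 0.05)  # small floor for unseen events
        boost = 1.5 if (motion_active and event in ACTIVE_EVENTS) else 1.0
        fused[event] = p * prior * boost
    total = sum(fused.values()) or 1.0
    return {e: w / total for e, w in fused.items()}

# Toy usage: a water-like sound heard while the kitchen motion sensor fires.
vasm = [VasmEntry("kitchen", ["sink", "stove"],
                  {"water_running": 0.4, "dish_clatter": 0.3, "tv_audio": 0.05}),
        VasmEntry("living_room", ["sofa", "tv"],
                  {"tv_audio": 0.5, "footsteps": 0.2})]
acoustic = {"water_running": 0.6, "tv_audio": 0.3, "dish_clatter": 0.1}
print(fuse_scores(acoustic, motion_active=True, entry=locate(vasm, "kitchen")))
```

In this toy example, the kitchen's location prior and the active motion sensor together shift the fused scores strongly toward "water_running", illustrating (under the stated assumptions) how locational context from a VASM could disambiguate acoustically similar events without any visual sensing of the resident.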

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2019-10-01
Budget End: 2022-09-30
Fiscal Year: 2019
Total Cost: $488,692
Name: Oklahoma State University
City: Stillwater
State: OK
Country: United States
Zip Code: 74078