Over the past decade, smart devices have become essential components of daily life. Apart from conversational assistants, however, most smart devices and environments present information primarily through visual displays, a poor fit for the many situations in which users' visual attention is directed somewhere other than the device. One way to support these situations is to let people interact through sound or touch. Adding separate microphones, loudspeakers, and vibration mechanisms by conventional means, however, imposes extra cost and can compromise a device's durability and aesthetics. This project seeks to develop alternative technologies for recording and reproducing spatial audio through bending vibrations of flat surfaces ranging from smartphones to video walls. The same vibrations allow the smart acoustic surface to serve as a touch interface. The advantage of this approach is that the surface is already part of the device, so the new capabilities can be added without sacrificing durability or aesthetics. Adding spatial audio and haptic feedback to OLED displays, smartphones, and video walls will provide fundamental technology to improve people's ability to navigate complex data sets such as menus and maps, and to enable a greater sense of immersion in remote applications such as video conference calls.
The project develops a framework for designing surfaces, such as a device's display screen, to serve as acoustic and vibrotactile interfaces. The research objectives address three challenges by exploiting the vibro-acoustics of extended surfaces: (1) spatial audio reproduction, (2) spatial audio capture, and (3) haptic feedback and touch sensing. Vibrations of the extended surface induced by acoustic and touch interactions with the structure can be detected by arrays of sensors distributed across the surface. Similarly, arrays of force actuators distributed across the surface will induce bending vibrations that cause the structure to radiate sound. Loudspeaker and microphone array processing methods will be adapted to these vibration actuators and sensors, with applications including noise reduction, sound-field control, and beamforming. Machine learning techniques will be used to identify source-location features in the vibration patterns induced by acoustic and touch interactions with the structure. Acoustic recordings and scanning laser vibrometer measurements will be used to evaluate the effectiveness of the framework in real-world settings.
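As an illustration of how classical array processing might be adapted to structural vibrations, the sketch below localizes a tap on a thin plate using frequency-domain delay-and-sum beamforming with a dispersive bending-wave phase speed. The plate parameters, sensor layout, frequency band, and the functions bending_phase_speed and localize_tap are illustrative assumptions, not methods specified by the project.

    import numpy as np

    # Illustrative plate parameters (glass-like panel); all values are assumptions.
    E, nu, rho, h = 70e9, 0.22, 2500.0, 0.7e-3  # Young's modulus (Pa), Poisson's ratio, density (kg/m^3), thickness (m)
    D = E * h**3 / (12.0 * (1.0 - nu**2))       # bending stiffness of a thin (Kirchhoff) plate

    def bending_phase_speed(f):
        # Bending waves are dispersive: c_B(omega) = sqrt(omega) * (D / (rho * h))**0.25
        return np.sqrt(2.0 * np.pi * f) * (D / (rho * h)) ** 0.25

    def localize_tap(signals, fs, sensor_xy, grid_xy, fmin=2e3, fmax=20e3):
        # signals:   (n_sensors, n_samples) vibration recordings of a single tap
        # sensor_xy: (n_sensors, 2) sensor positions on the plate (m)
        # grid_xy:   (n_points, 2) candidate tap locations (m)
        n_sensors, n = signals.shape
        spectra = np.fft.rfft(signals, axis=1)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        band = (freqs >= fmin) & (freqs <= fmax)
        spectra, freqs = spectra[:, band], freqs[band]
        # Distance from each candidate point to each sensor: (n_points, n_sensors)
        dists = np.linalg.norm(grid_xy[:, None, :] - sensor_xy[None, :, :], axis=2)
        c = bending_phase_speed(freqs)  # frequency-dependent phase speed, (n_bins,)
        # Undo the propagation phase -omega * d / c(f), then sum coherently over sensors.
        phase = np.exp(2j * np.pi * freqs[None, None, :] * dists[:, :, None] / c[None, None, :])
        steered = (spectra[None, :, :] * phase).sum(axis=1)  # (n_points, n_bins)
        power = (np.abs(steered) ** 2).sum(axis=1)           # beamformer output power per point
        return grid_xy[np.argmax(power)]

Unlike beamforming in air, the phase compensation here is frequency dependent, because bending waves in a thin plate are dispersive: their phase speed grows with the square root of frequency, which is one reason loudspeaker and microphone array methods cannot be applied to plate vibrations without adaptation.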
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.