Auditory experience is an integral part of daily life, and our perception of sound shapes how we interpret and respond to events around us. Interactive modeling and simulation of sound effects and auditory events can significantly enhance numerous scientific and engineering applications, support more intuitive human-computer interaction for desktop and mobile applications, and offer an alternative means of conveying datasets with complex characteristics (multi-dimensional, abstract, conceptual, spatio-temporal, etc.). Yet although hearing is one of our dominant senses, sound rendering has received far less attention than visual rendering as an effective communication channel for human-computer systems, and interactive audio rendering still poses major computational challenges.

In this project, the PI focuses on the rendering of aural effects, with attention to a tighter correlation between sound and visual rendering, in order to communicate information (events, spatial extent, physical setting, emotion, ambience, etc.) to a user in a virtual world, thereby increasing the user's sense of presence and spaciousness while improving their ability to locate sound sources. The PI's goal is to make radical advances in interactive sound rendering and application-specific auditory interaction techniques in order to achieve high-fidelity auditory interfaces for large-scale virtual reality. In particular, she will address the computational bottlenecks in example-guided, physics-based sound synthesis; develop new hybrid algorithms for creating realistic acoustic effects in complex, dynamic 3D virtual environments; demonstrate the techniques on acoustic walkthroughs for a variety of applications; and evaluate the resulting auditory systems and their impact on the target applications. The work will build upon the PI's prior accomplishments to make several major scientific advances that significantly extend the state of the art in auditory displays and human-centric computing. Project outcomes will include new hybrid acoustic algorithms for realistic sound effects, novel example-guided physics-based sound synthesis, innovative applications of auditory displays, and a better understanding of human auditory perception.

Broader Impacts: Applications of the interactive sound rendering enabled by this project will span a wide variety of domains, including assistive technology for the visually impaired, multimodal human-centric interfaces, immersive teleconferencing, and rapid prototyping of acoustic spaces for urban planning, structural design, and noise control. Project outcomes, including scientific advances and software systems, will be disseminated through websites, publications, workshops, community outreach, and other professional contacts. Beyond acoustic simulation, this research will ultimately provide scientific foundations for solving wave/sound propagation problems in complex domains for seismology, geophysics, meteorology, engineering design, urban planning, and related fields.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1320644
Program Officer: Ephraim Glinert
Project Start:
Project End:
Budget Start: 2013-08-01
Budget End: 2018-08-31
Support Year:
Fiscal Year: 2013
Total Cost: $499,991
Indirect Cost:
Name: University of North Carolina Chapel Hill
Department:
Type:
DUNS #:
City: Chapel Hill
State: NC
Country: United States
Zip Code: 27599