The growing power and flexibility of information technology has allowed developers to build increasingly creative tools; however, that same capacity has often led to tools usable only by the most expert power users. One area of creative endeavor of great cultural and economic importance is music. In recent decades, a vast variety of audio production tools have been introduced to enhance and facilitate music creation. Unfortunately, because these tools come from software developers more familiar with Java than with musical creativity, many are extremely complex and rely on concepts and controllers very different from those musicians are accustomed to and comfortable with. Because the tools are complex, and interfaces to software suites change every few years, musicians may fail to embrace them or only "half-learn" them, inhibiting creativity, limiting the range of possible works produced, and undercutting the capacity of powerful technology to enhance musical creation.

Many musicians think about sound in individualistic terms that may have no known mapping onto the controls of existing audio production tools. For example, a violinist may want to make a recording of her violin sound shimmery. While she has a clear concept of what a shimmery sound is, she may not know how to articulate it in terms that let a producer map "shimmery" onto the available audio tools (such as reverberation and equalization). This project develops a computational tool that works alongside the musician to quickly learn how acoustic features map onto an audio concept, and then creates a simple controller to manipulate audio in terms of that concept. In the case of the violinist, the tool would learn what shimmery means to her and then create a knob that lets her make a sound more or less shimmery.
This project uses a user-centered design approach to develop audio production tools that automatically adapt to the user's work style, rather than forcing the user to adapt to the tools. The result will be an example of how audio tools can enhance creativity and, more broadly, of how affective concepts can be automatically mapped to the more discrete parameters associated with digital media production.
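To make the idea concrete, one minimal sketch (purely illustrative, not the project's actual method) is to treat a concept like "shimmery" as a direction in the space of audio-effect parameters: play the musician several settings, collect her ratings, fit a linear axis to those ratings, and expose motion along that axis as a single knob. All function names and the five-band EQ setup below are hypothetical.

```python
import numpy as np

def learn_concept_axis(settings, ratings):
    """Fit weights w so that settings @ w approximates the user's ratings.

    settings: (n_examples, n_bands) array of EQ gains the user heard
    ratings:  (n_examples,) array of how "shimmery" each example sounded
    Returns a unit direction in parameter space.
    """
    w, *_ = np.linalg.lstsq(settings, ratings, rcond=None)
    return w / np.linalg.norm(w)

def concept_knob(base_setting, axis, amount):
    """Move an EQ setting along the learned concept axis by `amount`."""
    return base_setting + amount * axis

# Toy demo with synthetic ratings that track the two highest bands,
# so the learned axis should point toward boosting those bands.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 5))   # 20 rated 5-band EQ settings
y = X[:, 3] + X[:, 4]                  # synthetic "shimmer" ratings
axis = learn_concept_axis(X, y)
brighter = concept_knob(np.zeros(5), axis, 2.0)  # turn the knob up
```

A real system would of course learn from audio features and richer effect chains, and with far fewer examples, but the sketch shows how a handful of ratings can be collapsed into one musician-facing control.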

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0757544
Program Officer: Ephraim P. Glinert
Project Start:
Project End:
Budget Start: 2008-06-15
Budget End: 2012-05-31
Support Year:
Fiscal Year: 2007
Total Cost: $166,000
Indirect Cost:
Name: Northwestern University at Chicago
Department:
Type:
DUNS #:
City: Evanston
State: IL
Country: United States
Zip Code: 60201