The United States is a world leader in software and in multimedia content (e.g., music and film). To remain so, we must continually raise the bar in both software and media production. Software tools for media production (e.g., the audio production suite Pro Tools) often have complex interfaces, conceptualized in ways that make it difficult for all but the most expert users to realize the power of these tools. Complex interfaces and steep learning curves can discourage creative people from doing their best work with such tools. Here, we focus on audio production tools. We propose a user-centered approach to bridge the disconnect between existing audio production tools and the conceptual frameworks within which many people work, both expert musicians and the broader public. The tools we develop will automatically adapt to the user's conceptual framework, rather than forcing the user to adapt to the tools. Where appropriate, the tools will speed and enhance this adaptation using active learning informed by interactions with previous users (transfer learning). The tools will also automatically build a crowdsourced audio concept map. This map will support computer-aided, directed learning, so that tool users can expand their conceptual frameworks and abilities. By letting people manipulate audio on their own terms and enhancing their knowledge of such tools with directed learning, we expect to transform the interaction experience, making the computer a device that supports and enhances creativity rather than an obstacle.
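To make the core idea concrete, here is a minimal sketch (illustrative only, not the project's actual system) of how a descriptive term such as "bright" could be mapped onto a machine-manipulable parameter, here a hypothetical high-shelf EQ gain, by actively querying a listener with paired comparisons and narrowing in on the setting they associate with the term:

```python
# Illustrative sketch: actively learning the EQ gain a listener
# associates with a descriptive term (e.g., "bright") from
# paired-comparison queries. The parameter, its range, and the
# query strategy (ternary-search-style bisection) are assumptions
# for this example, not the project's actual method.

def learn_term_mapping(prefers, lo=0.0, hi=12.0, tol=0.5):
    """Narrow down the gain (in dB) a listener matches to a term.

    prefers(a, b) -> True if the listener says gain setting `a`
    sounds closer to the target term than setting `b`.
    Assumes the listener's judgments are unimodal over the range.
    """
    while hi - lo > tol:
        third = (hi - lo) / 3.0
        a, b = lo + third, hi - third   # two probe settings per query
        if prefers(a, b):
            hi = b                      # target lies toward the low probe
        else:
            lo = a                      # target lies toward the high probe
    return (lo + hi) / 2.0

# Simulated listener whose internal notion of "bright" is +8 dB.
target = 8.0
estimate = learn_term_mapping(lambda a, b: abs(a - target) < abs(b - target))
print(round(estimate, 1))  # converges near 8.0
```

Each query here stands in for playing the listener two processed versions of the same audio and asking which sounds more "bright"; transfer learning from previous users could shrink the initial search interval before any queries are asked.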

This work will have a number of broader impacts. The tools developed will be directly usable by practicing musicians and will also facilitate learning and creativity for the general public. These techniques will also be applicable to personalization of hearing aids and to new diagnostic systems for audiologists. Our approach to tool personalization is core work in human-computer interaction and should generalize to other creative activities (e.g., image manipulation). Resulting advances in active and transfer learning will be of great value to machine learning researchers. Finding the relationships between quantifiable parameters of audio and the language and metaphors used by practicing musicians to describe sound is central to this work, and is of great interest to cognitive scientists, linguists, artificial intelligence researchers, and engineers. Concept maps for audio terms should also prove useful for machine translation. Broad application of techniques to map human descriptive terms onto machine-manipulable parameters will change expectations for both artists and scientists. Artists will be able to explore new lines of creativity that currently require significant investments of time in vastly disparate fields (e.g., signal processing and painting). This has the potential to transform information science and lead to new cognitive models of creativity, forming the basis for new approaches to education and research in both technology and art.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1116384
Program Officer: Ephraim P. Glinert
Project Start:
Project End:
Budget Start: 2011-09-01
Budget End: 2014-08-31
Support Year:
Fiscal Year: 2011
Total Cost: $499,804
Indirect Cost:
Name: Northwestern University at Chicago
Department:
Type:
DUNS #:
City: Evanston
State: IL
Country: United States
Zip Code: 60201