Although large parts of our brains are devoted to processing sound cues, and sound plays an important role in the way we interface with the world, this rich channel has not been extensively exploited for displaying information. The mechanisms by which received sound waves are processed neurally to form objects with auditory properties along many perceptual dimensions, including three corresponding to the source location (range, azimuth, elevation) and three corresponding to qualities ascribed to the source (timbre, pitch, and intensity), are beginning to be understood. Over the last decade there has been significant progress in understanding how acoustical cues arise, how the biological system performs transduction and neural processing to extract relevant features from sound, and how we perceive and organize objects in acoustical scenes. Our goal is to exploit this understanding and to uncover the scientific principles that govern the computerized rendering of artificial sound scenes containing multiple information-rich, feature-rich sound objects. We will test, use, and extend this knowledge by creating auditory user interfaces for the visually impaired and the sighted. The work aims both at developing interfaces and at answering fundamental questions such as: Is it possible to usefully map "X" to the auditory axes of a virtual auditory space? Here "X" could be an image (e.g., a face), a map, tabular data, uncertain data, or temporally varying data. Are there neural correlates that can guide natural mappings to acoustic cues? What limitations does our perception place on rendering hardware? How important is
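
To make the first question concrete, the following is a minimal sketch, in Python, of what mapping one record of tabular data onto the auditory axes of a virtual auditory space might look like. The field names, parameter ranges, and mapping choices here are illustrative assumptions, not part of the proposed work.

from dataclasses import dataclass

@dataclass
class SoundObject:
    azimuth_deg: float    # horizontal source direction, -90 (left) to +90 (right)
    elevation_deg: float  # vertical source direction, -40 to +90
    range_m: float        # source distance from the listener, in meters
    pitch_hz: float       # fundamental frequency of the rendered tone
    gain_db: float        # intensity relative to a reference level

def normalize(value, lo, hi):
    """Clamp and rescale a raw data value to [0, 1]."""
    if hi == lo:
        return 0.0
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def map_row_to_sound(row, bounds):
    """Map a hypothetical data record onto auditory display axes.

    `row` holds raw feature values; `bounds` holds (min, max) per feature
    so the values can be normalized before assignment to acoustic cues.
    """
    x = normalize(row["x"], *bounds["x"])
    y = normalize(row["y"], *bounds["y"])
    v = normalize(row["value"], *bounds["value"])
    return SoundObject(
        azimuth_deg=-90.0 + 180.0 * x,        # spatial axis: left-right position
        elevation_deg=-40.0 + 130.0 * y,      # spatial axis: up-down position
        range_m=1.0,                          # distance held fixed in this sketch
        pitch_hz=200.0 * (2.0 ** (2.0 * v)),  # two-octave pitch range, 200-800 Hz
        gain_db=-20.0 + 20.0 * v,             # louder for larger values
    )

if __name__ == "__main__":
    bounds = {"x": (0.0, 10.0), "y": (0.0, 10.0), "value": (0.0, 1.0)}
    print(map_row_to_sound({"x": 2.5, "y": 7.0, "value": 0.6}, bounds))

Whether such a mapping is useful, and which data dimensions should go to spatial cues versus source qualities such as pitch and intensity, is exactly the kind of question the proposed perceptual studies are meant to answer.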