This project is designed to determine how parameters of acoustic signals are represented in the auditory cortex. Damage to the auditory cortex is known to produce large deficits in auditory localization, temporal processing, signal identification, and speech processing. Understanding how sounds are represented in the cortex should help to explain why these deficits occur and to guide the treatment of conditions that damage the auditory cortex, such as stroke, epilepsy, tumors, and trauma. The proposal is specifically designed to study how cells in the primary auditory cortex (AI) and the posterior auditory field (PAF) of the cat integrate spectral and temporal information. These two fields have been chosen because of evidence that AI and PAF belong to distinct parallel information pathways in auditory cortex (Rouiller et al. 1991). We will test the hypothesis that spectral and temporal analyses are segregated along these pathways. To this end, single-neuron responses to tone pairs will be compared across ventral AI (AIv), dorsal AI (AId), and PAF.

The goals of the proposed project are (1) to characterize any physiologically based subdivisions of PAF through multiple-unit mapping of intensity and frequency tuning; (2) to determine the functional segregation of the analysis of temporal parameters related to scene analysis; (3) to determine the functional segregation of the analysis of intensity parameters related to scene analysis; and (4) to determine potential differences in the classes of sounds analyzed by AId, AIv, and PAF cells.

Preliminary evidence from three laboratories has demonstrated that characterizing the second-order nonlinearity is critical to determining how the brain analyzes sounds (Nelken et al. 1991; Shamma et al. 1993; Calhoun and Schreiner 1993). Pure-tone responses are poor predictors of responses to complex sounds, in part because of strong inhibition and facilitation that are not evident in a cell's spike output to pure tones. The results of the proposed experiments will (1) characterize topographical maps in PAF beyond the known tonotopy; (2) characterize the latency and duration of two-tone inhibition and facilitation in these subdivisions; (3) extensively characterize the intensity analysis of spectral elements of complex stimuli by AIv, AId, and PAF neurons; and (4) extensively characterize second-order frequency tuning curves in these subdivisions. These results will help to define and characterize the segregation of auditory cortical function with respect to multiple information-bearing parameters of sound, and will be crucial for understanding how the cortex uses this information for auditory scene analysis.
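To make the point about second-order (two-tone) nonlinearity concrete, the following sketch illustrates one simple way such an interaction could be quantified: the observed response to a tone pair is compared with the linear prediction from the two pure-tone responses, with deviations indicating facilitation or suppression. This is an illustrative assumption rather than the analysis specified in the proposal; the function name, firing rates, and spontaneous-rate correction are hypothetical placeholders.

    # Illustrative sketch (not from the proposal): quantifying two-tone
    # facilitation/inhibition by comparing a neuron's observed response to a
    # tone pair against the linear prediction from its pure-tone responses.
    # All names, values, and corrections here are hypothetical.

    def interaction_index(rate_f1, rate_f2, rate_pair, spont=0.0):
        """Return the two-tone interaction relative to a linear prediction.

        rate_f1, rate_f2 : firing rates (spikes/s) to each tone presented alone
        rate_pair        : firing rate to the simultaneous tone pair
        spont            : spontaneous rate, subtracted so that only driven
                           activity is compared
        """
        linear_prediction = (rate_f1 - spont) + (rate_f2 - spont)
        observed = rate_pair - spont
        return observed - linear_prediction  # > 0 facilitation, < 0 suppression

    if __name__ == "__main__":
        # Hypothetical example: each tone alone drives 20 spikes/s above a
        # 5 spikes/s spontaneous rate, but the pair drives only 15 spikes/s,
        # indicating two-tone suppression not predictable from pure tones alone.
        delta = interaction_index(rate_f1=25.0, rate_f2=25.0, rate_pair=20.0, spont=5.0)
        label = "facilitation" if delta > 0 else "suppression" if delta < 0 else "linear"
        print(f"interaction = {delta:+.1f} spikes/s ({label})")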