Human observers can recognize and identify familiar faces with ease, and can extract additional information from both familiar and unfamiliar faces, including the sex, approximate age, race, and current emotional state of the person. Nevertheless, faces pose challenging computational problems for the perceiver. They are highly similar to one another, containing the same features arranged in roughly the same configuration. Perceivers must, therefore, be able to encode very subtle variations in the form and configuration of facial features. We develop a quantifiable theory of the perceptual information in faces and model the learning of this information. Faces are represented using "features" derived from the statistical structure of a set of learned faces, and the information most useful for discriminating among faces emerges as an optimal code. Our theory is implemented as a computational autoassociative memory (computer simulation) that operates on image-based codings of faces. The memory represents faces as a weighted sum of the eigenvectors (principal components, "features") of a covariance matrix of learned face images; these facial features may be displayed visually and are useful for both face recognition and visually derived semantic categorizations of faces. We believe many face processing tasks and empirical phenomena are constrained more by perceptual factors than by complicated cognitive and semantic ones. Hence, our primary goal is to determine the extent to which perceptual constraints alone can account for these tasks and phenomena. As it is beyond the scope of the present proposal to examine all such phenomena, we have chosen a diverse subset. Our strategy in each case will be (a) to relate model-predicted accuracy and facial characteristic ratings to human measures of the same at the level of individual faces and (b) to alter face images synthetically so as to change accuracy or ratings in predictable ways for human observers viewing the same set of faces processed by the autoassociative memory. We will address three issues: (a) typicality -- more typical faces are less well recognized; (b) the perception of the sex of faces -- we model the structural differences between male and female faces and relate them to human ratings and performance on sex-linked facial characteristics; and (c) the quantification and perception of the age of a face. Finally, we will analyze the eigenvectors in basic visual processing terms and compare the quality of face representations that emerge from principal components analysis as a function of spatial scale.
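
To make the eigenvector-based face coding concrete, the following is a minimal illustrative sketch (not the proposal's actual implementation) of representing a face as a weighted sum of the principal components of a set of learned face images. It assumes faces are flattened into pixel vectors; the random data and variable names here are stand-ins for a real training set.

```python
import numpy as np

# Stand-in for a learned set of face images, each flattened to a pixel vector.
rng = np.random.default_rng(0)
n_faces, n_pixels = 100, 32 * 32
faces = rng.normal(size=(n_faces, n_pixels))

# Center the faces; the principal components ("features") of their covariance
# are obtained via SVD without forming the pixel-by-pixel covariance matrix.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Represent one face as a weighted sum of the top-k eigenvectors.
k = 20
eigenfaces = components[:k]                        # k x n_pixels
weights = (faces[0] - mean_face) @ eigenfaces.T    # projection onto the "features"
reconstruction = mean_face + weights @ eigenfaces  # weighted sum of eigenvectors

print("reconstruction error:", np.linalg.norm(faces[0] - reconstruction))
```

The weight vector serves as the low-dimensional code for a face, and the eigenvectors themselves can be reshaped back into images and displayed, which is what allows the "features" to be inspected visually.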