The theory of the "unity of the senses" states that some attributes of perceptual experience (such as intensity, pitch, and brightness), as well as some spatio-temporal features (such as spatial location and temporal pattern), are common to vibrotactile, auditory, and visual perception. The proposed research will, first, delineate which attributes and features are common to touch and hearing, and to hearing and vision; second, determine how absolute the cross-modal equivalences are; and third, assess how these common properties function in processing perceptual information. To do this we will employ three classes of procedure: (1) "free" cross-modal matching, where subjects match, for instance, tactile to auditory and auditory to tactile stimuli; (2) cross-modal similarity scaling, where subjects rate the similarity or dissimilarity of, for instance, tactile to auditory as well as tactile to tactile and auditory to auditory stimuli; and (3) information processing, where subjects identify, discriminate, or classify stimuli while reaction time is measured. In discrimination, subjects discriminate stimuli in one modality, say two sounds, when these are accompanied by cross-modally matching or mismatching stimuli from another modality; in classification, subjects categorize multimodally correlated, multimodally uncorrelated, and unimodal stimuli. From the results of the matching and similarity scaling tasks (1 and 2 above), it will be possible to make quantitative predictions about the relative effectiveness, in information processing (3 above), of multimodal combinations of various attributes and features. In general, it is predicted that greater cross-modal similarity will accrue to features than to attributes, and to certain attributes compared to others.
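As a purely illustrative aside (not part of the proposal itself), the sketch below shows one way reaction-time data from the discrimination procedure might be organized and compared across conditions. The condition labels, mean reaction times, and the assumed advantage for cross-modally matching accompaniments are hypothetical assumptions, not project data or results.

```python
# Hypothetical illustration only: a minimal sketch of how reaction times (RTs)
# from the cross-modal discrimination task might be grouped and summarized.
# All condition names and numerical values below are assumptions.
import random
import statistics

random.seed(0)

def simulate_rt(mean_ms: float, sd_ms: float, n_trials: int) -> list[float]:
    """Draw hypothetical reaction times (ms) for one experimental condition."""
    return [random.gauss(mean_ms, sd_ms) for _ in range(n_trials)]

# Assumed pattern: discriminating two sounds is faster when an accompanying
# tactile stimulus cross-modally matches the sound than when it mismatches.
conditions = {
    "unimodal (sound alone)":          simulate_rt(mean_ms=450, sd_ms=40, n_trials=50),
    "cross-modally matching touch":    simulate_rt(mean_ms=420, sd_ms=40, n_trials=50),
    "cross-modally mismatching touch": simulate_rt(mean_ms=480, sd_ms=40, n_trials=50),
}

for name, rts in conditions.items():
    print(f"{name:35s} mean RT = {statistics.mean(rts):6.1f} ms "
          f"(sd = {statistics.stdev(rts):5.1f} ms)")
```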