Human observers localize events in the world by using sensory signals from multiple modalities. We evaluated two theories of spatial localization that predict how visual and auditory information are weighted when these signals specify different locations in space. According to one theory (visual capture), the signal that is typically most reliable dominates in a winner-take-all competition, whereas the other theory (maximum-likelihood estimation) proposes that perceptual judgments are based on a weighted average of the sensory signals in proportion to each signal’s relative reliability. Our results indicate that both theories are partially correct, in that relative signal reliability significantly altered judgments of spatial location, but these judgments were also characterized by an overall bias to rely on visual over auditory information. These results have important implications for the development of cue integration and for neural plasticity in the adult brain that enables humans to optimally integrate multimodal information.
© 2003 Optical Society of America
OCIS codes: (330.0330) Vision, color, and visual optics; (330.1400) Vision — binocular and stereopsis; (330.4060) Vision modeling; (330.7320) Vision adaptation
Peter W. Battaglia, Robert A. Jacobs, and Richard N. Aslin, "Bayesian integration of visual and auditory signals for spatial localization," J. Opt. Soc. Am. A 20, 1391-1397 (2003)
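The maximum-likelihood rule described in the abstract — a weighted average of the sensory signals in proportion to each signal's relative reliability — can be sketched as follows. This is an illustrative implementation under the standard assumption of independent Gaussian noise on each cue; the function name, variable names, and example values are hypothetical, not taken from the paper.

```python
def ml_estimate(x_visual, sigma_visual, x_auditory, sigma_auditory):
    """Combine two noisy location estimates by inverse-variance weighting.

    Each cue's reliability is the inverse of its noise variance; the
    maximum-likelihood estimate is the reliability-weighted average.
    """
    r_v = 1.0 / sigma_visual ** 2    # visual reliability = 1 / variance
    r_a = 1.0 / sigma_auditory ** 2  # auditory reliability
    w_v = r_v / (r_v + r_a)          # visual weight grows with its reliability
    w_a = r_a / (r_v + r_a)
    estimate = w_v * x_visual + w_a * x_auditory
    # Combined variance is 1/(r_v + r_a): never worse than either cue alone.
    combined_sigma = (1.0 / (r_v + r_a)) ** 0.5
    return estimate, combined_sigma

# Example: a reliable visual cue at 0 deg, a noisier auditory cue at 10 deg.
# The combined estimate is pulled strongly toward the more reliable visual cue.
loc, sd = ml_estimate(0.0, 1.0, 10.0, 3.0)
```

A winner-take-all "visual capture" model would instead return the visual location outright whenever vision is the typically more reliable cue; the paper's finding is that observers fall between these two accounts, weighting by reliability but with an overall visual bias.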