Abstract

Given that attentional resources are limited and incoming sensory information is abundant, humans must learn to focus on task-relevant information while ignoring irrelevant stimuli. Multisensory integration (MSI) combines sensory information from different modalities such that multimodal stimuli are responded to more quickly and accurately than unimodal stimuli. The current experiment used audiovisual Stroop stimuli to determine how competing and/or supporting sensory information is processed in a two-choice goal-directed reaching task. Congruent, incongruent, or neutral audiovisual Stroop stimuli were presented for 500 ms, followed by two colour-coded targets. Participants were instructed to respond either to the written word or to the spoken word. Movement trajectories were recorded using three-dimensional motion capture. Given the spatial nature of the task, we predicted that incongruent visual information would be more detrimental to task performance than incongruent auditory information. We also predicted that reaction and movement times would be shortest in the respond-visual condition, with earlier trajectory deviations. A 2 (response modality) × 3 (congruency) repeated-measures ANOVA found that response modality influenced trajectory deviations: responses in the respond-visual condition showed earlier deviations, especially when the written word was congruent with the auditory stimulus. Auditory-neutral trials led to significantly longer reaction times, suggesting that auditory stimuli are processed regardless of task instructions. Thus, we cannot assume that humans can choose to ignore extraneous auditory information. By examining how congruent and incongruent stimuli influence nonconscious limb control, these results can help inform the design of technology interfaces that facilitate human performance.
Acknowledgments: We thank NSERC for funding this research.