Sensory integration during goal-directed reaches: The effects of manipulating target availability

Abstract

When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the location of the object and/or the limb. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned greater weights (Ernst & Banks, 2002). In this research we examined whether the brain can adjust which sensory cue it weights most heavily. In particular, we asked whether the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twelve healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions (e.g., on V and VP trials, the visual target was available for the entire reach, or it was removed at the go-signal or 1, 2, or 5 seconds before the go-signal). To establish which sensory cue the brain weighted most heavily, we compared the endpoint positions achieved on V and P reaches to those achieved on VP reaches. Results indicated that subjects weighted sensory cues in accordance with the MLE model across all delay conditions, and that these weights were similar regardless of the visual delay. Moreover, while endpoint errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not sufficient to change subjects' weighting strategy, further indicating that sensory information is integrated in accordance with the reliability of each sensory cue.
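The MLE cue-combination rule referenced above (Ernst & Banks, 2002) can be sketched as follows: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone. This is a minimal illustrative sketch; the function name and all numeric values are hypothetical examples, not data or code from this study.

```python
# Illustrative sketch of MLE cue combination (Ernst & Banks, 2002).
# Function name and numeric values are hypothetical, not from this study.

def mle_combine(est_v, var_v, est_p, var_p):
    """Combine a visual and a proprioceptive position estimate.

    Each cue is weighted by its reliability (inverse variance);
    the combined estimate has lower variance than either cue alone.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)  # visual weight
    w_p = 1 - w_v                                # proprioceptive weight
    est_vp = w_v * est_v + w_p * est_p           # combined position estimate
    var_vp = (var_v * var_p) / (var_v + var_p)   # combined variance
    return est_vp, var_vp, w_v, w_p

# Example: vision is the more reliable cue (smaller variance), so it dominates.
est, var, w_v, w_p = mle_combine(est_v=10.0, var_v=1.0, est_p=12.0, var_p=4.0)
print(est, var, w_v)  # combined estimate lies closer to the visual estimate
```

Note that the combined variance (0.8 in this example) is smaller than either single-cue variance, which is the signature benefit of integration under the MLE model.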

Acknowledgments: Research support: Natural Sciences and Engineering Research Council (EKC)