Abstract
Pantomime-grasping entails a movement with dissociated stimulus-response relations, or a response directed toward the location previously occupied by a physical target object. Unlike naturalistic grasps, which are mediated via a target's absolute visual properties and specified in egocentric reference frames, pantomime-grasping is supported by relative and allocentric visual cues. Notably, however, our group recently demonstrated that providing haptic feedback (i.e., of a physically removed target) following a pantomime-grasp supports an absolute visuo-haptic calibration (Davarpanah Jazi et al., 2015, Exp. Brain Res.). In the current study we examined the specific sensory and spatial requirements necessary to support a visuo-haptic calibration during pantomime-grasping. To that end, in a series of experiments participants pantomime-grasped differently sized target objects while receiving haptic feedback (i.e., through physical touch) following response completion. Notably, the target's spatial location as well as online limb and target vision were manipulated. Results showed that an absolute visuo-haptic calibration is limited to situations wherein a spatially overlapping pantomime-grasp is performed following a visually based memory delay. In accounting for our results, we draw on the maximum likelihood estimation (MLE) model's tenet that multisensory cues integrate in a statistically optimal fashion, with processing weighted toward the more reliable sense. As such, we propose that the decay of visual cues increases the weighting and salience of haptic signals and thereby supports an absolute visuo-haptic calibration.
Acknowledgments: Natural Sciences and Engineering Research Council