Abstract

The present investigation compared the resolution with which visual and tactile feedback regulate object size for grasping (i.e., action) and manual estimation (i.e., perceptual) tasks. In all trials, participants placed their right (i.e., grasping) limb at a start location 200 mm to the right of their midline, while the supinated palm of their left (i.e., non-grasping) limb was positioned 200 mm to the left of their midline and in the same transverse plane as the grasping limb. In the visual modality, the target object was placed on a raised platform 780 mm above the non-grasping limb, and participants grasped or manually estimated the target while receiving continuous visual feedback. In the tactile modality, the target object was placed on the palm of the non-grasping limb, providing continuous tactile feedback, and participants again grasped or manually estimated the target. In both modalities, peak grip aperture (grasping tasks) and grip aperture (manual estimation tasks) scaled equivalently to target size; that is, mean aperture values showed comparable size resolution for the visual and tactile modalities. In contrast, grasping and manual estimation in the tactile modality produced larger just-noticeable-difference (JND) scores than their visual modality counterparts. These results provide evidence that the sensorimotor transformations underlying the integration of tactile feedback for action and perceptual processes are associated with greater neural noise than their visual counterparts.
Acknowledgments: Supported by a Natural Sciences and Engineering Research Council of Canada Discovery Grant.