Goal-directed reaching: Allocentric target representations result in an offline mode of control

Abstract

Everyday activities such as copying, drawing, and imitative gestures require allocentric representations of space for successful movement completion. Notably, the top-down nature of allocentric spatial representations is thought to render motor output via a slow and offline mode of cognitive control mediated by visuoperceptual networks. The present investigation tested this hypothesis by providing detailed trajectory analyses of allocentric and target-directed reaching tasks performed with and without concomitant limb vision. Allocentric tasks required reaches to a location defined by the distance between a target and a reference stimulus, whereas target-directed tasks required reaches to a target's veridical location. To examine the extent to which tasks were controlled via feedback-based trajectory amendments (i.e., online) or central planning mechanisms (i.e., offline), we computed the proportion of variance (R²) in each response's ultimate movement endpoint that was explained by the spatial position of the limb at 75% of movement time, separately for the distance and direction axes. Results showed that target-directed limb-visible trials produced smaller R² values and decreased endpoint variability compared to their limb-occluded counterparts. In turn, the latter trial type exhibited R² values and endpoint variability commensurate with allocentric limb-visible and limb-occluded trials (which did not differ). Accordingly, we propose that the presence of limb vision in a target-directed task affords an online mode of control supported by 'fast' visuomotor networks. In contrast, the absence of limb vision or the presence of allocentrically defined endpoints is proposed to render a primarily slow and offline mode of cognitive control mediated by visuoperceptual networks.
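
To make the endpoint-regression logic concrete, the following is a minimal illustrative sketch (not the authors' analysis code) of how the R² metric described above could be computed for one participant, condition, and axis: per-trial limb positions at 75% of movement time are regressed against the corresponding movement endpoints, and a larger R² is taken to indicate offline (planning-based) control, whereas a smaller R² indicates online, feedback-based trajectory amendments. The array names (pos_75, endpoint) and the simulated values are hypothetical.

```python
import numpy as np
from scipy import stats


def endpoint_r_squared(positions_75: np.ndarray, endpoints: np.ndarray) -> float:
    """Proportion of endpoint variance explained by limb position at 75% of movement time.

    positions_75, endpoints: 1-D arrays of per-trial coordinates along a single
    axis (distance or direction). A large R² implies endpoints were largely
    specified before 75% of movement time (offline control); a small R² implies
    trajectories were still being amended online.
    """
    slope, intercept, r_value, p_value, std_err = stats.linregress(positions_75, endpoints)
    return r_value ** 2


# Hypothetical example with simulated trials (mm along the distance axis)
rng = np.random.default_rng(0)
pos_75 = rng.normal(150.0, 5.0, size=40)           # limb position at 75% of movement time
endpoint = 1.3 * pos_75 + rng.normal(0.0, 3.0, 40)  # endpoints partly predicted by that position
print(f"R² (distance axis): {endpoint_r_squared(pos_75, endpoint):.2f}")
```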

Acknowledgments: Natural Sciences and Engineering Research Council of Canada