Abstract

Antipointing entails the top-down decoupling of the normally direct spatial relations between a stimulus and response and requires executing a movement mirror-symmetrical to the location of an exogenously presented target. Previous work (Heath et al. 2010, Exp Brain Res) has shown that antipointing is associated with longer reaction times (RT) and less accurate and more variable endpoints than responses with direct stimulus-response (SR) relations (i.e., propointing). The goal of the present study was to determine whether these antipointing behavioural costs relate to the mediation of response planning and execution via the same (i.e., relative) visual information that supports perceptual judgments. In particular, we sought to determine whether antipointing elicits a visual compression of perceived target location. To that end, participants (N = 13) completed pro- and antipointing responses in separate blocks to briefly (50 ms) presented targets located 100, 120 and 140 mm left and right of a common home position. Notably, pro- and antipointing responses were completed in conditions wherein responses were directed to the veridical target amplitude (i.e., veridical condition) as well as half (i.e., half condition) and double (i.e., double condition) the veridical target amplitude. As expected, antipointing responses produced longer RTs than propointing responses, and RTs for both pro- and antipointing tasks were shorter in the veridical condition than in the half and double conditions. Moreover, pro- and antipointing amplitudes in the veridical and half conditions did not reliably differ (though amplitudes were more variable in the latter); however, antipointing amplitudes in the double condition were reliably shorter than their propointing counterparts. As such, the results for the double condition indicate that antipointing sensorimotor transformations are governed by the same visual information as that associated with perceptual judgments of distance.
In other words, the visual information mediating antipointing is associated with a distance-related compression of visual space.
Acknowledgments: Supported by NSERC Discovery and USRA grants.