Abstract

Both sensory feedback and efference are deemed important for the detection and correction of movement errors (Wolpert & Ghahramani, 2000). According to the multiple processes model of limb control (Elliott et al., 2010), the earliest phase of an ongoing movement (i.e., impulse regulation) relies on both efference and sensory feedback, whereas the later phase (i.e., limb-target regulation) requires only vision and proprioception for online trajectory amendments. The purpose of this study was to investigate the contributions of vision, proprioception, and efference to endpoint error detection during manual aiming. Participants were asked to judge whether their movement undershot or overshot a target. Visual information about the to-be-judged movement was limited to a brief window (40 ms) early in the trajectory (a critical time for making endpoint error judgements, while also limiting terminal feedback). The judgement task was performed under three conditions: 1) active reach (vision + proprioception + efference); 2) passive guidance by a robotic arm (vision + proprioception); and 3) observation of a fake hand guided by the robotic arm (vision only). To maintain consistency in the trajectories across conditions, trajectories recorded in the active reach condition were replayed during the passive guidance and observation conditions. Endpoint error judgements were more accurate in the active reach condition than in either robot-guided condition. Furthermore, participants were more accurate in the passive guidance condition than in the observation condition. Thus, online error detection processes may rely not only on vision and proprioception, but also on efference.
Acknowledgments: Natural Sciences and Engineering Research Council of Canada (NSERC), University of Toronto (UofT), Canada Foundation for Innovation (CFI), Ontario Research Fund (ORF), Mitacs