Proprioceptive recalibration and updating of predicted sensory consequences are neither exclusively implicit nor explicit

Abstract

Knowing where our limbs are is essential for movement. Limb position estimates are informed by vision, proprioception, and predictions of sensory consequences based on efference copies. When visual feedback of the hand is rotated during training, both proprioception and prediction are adjusted towards the visual feedback. Here we test whether these changes are mainly implicit, such that they should decrease when explicit information increases. All participants trained with a 30-degree rotated hand-cursor, and we manipulated explicit learning across three groups: 1) an "instructed" group given a strategy to counter the rotation, 2) a "cursor-jump" group that saw the cursor jump from a 0-degree to a 30-degree rotation on every trial, and 3) an "implicit" control group that received neither instructions nor different stimuli. During training, the instructed group countered the rotation immediately, while the other two groups needed 15-20 trials to compensate for the cursor rotation. Moreover, when asked to include or exclude any strategy they had learned to counter the rotation, only the implicit group could not switch their compensation on or off at will, suggesting they were unaware of their learning. Participants also localized their hand before and after training. Either they moved their own hand, allowing hand localization based on proprioception and efference-based predictions, or a robot moved their hand, providing only proprioceptive hand position. We found no differences between groups in either recalibrated proprioception or updated predictions. Since the manipulation of explicit learning clearly worked, our localization data suggest that neither proprioceptive recalibration nor the updating of predicted sensory consequences falls on one side of the implicit-explicit learning divide; rather, they appear to be separate processes.
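
For readers unfamiliar with the visuomotor rotation paradigm, the sketch below illustrates the feedback manipulation described above: the cursor shown to the participant is the real hand position rotated about the home position by a fixed angle, so reaches must deviate in the opposite direction to bring the cursor onto the target. This is a minimal illustrative sketch, not the authors' experimental code; the function and variable names, units, and the home-position convention are all assumptions.

```python
import numpy as np

def rotated_cursor(hand_xy, home_xy, angle_deg=30.0):
    """Return the cursor position: the hand position rotated
    about the home position by angle_deg (counterclockwise)."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(home_xy) + rot @ (np.asarray(hand_xy) - np.asarray(home_xy))

# Example: the hand reaches 10 cm straight ahead from the home position,
# but with a 30-degree rotation the cursor lands 30 degrees off-target,
# so full compensation requires aiming 30 degrees in the other direction.
home = np.array([0.0, 0.0])
hand = np.array([0.0, 10.0])
print(rotated_cursor(hand, home))  # ~[-5.0, 8.66]
```

Under this convention, the "cursor-jump" condition would correspond to switching angle_deg from 0 to 30 partway through each reach, making the perturbation visually explicit on every trial.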