Effects of immersive visual environment-change cues on motor learning during a virtual-reality target-hitting task

Abstract

When performing motor tasks, we improve performance by modifying future movements to correct for observed errors. We do so either by updating existing internal models of the movement interaction or by creating new internal models and switching to and from them. Assignment of an error's source, termed error attribution, can influence whether we update existing internal models or create new ones. Because the cause of an error is often ambiguous, sensory cues can be used to estimate its likely source. In a target-hitting task, participants made arm movements to roll a ball to targets in a virtual-reality environment. To facilitate motor adaptation, we induced errors either by modifying the mapping between the arm movement and the initial movement of the ball, or by applying a constant acceleration to the ball only after release. For both types of error, we explored whether informative visual-slant cues facilitate model creation and switching rather than model updating. We found that the error-induction method alone, and not the visual cues, determined whether errors led to model creation and switching or to model updating. In follow-up experiments, we found that the internal-model updates made during this task account both for errors attributed to the hand used in the movement and for the physical properties of the environment on which the interaction occurs. That is, the internal models being updated are not purely models for the control of limb movement, but interaction models.