“Blink and You’ll Miss It”: Visual Hand Feedback Reveals When the Vergence-Accommodation Conflict Disrupts Aiming in VR

Abstract

Virtual reality (VR) systems using head-mounted displays combine stereoscopic rendering and position tracking to simulate the perceptual experience of, and movement within, a physical environment. This mediated experience, however, can introduce perceptual mistranslations that impact movement. Our previous research suggests that one such mistranslation, depth compression, can lead to undershooting during visually guided aiming. The present study was designed to explore the impact of vision on the spatiotemporal characteristics of visually guided movements in VR. Participants (n=20) performed pointing movements to targets at one of four distances in an immersive VR environment. Their virtual hand was either always visible (online guidance) or hidden 350, 200, 100, 50, or 0ms after movement onset to reduce online visual feedback. Overall, constant error (CE) was smallest in the online guidance and 350ms conditions, and there was overshooting in all other conditions. Interestingly, the impact of depth compression was revealed only in the online guidance and 350ms conditions, wherein undershooting increased as target distance increased. Trajectory analyses indicated that undershooting emerged after peak deceleration and depended on the visual feedback available between peak velocity and peak deceleration (i.e., in the online guidance and 350ms conditions). These findings suggest that movement undershooting in VR results from depth compression impacting online control mechanisms more than planning mechanisms. Further, compared with previous findings on movements in the physical environment, wherein ~100ms of vision is sufficient for online control, the current study indicates that visually guided movements in VR require a longer window of visual feedback for online control.