Using Advanced Driving Simulation and Vibrotactile Cues to Train Drivers to Interact with Next-Generation Autonomous Vehicles
Abstract
There are six levels of vehicle automation, ranging from Level 0 (no automation) to Level 5 (full automation) [1]. According to several projections, the majority of vehicles on the road will be at
intermediate levels for the next several years, meaning that vehicle-to-human takeover will be
required in cases where the automated systems can no longer function due to design limitations, such as under
poor weather conditions or in a construction zone [2], [3]. As shown in Fig. 1, the takeover process
consists of signal-response and post-takeover phases, which involve multiple steps: perceiving the
takeover request (TOR), moving hands and feet to prepare for manually controlling the vehicle,
assessing information in the driving environment, strategizing
maneuvering plans, and executing actions. The process generally lasts a few seconds and could be
very challenging if drivers are engaged in non-driving-related tasks [4] that utilize visual and
auditory resources, leaving them out of the loop. In this case, Multiple Resource
Theory (MRT) [5] posits that drivers’ ability to process critical warning information, i.e., a TOR
presented via the visual/auditory channels, may be negatively impacted. Therefore, information in
the driving environment that drivers must acknowledge could be conveyed through a more
available sensory channel, i.e., the tactile modality. However, given that information presented in
the tactile channel can appear in many (complex) formats with different associated meanings [6],
[7], it is critical to assess drivers’ ability to comprehend meaningful tactile patterns and determine
their effect on drivers’ takeover performance.