DeepTake: Prediction of Driver Takeover Behavior using Multimodal Data

Erfan Pakdamanian

Shili Sheng

Sonia Baee

Lu Feng

Seongkook Heo

Sarit Kraus

Automated vehicles promise a future where drivers can engage in non-driving tasks without their hands on the steering wheel for prolonged periods. Nevertheless, automated vehicles may still occasionally need to hand control back to the driver due to technology limitations and legal requirements. While some systems determine the need for a driver takeover using driver context and road conditions to initiate a takeover request, studies show that the driver may not react to it. We present DeepTake, a novel deep neural network-based framework that predicts multiple aspects of takeover behavior to ensure that the driver can safely take over control when engaged in non-driving tasks. Using features from vehicle data, driver biometrics, and subjective measurements, DeepTake predicts the driver's intention, time, and quality of takeover. We evaluate DeepTake's performance using multiple evaluation metrics. Results show that DeepTake reliably predicts takeover intention, time, and quality with accuracies of 96%, 93%, and 83%, respectively. Results also indicate that DeepTake outperforms previous state-of-the-art methods in predicting driver takeover time and quality. Our findings have implications for the development of driver monitoring and state-detection algorithms.
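The abstract describes a deep neural network that maps multimodal features (vehicle data, driver biometrics, subjective measurements) to three separate predictions: takeover intention, time, and quality. A minimal sketch of such a multi-output network is shown below; the layer sizes, label sets per head, and the shared-trunk design are illustrative assumptions, not the authors' published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TakeoverNet:
    """Sketch: a shared hidden layer feeding three task-specific heads.

    Hypothetical dimensions: n_features multimodal inputs, a binary
    intention head, and 3-way time / quality heads (e.g. low/medium/high).
    """
    def __init__(self, n_features=16, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.heads = {
            "intention": rng.normal(0.0, 0.1, (hidden, 2)),
            "time": rng.normal(0.0, 0.1, (hidden, 3)),
            "quality": rng.normal(0.0, 0.1, (hidden, 3)),
        }

    def predict(self, x):
        # One forward pass: shared representation, then per-task softmax.
        h = relu(x @ self.W1 + self.b1)
        return {name: softmax(h @ W) for name, W in self.heads.items()}

net = TakeoverNet()
probs = net.predict(rng.normal(size=(4, 16)))  # batch of 4 feature vectors
```

In practice each head would be trained with its own loss (e.g. cross-entropy) on labeled takeover episodes; the sketch only shows the untrained forward pass.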

The simulator records driver control actions and vehicle states at a sampling frequency of 20 Hz and sends the captured data through our API, developed using the iMotions software. The simulated driving environments and tasks were created with the PreScan Simulation Platform.
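Sampling at a fixed 20 Hz means pacing reads against a monotonic clock rather than sleeping a fixed interval, so timing drift does not accumulate. The sketch below illustrates that pattern; `read_vehicle_state` is a hypothetical stub standing in for the actual simulator/API query, which is not specified in the text.

```python
import time

SAMPLE_HZ = 20           # sampling frequency stated in the text
PERIOD = 1.0 / SAMPLE_HZ # 50 ms between samples

def read_vehicle_state():
    # Hypothetical stub: a real setup would query the driving
    # simulator (e.g. via the data-forwarding API) here.
    return {"speed": 0.0, "steering": 0.0}

def sample_stream(n_samples, clock=time.monotonic):
    """Collect n_samples at 20 Hz, tagging each with its scheduled tick.

    Scheduling against absolute tick times (next_tick += PERIOD) keeps
    the long-run rate at 20 Hz even if individual reads are slow.
    """
    samples = []
    next_tick = clock()
    for _ in range(n_samples):
        samples.append((next_tick, read_vehicle_state()))
        next_tick += PERIOD
        delay = next_tick - clock()
        if delay > 0:
            time.sleep(delay)
    return samples
```

A downstream consumer would then forward each timestamped sample to the logging or prediction pipeline.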

This publication uses eye tracking and GSR, which are fully integrated into iMotions Lab.
