Investigating the Mitigation of Stress in Autonomous and Non-autonomous Vehicles Using LLM Feedback

Alva Markelius

Yue Lou

Martyna Galazka

Sofia Lundgren

Raimondas Zemblys

Henrik Lind

Robert Lowe

As many as 1.3 million people worldwide die each year as a result of road traffic accidents (WHO). One means of mitigating these risks is the use of Driver Monitoring Systems (DMS) to evaluate driver state. Such systems can monitor distraction, drowsiness, stress, affective state, and general cognitive impairment, as well as behaviours that indicate potential for accidents. The integration of Large Language Models (LLMs) into vehicles has become an emerging area of innovation. These applications have the potential to enhance situational awareness, provide real-time feedback, and assist in decision-making in both manually driven (MD) and autonomous vehicles (AV). The aim of this study is to evaluate the effects of different types of feedback given by an LLM (GPT-4) on subjective and objective (biobehavioural) measures of stress and other cognitive states, using a driving rig and simulator (CARLA). We adopted an exploratory, empirical approach to assess safety-critical scenarios in which manual and autonomous drive vehicles can use LLMs. The most significant findings were that: i) LLM feedback is rated as more relevant when scenarios have low visibility or the safety-critical event is proximal; ii) short (versus long) vocalised feedback is associated with less reported stress in the MD condition; iii) short feedback is also associated with less stress in MD as measured by the Stress Index (SI), with SI specifically indicating less stress than long feedback during proximal safety-critical events; iv) long LLM feedback is associated with less stress than short feedback in the AV condition (both self-reported and by SI), and with less stress in low-visibility AV scenarios (by SI). Additionally, eye tracking measures indicated less random exploration of scenes, and therefore more focused visual attention, when feedback was short and driving was manual (MD). We discuss potential avenues for future research.
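The abstract does not describe how the LLM feedback was generated. As a rough illustration of how short versus long vocalised feedback could be produced for a simulated scenario, the sketch below uses the OpenAI Python client with GPT-4. The scenario fields, prompt wording, function name, and token limits are hypothetical and not taken from the study.

```python
# Hypothetical sketch: generating short vs. long LLM feedback for a
# simulated driving scenario. Scenario fields and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_feedback(scenario: dict, style: str) -> str:
    """Ask GPT-4 for vocalised driver feedback in a given length style."""
    length_hint = (
        "one short sentence" if style == "short"
        else "three to four explanatory sentences"
    )
    prompt = (
        f"You are an in-vehicle assistant. Visibility: {scenario['visibility']}. "
        f"Hazard: {scenario['hazard']} at {scenario['distance_m']} m ahead. "
        f"Give calm spoken feedback to the driver in {length_hint}."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=60 if style == "short" else 200,
    )
    return response.choices[0].message.content

# Example: a proximal, low-visibility safety-critical event
scenario = {"visibility": "fog", "hazard": "stalled vehicle", "distance_m": 40}
print(generate_feedback(scenario, "short"))
```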
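The abstract also does not define the Stress Index. A common ECG-derived formulation is Baevsky's Stress Index, SI = AMo / (2 · Mo · MxDMn), computed from the distribution of RR intervals, where Mo is the modal RR interval, AMo is the percentage of intervals falling in the modal bin, and MxDMn is the RR variation range. The sketch below is a minimal illustration under the assumption that the study's SI follows this formulation; the paper's exact measure may differ.

```python
# Hypothetical sketch: Baevsky's Stress Index from RR intervals (in seconds).
# Assumes the study's SI follows this common formulation; the paper may differ.
import numpy as np

def baevsky_stress_index(rr_s: np.ndarray, bin_width_s: float = 0.05) -> float:
    """SI = AMo / (2 * Mo * MxDMn), using a 50 ms RR-interval histogram."""
    edges = np.arange(rr_s.min(), rr_s.max() + bin_width_s, bin_width_s)
    counts, edges = np.histogram(rr_s, bins=edges)
    modal = int(np.argmax(counts))
    mo = (edges[modal] + edges[modal + 1]) / 2   # mode of RR distribution (s)
    amo = 100.0 * counts[modal] / len(rr_s)      # amplitude of the mode (%)
    mxdmn = rr_s.max() - rr_s.min()              # RR variation range (s)
    return amo / (2.0 * mo * mxdmn)

# Example with synthetic RR intervals centred around 0.8 s
rr = np.random.normal(0.80, 0.05, size=300)
print(f"SI = {baevsky_stress_index(rr):.1f}")
```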

This publication uses ECG, Eye Tracking, and Screen-Based Eye Tracking, which are fully integrated into iMotions Lab.
