Music Recommendation System for Human Attention Modulation by Facial Recognition on a Driving Task: A Proof of Concept

Roberto Avila-Vázquez

Sergio Alberto Navarro Tuch

Rogelio Bustamante-Bello

Ricardo A. Ramirez-Mendoza

Abstract: The role of music in the driving process has been discussed in the context of driver assistance as an element of safety and comfort. Throughout this document, we present the development of an audio recommender system for use by drivers, based on facial expression analysis. This recommendation system aims to increase driver attention through the selection of specific music pieces. For this pilot study, we begin with an introduction to audio recommender systems and a brief explanation of the operation of our facial expression analysis system. During the driving course, the subjects (seven participants between 19 and 25 years old) were stimulated with a chosen group of audio compositions, and their facial expressions were captured via a camera mounted on the car’s dashboard. Once the videos were captured and collected, we proceeded to analyse them using the FACET™ module of the biometric capture platform iMotions™. This software provides the expression analysis of the subjects. The analysed data were post-processed and modelled on a quadratic surface that was optimized based on the known cepstrum and tempo of the songs and the average evidence of emotion. The results showed very different optimal points for each subject, which indicates that different types of music are needed to optimize driving attention. This work is a first step towards a music recommendation system capable of modulating subject attention while driving.
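To illustrate the modelling step described above, the following is a minimal sketch (not the authors' code) of fitting a quadratic surface that maps song tempo and a summary cepstral feature to a subject's average emotion/attention evidence, and then locating the optimum of that surface. All feature and evidence values below are hypothetical placeholders.

```python
# Hypothetical sketch: per-subject quadratic surface over (tempo, cepstral value)
# and location of its optimum. Not the authors' implementation.
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-song features (tempo in BPM, a summary cepstral value)
# and a subject's average attention/emotion evidence from facial analysis.
tempo = np.array([70.0, 90.0, 110.0, 128.0, 140.0, 160.0])
cepstrum = np.array([0.12, 0.35, 0.20, 0.50, 0.41, 0.30])
evidence = np.array([0.40, 0.55, 0.62, 0.70, 0.58, 0.45])

# Design matrix for a full quadratic surface:
# z = a + b*t + c*s + d*t^2 + e*s^2 + f*t*s
X = np.column_stack([
    np.ones_like(tempo), tempo, cepstrum,
    tempo**2, cepstrum**2, tempo * cepstrum,
])
coef, *_ = np.linalg.lstsq(X, evidence, rcond=None)

def surface(x):
    """Evaluate the fitted quadratic surface at (tempo, cepstral value)."""
    t, s = x
    return coef @ np.array([1.0, t, s, t**2, s**2, t * s])

# Maximising evidence = minimising its negative, starting near the feature means.
res = minimize(lambda x: -surface(x), x0=[tempo.mean(), cepstrum.mean()])
print("optimal (tempo, cepstral value) for this subject:", res.x)
```

Because each subject yields a different fitted surface, the optimum differs per subject, which is consistent with the paper's finding that different music characteristics maximise attention for different drivers.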
