Towards More Robust Automatic Facial Expression Recognition in Smart Environments

Arne Bernin

Larissa Müller

Sobin Ghose

Kai von Luck

Christos Grecos

Qi Wang

Florian Vogt

Abstract: In this paper, we provide insights towards achieving more robust automatic facial expression recognition in smart environments, based on our benchmark of three labeled facial expression databases. These databases were selected to cover desktop, 3D, and smart environment application scenarios. This work is intended to provide a neutral comparison and guidelines for developers and researchers who want to integrate facial emotion recognition technologies into their applications and to understand their limitations as well as adaptation and enhancement strategies. We also introduce and compare three different metrics for finding the primary expression in a time window of a displayed emotion. In addition, we outline facial emotion recognition limitations and enhancements for smart environments and non-frontal setups. With this comparison and these enhancements, we hope to build a bridge from affective computing research and solution providers to application developers who wish to enhance new applications with emotion-based user modeling.
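The abstract does not detail the three window metrics themselves. As a rough illustration only, the sketch below shows three plausible ways of reducing per-frame expression scores to a single primary expression for a time window (mean score, per-frame majority vote, and peak score); the names, expression set, and aggregation rules are assumptions for illustration, not the paper's actual metrics.

```python
# Illustrative sketch only: the aggregation rules below (mean, mode, peak)
# are assumptions chosen to show the general idea of picking a "primary"
# expression for a time window of per-frame classifier scores.
from collections import Counter
from typing import Dict, List

EXPRESSIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

def primary_by_mean(frames: List[Dict[str, float]]) -> str:
    """Expression with the highest average score across the window."""
    return max(EXPRESSIONS, key=lambda e: sum(f[e] for f in frames) / len(frames))

def primary_by_mode(frames: List[Dict[str, float]]) -> str:
    """Expression that wins the most individual frames (majority vote)."""
    winners = [max(f, key=f.get) for f in frames]
    return Counter(winners).most_common(1)[0][0]

def primary_by_peak(frames: List[Dict[str, float]]) -> str:
    """Expression reaching the single highest score anywhere in the window."""
    return max(EXPRESSIONS, key=lambda e: max(f[e] for f in frames))

# Example: two frames of per-expression scores from a hypothetical classifier.
window = [
    {"anger": 0.1, "disgust": 0.0, "fear": 0.1, "joy": 0.6,
     "sadness": 0.0, "surprise": 0.1, "neutral": 0.1},
    {"anger": 0.1, "disgust": 0.0, "fear": 0.0, "joy": 0.7,
     "sadness": 0.0, "surprise": 0.1, "neutral": 0.1},
]
print(primary_by_mean(window), primary_by_mode(window), primary_by_peak(window))
```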
