Emotions are the essence of what makes us human. They shape our daily routines and influence our attention, perception, and memory. Decision-making and social interaction are also heavily driven by emotions.
One of the strongest indicators for emotions is our face – as we laugh or cry we’re putting our emotions on display, allowing others to infer our current emotional state.
So what is facial expression analysis, and what can it tell us?
Humans are quite adept when it comes to reading faces. By observing and evaluating subtle changes in key facial features such as the eyes, brows, lids, nostrils, and lips, we read each other's faces on a moment-to-moment basis, trying to glimpse into one another's minds.
Recent progress in computer vision and computational algorithms has made it possible to mimic the face reading skills of humans based on the tracking of subtle changes in facial features on a moment-by-moment basis.
Facial expression analysis delivers unfiltered, unbiased emotional responses
This innovative, video-based analysis technology has revolutionized quantified marketing and usability research: it captures unfiltered, unbiased emotional responses toward any type of stimulus, making it an ideal way to evaluate the likeability, effectiveness, and viral potential of any content.
Computer-based facial coding delivers valuable information on the quality of the emotional response, generally referred to as its valence, which ranges from negative to positive.
On the far negative end of the valence scale you might find emotions such as sadness and anger ("oh no, my computer crashed!"). On the far positive end you will certainly find emotions such as joy and happiness ("yay, weekend!").
However, where on the valence scale would you place the emotion surprise? Does it have a positive or a negative valence? In fact, it can have both: it can reflect a highly pleasant sentiment ("oh, a present!") or an unpleasant one ("that tastes weird!").
In this case, you might want to consider a combined analysis of surprise and other emotion channels in order to determine whether the surprise expression was accompanied by increases in happiness/joy or instead any of the negative emotions such as anger, disgust, sadness or contempt.
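The combined analysis described above can be sketched in a few lines of code. The channel names, the 0–1 probability format, and the threshold below are illustrative assumptions, not the output format of any particular facial coding tool:

```python
# Hypothetical sketch: disambiguating the valence of a surprise expression
# by checking which other emotion channels co-occur with it in the same frame.
# Channel names and 0..1 probabilities are assumptions for illustration.

POSITIVE = {"joy"}
NEGATIVE = {"anger", "disgust", "sadness", "contempt"}

def surprise_valence(channels, surprise_threshold=0.5):
    """Return 'positive', 'negative', 'ambiguous', or None for one frame's
    emotion-channel probabilities (dict mapping channel name to 0..1)."""
    if channels.get("surprise", 0.0) < surprise_threshold:
        return None  # no clear surprise expression in this frame
    pos = max(channels.get(c, 0.0) for c in POSITIVE)
    neg = max(channels.get(c, 0.0) for c in NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "ambiguous"

# "oh, a present!" -- surprise accompanied by joy
print(surprise_valence({"surprise": 0.8, "joy": 0.6, "disgust": 0.1}))   # positive
# "that tastes weird!" -- surprise accompanied by disgust
print(surprise_valence({"surprise": 0.7, "joy": 0.05, "disgust": 0.4}))  # negative
```

Applying this per frame over a recording would let you tell pleasant surprises apart from unpleasant ones rather than counting them all as one undifferentiated channel.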
Limitations of facial expression analysis
One of the core limitations of facial expression analysis is its inability to assess someone’s emotional arousal, that is, the intensity of an emotion. You might ask why this is the case – doesn’t a bright smile indicate a more intense, more arousing feeling of happiness?
While a brighter smile may indeed signal a more intense feeling when assessing a single person, matters become more complex when comparing emotional expressions across different stimuli, respondents, or groups.
Complement facial expression analysis with eye tracking, GSR or EEG
If you aim to portray the emotional engagement of a larger audience or consumer group in response to emotionally loaded stimuli (e.g., ads, shows, pictures, videos) in its full complexity, it is essential to assess both the valence of the emotional expression and the associated arousal.
That’s exactly where the tremendous value of multimodal biometric research comes in.
Three of the most widely used biomarkers of emotional arousal are eye tracking, galvanic skin response (GSR), and electroencephalography (EEG). Combining them with facial expression analysis paints the entire picture, giving you insights into both the valence (quality) of an emotional response and the arousal (intensity) it triggers in your respondents.
Collecting synchronized data from multiple modalities adds even more to the picture, as each sensor captures an aspect of the emotional response that none of the others can. You might even uncover a previously unknown effect in cognitive-emotional processing.
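As a minimal sketch of what "synchronized data from multiple modalities" means in practice, the snippet below pairs each facial-expression valence sample with the nearest-in-time GSR arousal sample. The sample rates, value ranges, and tuple layout are illustrative assumptions:

```python
# Hypothetical sketch: aligning facial-expression valence samples with GSR
# arousal samples on a shared timeline via nearest-timestamp matching.
# Timestamps, value ranges, and units are illustrative assumptions.
from bisect import bisect_left

def nearest(timestamps, t):
    """Index of the timestamp closest to t (timestamps sorted ascending)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def fuse(valence_samples, gsr_samples):
    """Pair each (t, valence) sample with the nearest (t, arousal) sample,
    yielding (t, valence, arousal) triples on the valence timeline."""
    gsr_t = [t for t, _ in gsr_samples]
    fused = []
    for t, v in valence_samples:
        _, a = gsr_samples[nearest(gsr_t, t)]
        fused.append((t, v, a))
    return fused

valence = [(0.0, 0.2), (0.5, 0.6), (1.0, 0.7)]   # valence in [-1, 1]
gsr     = [(0.1, 1.1), (0.6, 1.8), (0.9, 2.5)]   # skin conductance, e.g. microsiemens
print(fuse(valence, gsr))  # [(0.0, 0.2, 1.1), (0.5, 0.6, 1.8), (1.0, 0.7, 2.5)]
```

Each fused triple places one moment of the recording on both axes at once: how pleasant the response was (valence) and how strongly it was felt (arousal).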
Reach out to our team at iMotions to learn how you can enhance video-based facial expression analysis with other physiological modalities.