When it comes to decoding emotional responses, facial expression analysis is one fabulous tool. Why’s that?

With facial coding, you can look into the impact of basically any content, product, or service believed to prompt emotional arousal and facial responses – physical objects such as food samples or packages, videos, images, sounds, odors, tactile stimuli, etc.

Besides actual objects, facial expressions are also driven by mental images, memories, and thoughts.

Think of this: You don’t necessarily need to hop on a plane and fly to Brazil to be all smiles and feel cheery in Rio’s Copacabana. Just the thought of ambling along the sandy beach might put the exact same grin on your face and make your day (although we wouldn’t exactly mind dropping everything and stretching out in the sun).

Back to reality. How can facial expressions be collected?

The variety of application fields for facial expression analysis is incredibly wide, not least because facial expressions are surprisingly simple to collect.

Consumer neuroscience, neuromarketing, media testing, advertisement, website testing, psychological research, and many other research disciplines have lately been adopting facial expression analysis techniques to decode and shed light on the subconscious processes driving emotional behavior.

In principle, facial muscle activity can be recorded and analyzed in three different ways.

Let’s get to each of them now.

Facial Electromyography (fEMG)

Admittedly, it might sound daunting, but the theory behind it is downright straightforward.

How does facial EMG work?

fEMG uses electrodes attached to the skin surface to detect and amplify tiny electrical impulses generated by the activity of facial muscles around the eyebrows, cheekbones, and the mouth.  

More specifically, the most commonly used fEMG sites are in close proximity to two major muscle groups:

1. Right/left corrugator supercilii

Drawing the eyebrow downward toward the center of the face, the corrugator supercilii is a pyramidal muscle near the eyebrow, typically active when expressing negative emotions such as anger or distress.

2. Right/left zygomaticus (major)

The zygomaticus major extends from each cheekbone to the corners of the mouth; it pulls the mouth corners up when smiling and is therefore commonly associated with positive emotions.


fEMG is non-invasive and delivers precise results. Because the method is highly sensitive, it can continuously pick up even very subtle facial muscle activity, including in scenarios where respondents are instructed to suppress their emotional expression.

On the downside, fEMG requires electrodes, cables, and amplifiers. Needless to say, fEMG is intrusive, raising the respondents’ awareness of the measurement (and probably their level of uneasiness). Most importantly, fEMG analysis requires expert biosensor processing skills. If you’re new to facial expression analysis, this is where you might run into difficulties.
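
To give you a flavor of what that processing involves, here’s a minimal Python sketch of a typical fEMG pipeline: band-pass filter, rectify, smooth. The sampling rate, filter cutoffs, and the synthetic signal are illustrative assumptions; real pipelines add artifact rejection and normalization on top.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sampling rate in Hz

def emg_envelope(emg, fs=FS):
    """Crude fEMG amplitude envelope: band-pass, rectify, low-pass smooth."""
    # Band-pass 20-450 Hz: the range where most surface EMG power lives
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, emg)
    # Full-wave rectification turns the oscillating signal into pure amplitude
    rectified = np.abs(filtered)
    # Low-pass at 5 Hz yields a smooth envelope of muscle activation
    b, a = butter(4, 5, btype="lowpass", fs=fs)
    return filtfilt(b, a, rectified)

# Demo on synthetic data: 2 s of baseline noise with a simulated
# zygomaticus "smile burst" between 0.5 s and 0.9 s
t = np.arange(0, 2, 1 / FS)
emg = 0.05 * np.random.randn(t.size)
emg[500:900] += 0.5 * np.random.randn(400)
env = emg_envelope(emg)
print("Baseline envelope:", env[:400].mean(), "Burst envelope:", env[500:900].mean())
```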


The Facial Action Coding System (FACS)

Paul Ekman’s Facial Action Coding System (FACS) is a fully standardized classification system of facial expressions based on anatomical features. Trained human coders carefully inspect face videos and describe every occurrence of a facial expression as a combination of elementary components called Action Units.

How does FACS work?

Each Action Unit is linked to an individual face muscle or muscle group and identified by a specific number (AU1, AU2, etc.). In principle, all facial expressions can be broken down into their constituent Action Units.

You might conclude that FACS is able to “read” emotions. That is not quite true: FACS itself is a measurement system and does not interpret the meaning of the expressions. Emotional interpretations emerge only at the data processing stage, when coded Action Units are mapped to emotional states.
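
To make that mapping step concrete, here’s a toy Python sketch of how coded Action Units might be translated into prototypical emotion labels. The AU combinations follow commonly cited FACS-based prototypes (for instance, happiness as AU6 plus AU12); a real engine would use trained classifiers rather than an exact lookup table.

```python
# Toy lookup from Action Unit combinations to prototypical emotions,
# loosely following commonly cited FACS-based emotion prototypes.
EMOTION_PROTOTYPES = {
    frozenset({6, 12}): "happiness",       # AU6 cheek raiser + AU12 lip corner puller
    frozenset({1, 4, 15}): "sadness",      # inner brow raiser, brow lowerer, lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers, upper lid raiser, jaw drop
    frozenset({9, 15}): "disgust",         # nose wrinkler, lip corner depressor
}

def label_emotion(active_aus):
    """Return the first prototype fully contained in the active AUs, else 'neutral'."""
    active = set(active_aus)
    for prototype, emotion in EMOTION_PROTOTYPES.items():
        if prototype <= active:
            return emotion
    return "neutral"

print(label_emotion([6, 12]))        # happiness
print(label_emotion([1, 2, 5, 26]))  # surprise
print(label_emotion([4]))            # neutral (a brow lowerer alone is ambiguous)
```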


FACS is a non-intrusive, objective, and reliable method to collect facial expressions. Scores have a high face validity as they are based on visible changes in facial tissue.

One major disadvantage of FACS is that proper Action Unit scoring relies on the trained eye of experts, making the coding very time-intensive and expensive. To put some numbers on it: it is not uncommon for a well-trained FACS coder to take about 100 minutes to code 1 minute of video, depending on the density and complexity of facial actions. At that rate, a ten-minute clip adds up to well over two full working days of coding.

Automatic facial expression analysis

Automatic facial expression analysis is, as the name implies, computer-based: algorithms instantaneously detect faces, code facial expressions, and recognize emotional states. Sounds exciting? It definitely is.

How does automatic facial expression analysis work?

Automatic facial coding technologies use cameras embedded in laptops, tablets, and mobile phones, or standalone webcams mounted to computer screens, to capture videos of respondents as they are exposed to emotional content of varying intensity.
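
As a sketch of that capture step, the snippet below grabs webcam frames with the OpenCV library; the device index and frame count are arbitrary assumptions, and a facial coding engine would consume frames exactly like these.

```python
import cv2

# Open the default webcam (device index 0 is an assumption; external
# cameras may enumerate differently)
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open webcam")

frames = []
for _ in range(300):  # roughly 10 seconds at a typical 30 fps
    ok, frame = cap.read()  # each frame is a BGR image as a NumPy array
    if not ok:
        break
    frames.append(frame)

cap.release()
print(f"Captured {len(frames)} frames")
```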

In more detail, automatic facial expression analysis follows this 3-step procedure:

1. Face detection: The engine locates a face in a video frame or image and draws a face box around the detected face (a minimal code sketch follows this list).

2. Facial landmark detection and registration: The engine identifies facial landmarks such as the eyes and eye corners, brows, mouth corners, and the nose tip. A simplified face model is then adjusted in position, size, and scale to match the respondent’s actual face. The model includes only the key features needed to get the job done.

3. Facial expression and emotion classification: Once the simplified face model is in place, the position and orientation of all key features are fed into classification algorithms. By numerically comparing the actual appearance and configuration of the facial features with the normative databases provided by the facial expression engines, these algorithms translate the features into Action Unit codes, emotional states, and other affective metrics.
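
Here’s the promised minimal sketch of step 1, using OpenCV’s bundled Haar cascade face detector to draw a face box. Steps 2 and 3 would require dedicated landmark and classification models on top, so they appear only as comments; the input file name is a placeholder.

```python
import cv2

# Load the pretrained frontal-face Haar cascade that ships with opencv-python
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# "respondent_frame.jpg" is a placeholder for a single captured video frame
img = cv2.imread("respondent_frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 1: locate faces and draw a face box around each detection
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Steps 2 and 3 would crop this region, fit a landmark model to it,
    # and feed the resulting feature configuration to an emotion classifier.

cv2.imwrite("detected_frame.jpg", img)
```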

Automated facial coding is non-intrusive and delivers precise results. Since the classification is done independently for each emotion, Action Unit, and key feature, automatic coding is also more objective than manual coding, where coders (particularly novice coders) might interpret the activation of an Action Unit in concert with other Action Units and thereby significantly alter the results.

Compared to FACS or fEMG, automatic facial expression analysis doesn’t require specialized equipment, electrodes, cables, or amplifiers. This is exactly what makes automated facial expression analysis ideally suited for capturing face videos in a wide variety of naturalistic settings.

Curious to dive deeper into the visualization and analysis of facial classification results? Stay tuned for our all-new definitive guide to facial expression analysis – it’s everything you need to know to get the hang of emotion analytics.