Facial Expression Analysis in iMotions: A Comprehensive Technical and Research Guide

TL;DR

Facial Expression Analysis (FEA) in iMotions is the automated, computer-vision-based measurement of facial muscle movements to infer emotional states. The iMotions FEA module integrates Affectiva’s AFFDEX engine—widely recognized as a leading system for automated facial coding—to detect seven core emotions, up to 20 Action Units (AUs), valence, engagement, head pose, and blink metrics from live webcam feeds or pre-recorded video.

The module operates within the broader iMotions multimodal research platform, enabling time-synchronized analysis alongside Eye Tracking, EEG, GSR / EDA, ECG, EMG, Voice Analysis, and integrated survey data. Facial Expression Analysis is available in both iMotions Lab (desktop, controlled environments) and iMotions Online (browser-based, remote research).

The methodology is grounded in the Facial Action Coding System (FACS), developed by Paul Ekman and Wallace V. Friesen. All processing runs locally within the iMotions software environment, without requiring an internet connection.

1. What Is Facial Expression Analysis in iMotions?

Facial Expression Analysis (FEA) is defined as the automated, objective measurement and classification of facial muscle movements to quantify emotional expressions and affective states. In the context of iMotions software, Facial Expression Analysis refers specifically to the use of Affectiva’s AFFDEX artificial intelligence engine, integrated directly into the iMotions platform, to perform real-time or post-hoc analysis of participant facial behavior during research studies.

Unlike manual Facial Action Coding System (FACS) coding, which requires trained human coders to label individual facial muscle movements frame by frame, automated facial expression analysis in iMotions processes video input continuously and outputs time-stamped, quantified metrics for each detected facial movement and emotion. This approach removes the subjectivity and time cost of manual coding while maintaining alignment with the same scientific framework (FACS) that manual coding uses.

The iMotions Facial Expression Analysis module is classified as a biometric research module. The module operates alongside other sensor modules, including EEG, GSR / EDA, Eye Tracking, and Voice Analysis, within the iMotions platform architecture. All data streams, including Facial Expression Analysis output, are time-synchronized to a shared timeline.

Time synchronization is defined as the alignment of multiple data streams to a common temporal reference. Time synchronization enables multimodal research designs in which facial expression data can be directly correlated, second by second, with gaze behavior, physiological arousal, or neural activity.

iMotions is currently the exclusive provider of Affectiva’s AFFDEX in-lab SDK for research applications. The AFFDEX engine has been cited in more than 7,000 academic publications, establishing the system as one of the most validated automated facial coding systems in the behavioral science literature.

2. Theoretical Foundation: The Facial Action Coding System (FACS)

The Facial Action Coding System (FACS) is defined as a taxonomic framework for describing all visually distinguishable facial movements in terms of discrete anatomical units called Action Units (AUs). FACS was developed by psychologist Dr. Paul Ekman and Wallace Friesen in the 1970s and remains the dominant objective system for coding facial expressions in scientific research.

Each Action Unit corresponds to the contraction of one or more specific facial muscles. AU 4, for example, represents brow lowering (corrugator supercilii muscle activity), while AU 12 represents lip corner pulling (zygomaticus major activity — the primary indicator of smiling). FACS does not directly classify emotions; rather, it encodes the physical muscle movements from which emotion inferences are derived.

The connection between specific AU combinations and discrete emotional categories derives from Ekman and Friesen’s Emotional Facial Action Coding System (EMFACS), which identified AU patterns associated with seven universally recognized basic emotions: joy, anger, fear, surprise, sadness, contempt, and disgust. The AFFDEX engine in iMotions applies machine learning models trained on large-scale naturalistic datasets to detect these AU patterns automatically and map them to emotional classifications.

This theoretical grounding is important for research validity: iMotions FEA outputs are not produced by an opaque “black box” classifier. The emotion metrics produced by AFFDEX are directly traceable to specific AU activations, which are in turn traceable to specific facial muscle contractions, which FACS has validated as correlates of emotional states. This traceability allows researchers to inspect and report the mechanistic basis of their emotion data.

3. The AFFDEX Engine: Core Technology

AFFDEX is defined as Affectiva’s proprietary artificial intelligence toolkit for real-time facial expression analysis, built on deep learning models trained on large-scale, naturalistic facial expression datasets collected from participants representing diverse demographic groups.

AFFDEX 2.0, the current version available through iMotions, introduced significant advances over its predecessor:

  • Expanded emotion detection: AFFDEX 2.0 detects the seven basic emotions plus two additional affective states — sentimentality and confusion — extending the system’s utility for advertising research, gaming, and clinical assessment.
  • Improved accuracy under challenging conditions: The deep learning models in AFFDEX 2.0 are trained to handle diverse lighting conditions, partial facial occlusion, varied demographics, and non-frontal head poses with greater accuracy than prior versions.
  • 3D head pose estimation: AFFDEX 2.0 estimates head pose in three dimensions (pitch, yaw, roll), enabling attention and engagement scoring beyond simple emotion classification.
  • Multi-face tracking: AFFDEX 2.0 can simultaneously track and analyze multiple faces within a single camera feed, enabling group research designs.
  • Local processing: All AFFDEX computations run locally on Windows and Linux hardware without requiring internet connectivity or cloud data transfer, an important consideration for participant data privacy and GDPR compliance.

The AFFDEX engine has been trained on thousands of participants from multiple demographic groups and has demonstrated performance advantages over competitor systems in naturalistic, real-world conditions. Independent peer-reviewed validation studies (see Lewinski et al., 2014; Beringer et al., 2019) have evaluated AFFDEX’s classification accuracy and confirmed that its output for clearly expressed emotions (particularly joy and anger) is comparable to facial EMG measurements.

4. How Facial Expression Analysis Works in iMotions: Step-by-Step Pipeline

The FEA data pipeline in iMotions proceeds through the following stages, whether operating in real-time (live capture) or post-processing (video import) mode.

Step 1 — Video Input Acquisition

A standard webcam, dedicated research camera, or previously recorded video file provides the facial video input. In iMotions Lab, the webcam is configured directly within the platform. In iMotions Online, the participant’s browser-accessible webcam is used. Minimum recommended resolution is 720p at 30 frames per second for reliable AU detection.

Step 2 — Face Detection and Tracking

The AFFDEX engine applies a face detection algorithm to each video frame, locating and bounding the face region. Once detected, facial landmarks (key anatomical points on brows, eyes, nose, and mouth) are tracked across frames to maintain continuity even through minor head movements.

Step 3 — Action Unit Detection

For each frame, AFFDEX analyzes the displacement and deformation of facial landmarks relative to baseline (neutral) facial geometry. The engine outputs intensity scores for up to 20 discrete Action Units. Each AU intensity score is a continuous value indicating the degree of activation of the corresponding facial muscle group.

Step 4 — Emotion Classification and Metric Derivation

Machine learning classifiers trained on FACS-labeled datasets interpret AU activation patterns to produce probability scores for each of the detected emotional states. Composite metrics — including valence and engagement — are computed from these scores. Blink detection and head pose estimation are derived in parallel from the landmark tracking data.

Step 5 — Timestamp Synchronization

All FEA outputs are timestamped within iMotions’ unified timeline. This timeline is shared with all other active sensor modules, so each frame of facial data is aligned with concurrent gaze, EEG, EDA/GSR, or voice data from the same millisecond-level time reference. Stimulus event markers (onset and offset of images, video clips, or interactive tasks) are also embedded in the timeline, enabling stimulus-locked analyses.
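
The shared-timeline idea can be sketched in a few lines of pandas. The column names below are illustrative, not the iMotions export schema: `merge_asof` pairs each facial-data frame with the most recent sample from a second sensor stream (here, a hypothetical GSR channel) on the common clock.

```python
import pandas as pd

# Two illustrative streams on a shared millisecond timeline
# (column names are hypothetical, not the iMotions export schema).
fea = pd.DataFrame({
    "timestamp_ms": [0, 33, 66, 100, 133],
    "valence": [0.1, 0.3, 0.5, 0.4, 0.2],
})
gsr = pd.DataFrame({
    "timestamp_ms": [0, 50, 100, 150],
    "conductance_uS": [2.1, 2.4, 2.9, 2.6],
})

# Pair each FEA frame with the GSR sample at or immediately before it.
merged = pd.merge_asof(fea, gsr, on="timestamp_ms", direction="backward")
print(merged)
```

The same pattern extends to any number of co-registered streams, which is what makes stimulus-locked multimodal analysis straightforward once everything shares one time reference.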

Step 6 — Visualization and Export

Processed FEA data are displayed in iMotions’ signal viewer as overlaid waveforms on the study timeline. Researchers can view AU intensities, emotion probabilities, valence, and engagement tracks alongside stimulus events. Data can be exported to CSV or JSON formats for external analysis in R, Python, SPSS, or MATLAB. iMotions also provides built-in R notebook workflows for common FEA analysis tasks.

5. Key Features and Capabilities

Real-Time Emotion Detection

iMotions FEA operates in real-time during live data collection sessions, providing researchers with immediate visual feedback on participant emotional state. Real-time data enables study designs where stimulus presentation or task parameters can be adapted based on detected emotional response.

Video Import and Post-Processing

Researchers who have already collected facial video recordings — from field studies, clinical sessions, or third-party recordings — can import those videos directly into iMotions for AFFDEX processing. This capability decouples video capture from analysis, supporting retrospective studies and naturalistic field research where real-time processing is not practical.

Multi-Face Tracking

AFFDEX 2.0 supports simultaneous tracking of multiple faces within a single camera frame. In iMotions, this enables research designs involving dyadic interaction, group responses to shared stimuli, or naturalistic social environments. Each tracked face receives independent AU, emotion, and metric outputs.

Quality Scoring

AFFDEX generates a per-frame data quality score that reflects factors including face visibility, lighting adequacy, and occlusion. iMotions uses this score to flag or exclude low-quality frames during analysis, allowing researchers to set minimum quality thresholds (commonly ≥75% mean quality score per participant) before including data in analysis.
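
A typical quality-filtering workflow, sketched with hypothetical column names (the actual export schema may differ): drop individual frames below the quality threshold, then exclude any participant whose mean quality falls below the ≥75% criterion mentioned above.

```python
import pandas as pd

# Hypothetical per-frame export: participant ID and quality score (0-100).
frames = pd.DataFrame({
    "participant": ["P1"] * 4 + ["P2"] * 4,
    "quality":     [90, 95, 80, 85,   60, 70, 65, 55],
    "joy":         [10, 40, 35, 20,   50, 45, 30, 25],
})

# 1) Drop individual low-quality frames.
clean = frames[frames["quality"] >= 75]

# 2) Exclude participants whose mean frame quality falls below 75%.
mean_q = frames.groupby("participant")["quality"].mean()
keep = mean_q[mean_q >= 75].index
analysable = clean[clean["participant"].isin(keep)]
print(analysable["participant"].unique())  # P2 (mean 62.5) is excluded
```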

Stimulus-Synchronized Analysis

Because FEA data shares the iMotions timeline, stimulus markers are automatically co-registered with FEA output. Researchers can extract FEA metrics for predefined stimulus intervals (e.g., average valence during a 30-second ad clip) without manual timestamp alignment.
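
Interval extraction of this kind reduces to masking the frame timeline between a stimulus onset and offset marker. A minimal sketch, with hypothetical column names and timestamps:

```python
import pandas as pd

# Hypothetical frame-level valence on the shared timeline (ms).
fea = pd.DataFrame({
    "timestamp_ms": [0, 10_000, 20_000, 30_000, 40_000],
    "valence":      [0.0, 0.2, 0.6, 0.4, -0.1],
})

# Stimulus markers recorded on the same timeline: a 30-second ad clip.
ad_onset_ms, ad_offset_ms = 10_000, 40_000

# Mean valence over the clip (onset inclusive, offset exclusive).
in_clip = fea["timestamp_ms"].between(ad_onset_ms, ad_offset_ms,
                                      inclusive="left")
mean_valence = fea.loc[in_clip, "valence"].mean()
print(mean_valence)
```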

Built-In Visualization

iMotions provides FEA signal overlays on stimulus video, aggregate heatmaps for group-level data, emotional circumplex plots (valence × arousal), and exportable visualization reports. In iMotions Online, these visualizations are available as dynamic or static signal overlays on stimulus materials.

No Required Internet Connection

All AFFDEX processing runs locally on the researcher’s hardware. Participant facial data is not transmitted to external servers during standard iMotions Lab operation, which is relevant to IRB compliance and data protection regulations.

6. Metrics and Outputs

The iMotions FEA module produces the following categories of quantified output:

Action Units (AUs)

Action Units are defined as scores representing the intensity of activation of specific facial muscle groups, as described in the FACS framework. iMotions provides intensity scores for up to 20 AUs. Key AUs include AU 1 (inner brow raise), AU 2 (outer brow raise), AU 4 (brow lowering), AU 6 (cheek raise), AU 12 (lip corner pull — smiling), AU 17 (chin raise), and AU 43 (eyes closing). Each AU is scored as a continuous value per frame.

Basic Emotion Scores

Seven emotion probability scores are output per frame: joy, anger, fear, surprise, sadness, contempt, and disgust. AFFDEX 2.0 adds sentimentality and confusion as additional classified states. Each score represents the model’s confidence (scaled 0–100 or 0–1 depending on output format) that the detected AU pattern matches a given emotional expression.
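
A common first analysis step is to reduce the per-frame confidence vector to a single dominant-emotion label, ignoring frames where no emotion is expressed strongly enough. The scores, threshold, and function below are illustrative analysis choices, not iMotions defaults:

```python
# Hypothetical per-frame emotion confidences on a 0-100 scale.
frame_scores = {
    "joy": 62.0, "anger": 5.0, "fear": 1.0, "surprise": 20.0,
    "sadness": 2.0, "contempt": 3.0, "disgust": 1.0,
}

def dominant_emotion(scores, threshold=50.0):
    """Return the highest-confidence emotion, or None if no score
    clears the (illustrative) confidence threshold."""
    label = max(scores, key=scores.get)
    return label if scores[label] >= threshold else None

print(dominant_emotion(frame_scores))  # joy
```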

Valence

Valence is defined as the dimension of emotional experience representing the positive or negative quality of an emotional state. In iMotions FEA, valence is a derived composite metric, typically reported on a scale from negative to positive, computed from the balance of positive (joy) and negative (anger, sadness, disgust, fear, contempt) emotion scores. Valence does not directly measure the intensity of an emotion; it measures its hedonic direction.
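
To make the idea of a hedonic-direction composite concrete, here is a toy calculation. This is emphatically not the AFFDEX valence formula (which is proprietary); it only illustrates how positive evidence can be weighed against negative evidence to yield a signed direction score:

```python
def illustrative_valence(scores):
    """Toy hedonic-direction score: positive evidence minus the strongest
    negative evidence. NOT the AFFDEX formula; for illustration only."""
    negatives = ("anger", "sadness", "disgust", "fear", "contempt")
    strongest_negative = max(scores.get(e, 0.0) for e in negatives)
    return scores.get("joy", 0.0) - strongest_negative

# A strongly positive frame: joy dominates the negative channels.
print(illustrative_valence({"joy": 80.0, "anger": 10.0, "sadness": 5.0}))
```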

Engagement

Engagement is defined in the iMotions FEA context as a composite metric derived from facial expression activity, representing the degree to which a participant’s face is actively expressing any emotional response — positive or negative. High engagement indicates a face that is actively reacting; low engagement corresponds to a neutral or unexpressive face. Engagement is used in advertising and media research as an indicator of attention and processing depth.

Arousal

Arousal is defined as the dimension of emotional experience representing the intensity or activation level of a response, independent of its valence. While FEA-derived arousal has known limitations (facial expression captures only visible expression, not internal physiological arousal), iMotions supports the combination of FEA-derived valence with EDA/GSR-derived arousal to produce a two-dimensional affective state estimate. This two-dimensional (valence × arousal) model aligns with the circumplex model of affect established by Russell (1980).
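
Combining the two dimensions yields the familiar circumplex quadrants. The sketch below classifies a (valence, arousal) pair; the midpoint thresholds and quadrant labels are illustrative analysis choices, not iMotions outputs:

```python
def circumplex_quadrant(valence, arousal, v_mid=0.0, a_mid=0.5):
    """Map a (valence, arousal) pair to a circumplex quadrant.
    valence: FEA-derived, negative..positive; arousal: e.g. normalized EDA.
    Midpoints and labels are illustrative, not iMotions defaults."""
    if valence >= v_mid:
        return "excited/elated" if arousal >= a_mid else "calm/content"
    return "tense/distressed" if arousal >= a_mid else "bored/depressed"

# Positive facial valence plus high electrodermal arousal:
print(circumplex_quadrant(valence=0.4, arousal=0.8))  # excited/elated
```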

Head Pose Metrics

AFFDEX 2.0 outputs 3D head pose estimation (pitch, yaw, roll) per frame, enabling attention scoring based on head orientation toward or away from a stimulus.

Blink rate and blink frequency are detected from eyelid landmark movements and are available as FEA output signals. Blink metrics can serve as supplementary indicators of cognitive load and attention.
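
Head pose and blink outputs lend themselves to simple derived measures. The angular limits below are illustrative analysis choices (not iMotions or AFFDEX defaults) for treating the head as oriented toward the screen:

```python
def attending(yaw_deg, pitch_deg, yaw_limit=20.0, pitch_limit=15.0):
    """Treat the participant as oriented toward the stimulus when head
    rotation stays within illustrative angular limits."""
    return abs(yaw_deg) <= yaw_limit and abs(pitch_deg) <= pitch_limit

def blink_rate_per_min(blink_count, duration_s):
    """Blinks per minute over a recording interval."""
    return 60.0 * blink_count / duration_s

print(attending(5.0, -3.0))          # near-frontal pose -> True
print(blink_rate_per_min(18, 60.0))  # 18 blinks in one minute -> 18.0
```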

7. Supported Setups and Environments

Laboratory Environment (iMotions Lab)

iMotions Lab is the desktop-based platform designed for controlled laboratory research. In a lab setup, FEA is conducted using a dedicated research-grade webcam or integrated laptop camera positioned at eye level with consistent lighting. The participant is seated at a fixed workstation. Stimulus presentation (images, video, web content, tasks) is managed within iMotions Lab. Lab-based FEA enables the highest data quality through controlled lighting, fixed camera distance, and minimal head movement. FEA in iMotions Lab is fully synchronizable with all other biosensor modalities (EEG, GSR, EMG, ECG, eye tracking, voice).

Online and Remote Research (iMotions Online, Remote Data Collection, iMotions Education & Media Analytics)

iMotions Online is the browser-based platform for remote FEA without in-person participant access. Participants join a study through a web link, grant webcam access in their browser, and complete stimulus tasks while FEA is performed via AFFDEX in the browser environment.

iMotions Online allows visualization of facial expression data as signal overlays on stimuli, with options to export visualizations for reporting. The trade-off of online FEA compared to lab FEA is reduced control over participant lighting, camera quality, and head position, which increases within-sample noise and may lower data quality scores. Appropriate participant guidance and quality filtering are recommended.

Field Research and Naturalistic Settings

iMotions Lab supports field-based FEA using portable cameras and laptops when research questions require naturalistic environments (retail, automotive, clinical). AFFDEX’s robustness to varying lighting and pose conditions (improved in version 2.0) increases its practical utility outside controlled labs. Field FEA is commonly combined with mobile eye tracking hardware supported by iMotions Lab.

Simulator Research

iMotions Lab is used in driving, flying, and sailing simulator research, as well as other simulation environments, where FEA captures operator emotional state (fatigue, stress, distraction) while eye tracking and physiological sensors simultaneously capture attention and arousal.

8. Integration with Other Modalities

The defining feature of facial expression analysis in iMotions — relative to standalone facial coding tools — is its deep integration with other biometric and behavioral modalities. All data streams in iMotions share a unified, millisecond-level timestamp reference, so FEA outputs are inherently co-registered with every other sensor’s data.

Eye Tracking + FEA

Eye tracking data records where a participant is looking and for how long. FEA data records what emotional response is occurring at each moment. The combination enables researchers to determine not only whether a participant looked at a stimulus element, but what emotional response occurred during fixation. This pairing is standard practice in advertising research, where researchers need to distinguish attention (fixation) from positive or negative reaction (valence).

EDA/GSR + FEA

Electrodermal activity (EDA), also called galvanic skin response (GSR), measures skin conductance as an index of sympathetic nervous system arousal. Because FEA valence captures the direction of an emotional response and EDA captures its intensity, the combination of FEA valence and EDA arousal allows construction of a two-dimensional affective state representation consistent with the circumplex model of affect. This pairing is frequently used in media testing and UX research.

EEG + FEA

Electroencephalography (EEG) measures electrical brain activity and provides indices of cognitive and affective processing (e.g., frontal alpha asymmetry as a correlate of approach/withdrawal motivation). Combining EEG with FEA allows researchers to distinguish between internally experienced emotional states (reflected in neural activity) and externally expressed emotional states (captured in facial behavior). The two measures are not always correlated, and their divergence can itself be informative.

EMG + FEA

Facial electromyography (EMG) measures electrical activity in specific facial muscles (commonly the zygomaticus major and corrugator supercilii) using surface electrodes. Peer-reviewed research has confirmed that AFFDEX output for joy and anger detection correlates significantly with simultaneous EMG measurements of the same muscles (Frontiers in Psychology, 2020). EMG provides a more sensitive measure of low-intensity expressions that may be below the threshold of camera-based detection, while FEA provides a non-invasive, electrode-free alternative for expressions of sufficient intensity.
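
A convergent-validity check of the kind reported in these studies can be sketched as a simple correlation between time-matched FEA and EMG series. The data here are fabricated for illustration only:

```python
import numpy as np

# Hypothetical time-matched series: AFFDEX joy probability and
# zygomaticus major EMG amplitude (arbitrary units). Illustrative data.
joy = np.array([0.1, 0.2, 0.6, 0.9, 0.7, 0.3])
emg = np.array([1.0, 1.2, 2.5, 3.8, 3.0, 1.5])

# Pearson correlation as a basic convergent-validity index.
r = np.corrcoef(joy, emg)[0, 1]
print(r)
```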

Voice Analysis + FEA

iMotions supports voice analysis as a separate module that derives behavioral and psychological indices from speech prosody (pitch, tone, rhythm, energy). Combining voice analysis with FEA allows researchers to study multimodal emotional expression — for example, detecting concordance or discordance between facial and vocal emotional signals, which is relevant in communication research, clinical psychology, and human-computer interaction design.

Survey and Behavioral Data + FEA

iMotions allows time-stamped survey responses and behavioral interaction logs (mouse clicks, keyboard inputs) to be embedded in the same timeline as FEA data. This enables direct comparison of explicit self-reported emotional ratings with implicitly measured facial emotion responses.

9. Use Cases by Industry

Market Research and Advertising

Advertising and content testing represents one of the largest application domains for facial coding in iMotions. Researchers use FEA to measure second-by-second emotional engagement and valence during ad exposure, identifying which moments in a spot drive positive response and which generate disengagement or negative affect. FEA enables testing of multiple ad versions with objective emotional comparison metrics, replacing or augmenting traditional dial testing.

User Experience (UX) Research

UX researchers use FEA to capture emotional reactions during product interactions, website navigation, app usability testing, and prototype evaluation. FEA detects frustration (AU 4 + AU 17), confusion, and satisfaction signals that participants may not articulate verbally or rate accurately on post-hoc surveys, particularly for low-intensity or ambiguous emotional moments.
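
The AU 4 + AU 17 frustration pattern mentioned above can be operationalized as a per-frame co-activation flag. The intensity threshold is an illustrative analysis choice, not a validated cutoff:

```python
def flag_frustration(au4_intensity, au17_intensity, threshold=20.0):
    """Flag a frame as a candidate frustration moment when AU 4 (brow
    lowering) and AU 17 (chin raise) co-activate above an illustrative
    intensity threshold (0-100 scale)."""
    return au4_intensity >= threshold and au17_intensity >= threshold

# (AU4, AU17) intensities for three hypothetical frames:
frames = [(5.0, 2.0), (35.0, 28.0), (40.0, 10.0)]
flags = [flag_frustration(a4, a17) for a4, a17 in frames]
print(flags)  # only the middle frame shows co-activation
```

In practice a researcher would also require the co-activation to persist for a minimum duration before labeling it an event, to avoid flagging single-frame noise.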

Academic Psychology and Emotion Research

Academic researchers use iMotions FEA to measure emotional responses to validated stimulus sets (IAPS, GAPED, video clips) in studies of emotion regulation, social cognition, clinical populations, and affective computing. The AFFDEX engine’s grounding in FACS makes it interpretable within established theoretical frameworks. It has been cited in thousands of peer-reviewed publications in psychology, neuroscience, and human-computer interaction.

Healthcare and Clinical Research

FEA is used in clinical research to assess emotional expressivity in populations with conditions affecting facial behavior, including depression, autism spectrum disorder, Parkinson’s disease, and PTSD. FEA provides an objective, low-burden measurement tool that does not require participants to self-report or undergo invasive procedures. iMotions FEA has been used to support patient diagnostics, therapy monitoring, and psychological assessment research.

Education Research

Education researchers use FEA to quantify student emotional engagement during learning — identifying moments of frustration, confusion, or interest in response to curriculum materials. FEA paired with eye tracking in iMotions enables simultaneous measurement of where students look and how they emotionally respond, informing instructional design and content optimization.

Automotive and Driver Monitoring

Automotive researchers use FEA to detect driver drowsiness, distraction, and emotional stress states. AFFDEX 2.0’s capabilities in head pose estimation and multi-face tracking are particularly relevant in simulator and in-vehicle research setups. iMotions FEA is used alongside ECG, respiration, and eye-tracking data in driver safety research.

Human-Computer Interaction (HCI)

HCI researchers use FEA to evaluate emotional responses to interfaces, conversational agents, robots, and AI systems. FEA provides moment-by-moment affective feedback that is not available through traditional usability metrics such as task completion time or error rates.

10. Advantages Over Traditional Methods

Compared to Self-Report Surveys

Self-report surveys measure retrospective, consciously accessed, and verbally expressible evaluations. FEA measures expressed emotional responses as they occur in real time, including responses that are unconscious, pre-verbal, or below the threshold of introspective access. FEA is immune to post-hoc rationalization, social desirability bias, and recall error — all documented limitations of self-report methodology. Unlike surveys, FEA provides continuous time-series data rather than a single aggregate rating.

Compared to Manual FACS Coding

Manual FACS coding requires trained coders with extensive certification, is highly time-intensive (a single minute of video can require several hours of coding), and introduces inter-rater reliability concerns. Automated facial expression analysis in iMotions provides equivalent or superior coding speed (real-time), consistent output without coder fatigue or bias, and lower per-participant cost at scale. Manual coding retains advantages in sensitivity to rare or subtle AU configurations not captured by the AFFDEX training set.

Compared to Dial Testing

Dial testing asks participants to continuously rotate a physical dial to indicate moment-by-moment preference or engagement, which introduces a dual-task burden that may itself affect emotional response. FEA requires no conscious task from the participant and captures expression without response-format artifacts.

Compared to Facial EMG

Facial electromyography requires electrode placement on participants’ faces, which is invasive, uncomfortable, and may itself alter emotional expression. EMG is inherently limited to the specific muscles where electrodes are placed. FEA via iMotions requires only a camera, introduces no participant burden beyond sitting in front of a screen, and simultaneously tracks the entire visible face surface. For expressions of sufficient intensity, validated research confirms FEA output is comparable to EMG output for joy and anger detection.

11. Limitations and Considerations

Expression vs. Experience Distinction

Facial expression analysis measures facial muscle movements — the behavioral output of emotional experience — not internal emotional states directly. Facial expressions and internal subjective emotion are related but distinct constructs. A participant may experience a strong emotional response without showing visible facial expression (masked expression), or may produce facial movements for social reasons that do not reflect internal state. FEA should be interpreted as a measure of expressed emotion, not as a direct readout of felt emotion.

Lighting and Camera Quality Dependency

AFFDEX requires adequate and consistent lighting to produce reliable AU and emotion classifications. Variable lighting (backlighting, strong shadows, rapid illumination changes) degrades detection accuracy. In online and field research, lighting cannot be controlled by the researcher, increasing data noise. Quality score filtering (excluding frames or participants below a threshold quality score) is a standard mitigation.

Head Pose Constraints

While AFFDEX 2.0 has improved performance at non-frontal head poses, extreme lateral rotation, chin-down, or chin-up positions reduce landmark detection reliability and lower AU detection accuracy. Participants who frequently look away from a screen stimulus will have lower face tracking quality during those intervals.

Cultural and Demographic Generalizability

The AFFDEX training dataset spans multiple demographic groups; however, the mapping of AU combinations to emotion categories is based primarily on Ekman’s theory of basic emotion universality, which has been critiqued in cross-cultural psychology literature. Researchers working with populations whose norms of emotional expression differ from those represented in the training data should interpret results with appropriate caution and consider supplementary self-report validation.

Arousal Estimation Limitations

FEA-derived engagement is not equivalent to physiological arousal measured by EDA/GSR or EEG. The facial expression channel captures visible expression but not the autonomic nervous system activity that constitutes physiological arousal. Researchers requiring arousal measurement should use FEA in combination with EDA/GSR rather than relying on FEA engagement scores alone as arousal proxies.

Low-Intensity and Spontaneous Expression Sensitivity

Camera-based FEA systems, including AFFDEX, perform more accurately on posed or clearly expressed emotions than on subtle, low-intensity spontaneous expressions. In naturalistic research where emotional responses are mild or briefly expressed, FEA may under-detect genuine affective responses. Facial EMG retains sensitivity advantages at very low expression intensities.

12. When to Use Facial Expression Analysis vs. Alternatives

| Research Scenario | Recommended Modality |
| --- | --- |
| Objective, real-time emotion tracking with no participant burden | FEA (iMotions AFFDEX) |
| Low-intensity or subtle expression detection | Facial EMG (or FEA + EMG combined) |
| Measurement of autonomic arousal intensity | EDA/GSR |
| Internal emotional state independent of expression | EEG (frontal alpha asymmetry) |
| Explicit, deliberate preference or opinion ratings | Self-report survey |
| Moment-to-moment engagement with visual stimuli | FEA + Eye Tracking |
| Remote, large-sample, scalable emotion research | iMotions Online (webcam FEA) |
| High-precision lab study with maximum multimodal data | iMotions Lab (FEA + EEG + GSR + ET) |
| Cultural or population contexts with limited FEA validation | Self-report + FEA (triangulated) |

Facial expression analysis in iMotions is most appropriate when: (a) the research question requires objective, non-intrusive measurement of emotional expression over time; (b) self-report measures are insufficient due to retrospective bias, conscious access limitations, or response format constraints; (c) time-locked stimulus-response data are required at millisecond precision; and (d) multimodal integration with other behavioral or physiological signals is desirable.

13. Data Privacy and Ethical Considerations

Facial expression data is biometric data and is subject to data protection regulation in many jurisdictions, including the European Union’s General Data Protection Regulation (GDPR) and various state-level regulations in the United States. Research use of iMotions FEA should be covered by institutional IRB or ethics board approval, with explicit informed consent from participants covering the collection, storage, and analysis of facial video data.

iMotions Lab’s local processing architecture means that participant facial video and derived FEA metrics are not transmitted to Affectiva’s or iMotions’ servers during standard operation. Data remains within the researcher’s controlled environment. Researchers should establish retention and anonymization policies for video recordings consistent with their institutional data governance requirements.

14. FAQ: Facial Expression Analysis in iMotions

What is facial expression analysis software?

Facial expression analysis software is defined as a computer vision and machine learning application that automatically detects, tracks, and classifies facial muscle movements from video input to derive quantified emotion and expression metrics. iMotions is a leading example of such software in academic and commercial research contexts, integrating Affectiva’s AFFDEX engine for this purpose.

What is Affectiva AFFDEX and how does it relate to iMotions?

Affectiva AFFDEX is the proprietary AI engine developed by Affectiva (now part of iMotions) that performs automated facial expression analysis based on the Facial Action Coding System. iMotions is the exclusive distributor of the AFFDEX in-lab SDK for research applications and integrates AFFDEX directly into the iMotions Lab and iMotions Online platforms as the FEA module.

Can iMotions perform facial expression analysis without specialized hardware?

Yes. iMotions FEA requires only a standard webcam (minimum 720p at 30fps recommended). No specialized biosensor hardware is required for FEA alone. The module runs locally on Windows computers meeting iMotions’ standard system requirements. Internet connectivity is not required during data collection.

What emotions does iMotions detect using facial expression analysis?

iMotions FEA using AFFDEX detects seven basic emotions: joy, anger, fear, surprise, sadness, contempt, and disgust. AFFDEX 2.0 additionally detects sentimentality and confusion. Composite metrics including valence, engagement, and 3D head pose are also produced.

What are Action Units in the context of iMotions facial coding?

Action Units (AUs) are defined as discrete codes in the Facial Action Coding System (FACS) representing the activation of individual facial muscle groups. iMotions FEA outputs intensity scores for up to 20 AUs per video frame. AU scores provide granular, mechanistically interpretable data that is more fundamental than high-level emotion classifications and can be used for custom analysis of specific facial behaviors.

Can iMotions perform facial expression analysis on pre-recorded video?

Yes. iMotions Lab supports video import for post-processing FEA. Researchers can import facial video recordings collected outside of iMotions (including from field studies, clinical sessions, or prior studies) and run AFFDEX analysis on the imported footage. Output timing is referenced to the video timeline.

How does webcam facial expression analysis in iMotions Online differ from lab-based analysis?

iMotions Online performs FEA via the participant’s own webcam through a browser interface, enabling remote research without participant travel or lab access. The primary differences from lab-based FEA are reduced control over lighting, camera quality, and participant positioning, which can increase data noise. Appropriate quality filtering and participant instructions are recommended to mitigate these differences.

Is iMotions facial expression analysis validated in peer-reviewed research?

Yes. AFFDEX and iMotions FEA have been the subject of numerous peer-reviewed validation studies. Research published in journals including Behavior Research Methods (Lewinski et al., 2014; Stöckli et al., 2018) and Frontiers in Psychology (Beringer et al., 2020) has evaluated AFFDEX classification accuracy against standardized stimulus sets and compared FEA output to facial EMG measurements, confirming that FEA output for clearly expressed emotions is comparable to EMG-based measurement. The AFFDEX engine has been cited in more than 7,000 academic publications.
