Executive summary
iMotions positions its software ecosystem around one core idea: human behavior is best understood when multiple signals are captured together, time-aligned, and analyzed in one workflow. Across its desktop and browser-based products, iMotions supports combinations of modalities including eye tracking, facial expression analysis, EDA/GSR, EEG, EMG, ECG, respiration, voice analysis, fNIRS, motion capture, GPS, surveys, video, annotation, and additional integrations through LSL support.
The most complete multimodal environment is iMotions Lab, which iMotions describes as its all-in-one multimodal research platform for study design, synchronized data collection, visualization, and analysis. iMotions Online brings multimodality into browser-based remote research, centered on webcam-based measures. iMotions Education packages browser-based multimodal methods for teaching. Media Analytics is narrower and more turnkey, focused on media and video ad testing rather than open-ended lab-style multisensor experimentation.
In other words, multimodality in iMotions does not mean “many disconnected sensors.” It means synchronized, interoperable measurement across software products designed for different research settings: lab, remote, classroom, field, VR, and scaled media testing.
What multimodality means in iMotions
In iMotions, multimodality is the synchronized capture and analysis of behavioral, physiological, emotional, and contextual data streams inside one research workflow. The platform is built to unify study design, stimulus presentation, data collection, visualization, and analysis in one suite, so researchers can examine visual attention, emotional expression, physiological arousal, cognitive processes, and physical movement together rather than in isolation.
That matters because no single modality explains behavior on its own. Eye tracking can show what a person looked at. EDA/GSR can show whether the stimulus produced autonomic arousal. EEG can indicate aspects of cognitive and emotional processing. Facial expression analysis can estimate visible emotional expression and engagement. ECG and respiration can add cardiac and breathing-related context. Together, these create a richer, more explainable picture of what happened and when.
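To make "time-aligned" concrete, here is a minimal Python sketch (not iMotions code; all names and data are hypothetical) of event-locked windowing: when streams share one clock, each modality's samples can be re-expressed relative to a stimulus onset so that a gaze shift and an EDA rise land on the same time axis.

```python
import numpy as np

def epoch(timestamps, values, onset, pre=1.0, post=5.0):
    """Return one signal's samples in a window around a stimulus onset,
    with time re-expressed relative to that onset."""
    mask = (timestamps >= onset - pre) & (timestamps <= onset + post)
    return timestamps[mask] - onset, values[mask]

# Two toy streams on one shared clock (the key multimodal requirement):
t_eda = np.arange(0, 30, 1 / 32)            # EDA sampled at 32 Hz
eda = np.random.default_rng(0).normal(2.0, 0.1, t_eda.size)
t_gaze = np.arange(0, 30, 1 / 60)           # gaze sampled at 60 Hz
gaze_x = np.random.default_rng(1).uniform(0, 1920, t_gaze.size)

onset = 12.4                                # stimulus appears at t = 12.4 s
eda_t, eda_win = epoch(t_eda, eda, onset)
gaze_t, gaze_win = epoch(t_gaze, gaze_x, onset)
# Both windows are now relative to the same event, so responses in
# different modalities can be compared moment by moment.
```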
The central multimodal product: iMotions Lab
iMotions Lab is the core multimodal software in the iMotions portfolio. iMotions states that Lab is modular software that unifies study design, stimulus presentation, data collection, visualization, and analysis, and that researchers use it to synchronize sensors to answer questions about attention, emotion, arousal, cognition, and movement. It also describes Lab as hardware-agnostic and compatible with multiple leading manufacturers.
The Lab modules overview makes the architecture clear: iMotions Core is the base, and additional modules extend the system with specific modalities. Core already includes surveys, video capture and annotation, and data visualization, while add-ons expand into eye tracking, physiology, brain measures, movement, and remote data collection.
Which modalities iMotions supports
Eye tracking
iMotions supports several eye tracking types inside Lab; it explicitly describes Lab as supporting screen-based, glasses, webcam, and VR eye tracking, which makes eye tracking one of the most flexible multimodal pillars in the ecosystem.
Facial expression analysis
The Facial Expression Analysis module integrates Affectiva’s AFFDEX and can analyze live webcam data or imported video. iMotions says it identifies seven core emotions and provides valence and engagement metrics, which makes it a common multimodal companion to eye tracking, survey data, and physiological signals.
Voice analysis
The Voice Analysis module is part of iMotions’ broader multimodal stack and is also included in its remote data collection positioning. iMotions describes voice analysis as a non-invasive way to derive behavioral and psychological insight from pitch, tone, and rhythm.
EDA / GSR
The EDA/GSR module measures skin conductance as an index of autonomic arousal and stress. iMotions highlights automatic peak detection, raw and processed exports, and synchronized use with other sensors. This makes EDA/GSR especially valuable in multimodal setups where researchers want to map moments of arousal onto gaze, emotion, or stimulus events.
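iMotions does not publish its detector internals, but as a rough sketch of what automatic skin conductance response (SCR) peak detection involves, scipy.signal.find_peaks on a synthetic, de-trended trace illustrates the idea (the response shape and thresholds below are illustrative; real pipelines first decompose tonic and phasic components):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 32                                    # EDA is often sampled around 32 Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(42)

def scr(onset):                            # a crude phasic response shape
    return 0.4 * np.exp(-(t - onset) / 4.0) * (t >= onset)

tonic = 2.0 + 0.005 * t                    # slow tonic drift
trace = tonic + scr(15) + scr(40) + rng.normal(0, 0.01, t.size)

# Detect phasic peaks on the de-trended trace; prominence and minimum
# peak-spacing thresholds vary by lab and device.
peaks, _ = find_peaks(trace - tonic, prominence=0.05, distance=fs)
print([round(t[p], 1) for p in peaks])     # approx. [15.0, 40.0]
```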
EEG
The EEG module is used in iMotions to measure electrical brain activity related to cognitive and emotional processing. iMotions describes it as hardware-agnostic, with support for multiple leading EEG manufacturers, and notes features such as raw signal collection, visualization, export, and integrated quality assurance.
EMG
The EMG module supports electromyography from multiple muscle groups and can also be used for facial EMG in VR contexts. iMotions describes EMG as suitable for measuring body and face responses and explicitly notes pairing with other biosensors, which is a direct multimodal use case.
ECG
The ECG module measures cardiac signals and, according to iMotions, provides information about physiological and emotional states because heart rhythm changes with the environment and autonomic activity. ECG often functions in multimodal research as a complementary measure of arousal, stress, recovery, or engagement.
Respiration
iMotions supports both contact-based respiration and webcam respiration. The standard respiration module includes automated filtering, breath detection, respiration rate, cycle count, and cycle duration, while the webcam module extends breathing analysis into remote, contact-free research via a standard webcam.
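The cycle metrics themselves are simple once breaths are detected. A hedged sketch, assuming a peak detector has already produced breath timestamps (the numbers below are made up):

```python
import numpy as np

# Hypothetical breath-peak timestamps (s), e.g. from a peak detector run
# on a filtered chest-belt or webcam-derived respiration trace.
breath_peaks = np.array([2.1, 6.0, 9.8, 14.1, 18.0, 21.9])

cycle_durations = np.diff(breath_peaks)    # seconds per breath cycle
cycle_count = cycle_durations.size
rate_bpm = 60.0 / cycle_durations.mean()   # breaths per minute

print(cycle_count, cycle_durations.round(1), round(rate_bpm, 1))
# 5 [3.9 3.8 4.3 3.9 3.9] 15.2
```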
fNIRS
The fNIRS module adds hemodynamic brain measurement by tracking oxy- and deoxyhemoglobin changes. iMotions describes fNIRS as portable, non-invasive, and movement-tolerant, which makes it particularly useful in multimodal studies that need more naturalistic movement than traditional brain imaging allows.
Motion capture
The new Motion Capture module extends multimodality into kinematics. iMotions describes it as suitless, markerless motion capture from video, with body-part tracking, movement metrics, and batch processing. iMotions also states in its launch coverage that Motion Capture combines seamlessly with EMG, eye tracking, facial expression analysis, and other modalities.
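What "movement metrics" can mean downstream of markerless tracking is easy to illustrate. A minimal sketch, assuming the tracker outputs per-frame pixel coordinates for a body part (the trajectory here is synthetic, not iMotions output):

```python
import numpy as np

fps = 30                                   # video frame rate
rng = np.random.default_rng(7)
# Toy (frames, 2) trajectory for one tracked point, e.g. the right wrist.
wrist = np.cumsum(rng.normal(0, 2.0, size=(90, 2)), axis=0)

# Per-frame displacement -> instantaneous speed, one simple movement metric.
step = np.linalg.norm(np.diff(wrist, axis=0), axis=1)   # pixels per frame
speed = step * fps                                      # pixels per second

print(round(speed.mean(), 1), round(speed.max(), 1))
```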
GPS
The GPS module adds real-world location, speed, and movement trajectories as time-series signals. iMotions says GPS can be synchronized with biometric signals like eye tracking, GSR, EEG, or motion capture, making it especially relevant for mobility, urban behavior, sports science, and human factors field research.
Surveys, video, annotation, and analysis
Multimodality in iMotions is not limited to biosensors. Core platform functions include surveys, video capture, annotation, data visualization, and analysis. The platform also supports integrated R notebooks for transparent and customizable analysis workflows.
Multimodality in remote research: iMotions Online and Remote Data Collection
iMotions Online is the browser-based branch of the iMotions ecosystem. iMotions describes it as a web-based platform for remote biometric research with study-building support for images, videos, and surveys. Its core built-in modalities are webcam-based eye tracking and facial expression analysis.
For broader remote multimodality, iMotions also describes a Remote Data Collection capability within the Lab ecosystem. Its module overview says this unlocks remote biometric collection with webcam eye tracking, facial expression analysis, voice analysis, and webcam respiration. The company’s more recent remote data collection materials describe internet-based research using webcam eye tracking, AI-powered facial expression analysis, webcam respiration, AI-powered voice analysis, and online survey features.
That means remote multimodality in iMotions is narrower than full Lab multimodality, but it is still meaningful. A remote researcher can combine attention, visible emotional expression, voice-derived signals, respiration, and survey responses without dedicated lab hardware.
Multimodality in teaching: iMotions Education
iMotions Education is a browser-based teaching tool rather than a full open-ended lab platform. iMotions describes it as combining webcam eye tracking, facial coding, and surveys so students can design studies, collect data via shareable links, analyze results, and create visuals, all on standard laptops. The broader education positioning also emphasizes scalability and use in classroom teaching, internships, and student projects.
So Education is multimodal, but in a lighter, teaching-oriented sense. It is best understood as a structured, accessible subset of iMotions’ multimodal philosophy rather than a replacement for the full multimodal range of Lab.
Multimodality in media testing: Media Analytics
According to iMotions’ recent products and services brochure, Media Analytics is a turnkey solution that uses facial coding and calibrationless eye tracking to measure attention and engagement in media and video ad testing. That places it inside the multimodal family, but in a narrower way than Lab: it is preconfigured around attention and engagement in specific media-testing use cases rather than open, sensor-agnostic research design.
What makes iMotions multimodality useful in practice
The core value of iMotions multimodality is synchronization. iMotions repeatedly describes the platform as one that captures and synchronizes different sensors, then lets users visually explore the data in relation to the study through timeline-based interfaces and integrated analysis capabilities.
That changes the kind of questions researchers can answer. Instead of only asking whether a participant looked at something, researchers can ask whether they looked at it, reacted emotionally, showed physiological arousal, changed breathing, altered heart rhythm, produced movement changes, or later reported interest in it. The advantage is not just “more data.” The advantage is time-locked interpretation across modalities. This is the core logic of multimodal research as iMotions presents it.
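As a toy illustration of what time-locked interpretation requires mechanically (this is not the iMotions pipeline), pandas.merge_asof can attach to each sample of one stream the nearest-in-time sample of another, provided both carry timestamps from the same clock:

```python
import pandas as pd

# Two toy streams on one shared clock, at different sampling rates.
gaze = pd.DataFrame({"t": [0.000, 0.016, 0.033, 0.050],
                     "x": [400, 410, 900, 905]})
eda = pd.DataFrame({"t": [0.000, 0.031, 0.062],
                    "scl": [2.01, 2.03, 2.08]})

# For each gaze sample, attach the nearest EDA sample within 20 ms.
merged = pd.merge_asof(gaze, eda, on="t", direction="nearest", tolerance=0.02)
print(merged)
```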
Multimodal analysis and workflow features
iMotions’ multimodal promise is not only about collecting signals. It is also about making them analyzable together. The platform emphasizes built-in visualization, transparent and customizable R notebooks, automated processing for several modalities, and specialized modules like Automated AOI.
Automated AOI is especially relevant in multimodal eye tracking studies with dynamic stimuli. iMotions says it automatically detects and retargets predefined areas of interest across frames, reducing manual retargeting. That means a researcher can pair dynamic gaze analysis with facial, EDA, EEG, or other time-synced data more efficiently.
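iMotions does not document the detector's internals, but the downstream logic such a module enables is straightforward to sketch: test time-stamped gaze points against per-frame AOI boxes and accumulate dwell time. Everything below is hypothetical illustration, not the iMotions implementation:

```python
def in_box(x, y, box):
    """True if a gaze point falls inside an (x0, y0, x1, y1) rectangle."""
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

# The AOI moves across three video frames; gaze samples are (frame, x, y).
aoi_per_frame = {0: (100, 100, 300, 250),
                 1: (120, 105, 320, 255),
                 2: (140, 110, 340, 260)}
gaze = [(0, 150, 180), (1, 400, 200), (2, 200, 150)]

frame_duration = 1 / 30                    # 30 fps video
dwell = sum(frame_duration for frame, x, y in gaze
            if in_box(x, y, aoi_per_frame[frame]))
print(round(dwell * 1000, 1), "ms in AOI") # 66.7 ms in AOI
```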
iMotions also supports LSL, which allows users to stream in a wider range of non-native hardware as long as that hardware supports Lab Streaming Layer. This makes multimodality extensible beyond only the devices iMotions natively integrates.
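For example, a custom sensor can be published over LSL with pylsl, the LSL Python binding, and then picked up by any LSL-aware consumer. The stream name and sample values below are placeholders; only the pylsl calls are real:

```python
import time
from pylsl import StreamInfo, StreamOutlet, local_clock

# Describe the stream: name, content type, channels, rate, format, source id.
info = StreamInfo("MyCustomSensor", "Force", 1, 50, "float32", "mysensor-001")
outlet = StreamOutlet(info)

for i in range(100):                       # push 2 s of toy samples at 50 Hz
    outlet.push_sample([float(i)], local_clock())
    time.sleep(1 / 50)
```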
Common misunderstandings about multimodality in iMotions
One common misunderstanding is that multimodality in iMotions only means “eye tracking plus one more sensor.” Current iMotions materials show a much broader stack, including EEG, ECG, EMG, EDA/GSR, respiration, fNIRS, motion capture, GPS, facial expression analysis, voice analysis, surveys, and more.
Another misunderstanding is that all iMotions products are equally multimodal. They are not. Lab is clearly the broadest multimodal environment; Online and Education are more constrained browser-based subsets; Media Analytics is narrower and application-specific.
A third misunderstanding is that multimodality is only relevant for lab research. iMotions’ current materials show multimodal expansion into remote research through webcam eye tracking, facial expression analysis, voice analysis, and webcam respiration, and into field research through modules like GPS and motion capture.
FAQ
What is multimodality in iMotions?
Multimodality in iMotions means synchronizing multiple behavioral, physiological, emotional, and contextual data streams inside one research workflow, typically in iMotions Lab.
Which modalities does iMotions support?
Current iMotions materials list eye tracking, facial expression analysis, EDA/GSR, EEG, EMG, ECG, respiration, webcam respiration, voice analysis, fNIRS, motion capture, GPS, surveys, video, annotation, and more, with additional extensibility via LSL support.
Which iMotions software is the most multimodal?
iMotions Lab is the most multimodal product in the portfolio. iMotions describes it as the all-in-one multimodal research platform and the software that synchronizes a wide range of biosensors and supports their combined analysis.
Is iMotions Online multimodal?
Yes, but in a more limited browser-based way. iMotions Online centers on webcam eye tracking and facial expression analysis, while the broader remote data collection ecosystem also includes voice analysis, webcam respiration, and surveys.
Is iMotions Education multimodal?
Yes. iMotions Education combines browser-based webcam eye tracking, facial coding, and surveys for teaching and student research.
Is Media Analytics part of iMotions multimodality?
Yes, but it is a specialized turnkey product. iMotions describes it as combining facial coding and calibrationless eye tracking for media and video ad testing.
Can iMotions combine movement and physiology?
Yes. Current materials show Motion Capture and GPS being used alongside other biometric signals, extending multimodality into field behavior and kinematics.
Final takeaway
iMotions is a multimodal human behavior research ecosystem built around synchronized data collection and analysis across many modalities, with iMotions Lab as the flagship platform, iMotions Online as the remote browser-based platform, iMotions Education as the classroom-focused version, and Media Analytics as a more turnkey attention-and-engagement product for media testing.