A multimodal online learning environment enriches the learning experience through different modalities such as visual, auditory, and kinesthetic interactions. Multimodal learning analytics (MMLA) with multiple biosensors offers a way to analyze these multiple interaction types simultaneously. Galvanic skin response/electrodermal activity (GSR/EDA), eye tracking, and facial expression analysis were used to measure learning interaction in a multimodal online learning environment. iMotions and R software were used to post-process and analyze the time-synchronized biosensor data. GSR/EDA, eye tracking, and facial expression captured real-time cognitive, emotional, and visual learning engagement for each interaction type. This study demonstrates the tremendous potential of MMLA with multiple biosensors for understanding learning engagement in a multimodal online learning environment.
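A core step in this kind of MMLA pipeline is aligning biosensor streams that arrive at different sampling rates onto a common timeline before analysis. The sketch below illustrates one common approach, nearest-preceding-timestamp alignment, using Python and pandas; the column names, sampling rates, and simulated values are illustrative assumptions, not the actual iMotions export format or the study's R workflow.

```python
# Hedged sketch: aligning two biosensor streams (GSR/EDA and eye tracking)
# sampled at different rates onto a shared timeline with pandas merge_asof.
# All column names and rates below are hypothetical, for illustration only.
import pandas as pd

# Simulated GSR/EDA stream at roughly 32 Hz (timestamps in milliseconds)
gsr = pd.DataFrame({
    "timestamp_ms": range(0, 1000, 31),
    "eda_microsiemens": [2.0 + 0.01 * i for i in range(33)],
})

# Simulated eye-tracking stream at roughly 60 Hz
eye = pd.DataFrame({
    "timestamp_ms": range(0, 1000, 17),
    "gaze_x": [0.5] * 59,
    "gaze_y": [0.5] * 59,
})

# Pair each eye-tracking sample with the nearest preceding GSR sample,
# tolerating at most one GSR inter-sample interval (31 ms) of drift.
merged = pd.merge_asof(eye, gsr, on="timestamp_ms",
                       direction="backward", tolerance=31)
print(merged.head())
```

Once the streams share a timeline, per-interaction-type engagement measures (e.g., mean EDA per stimulus segment) can be computed with ordinary group-by aggregation.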
