A multimodal online learning environment improves the learning experience through different modalities such as visual, auditory, and kinesthetic interactions. Multimodal learning analytics (MMLA) with multiple biosensors provides a way to analyze these multiple interaction types simultaneously. Galvanic skin response/electrodermal activity (GSR/EDA), eye tracking, and facial expression analysis were used to measure learning interaction in a multimodal online learning environment. iMotions and R software were used to post-process and analyze the time-synchronized biosensor data. GSR/EDA, eye tracking, and facial expression captured real-time cognitive, emotional, and visual learning engagement for each interaction type. This study shows the tremendous potential of using MMLA with multiple biosensors to understand learning engagement in a multimodal online learning environment.
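As a hypothetical illustration of the post-processing step, the R sketch below aligns three exported biosensor streams on a common one-second grid before engagement analysis. The file names and column names (Timestamp, and the sensor columns implied by each export) are assumptions about the iMotions export format, not the study's actual pipeline.

```r
# Minimal sketch (not the authors' actual pipeline): time-synchronizing
# GSR/EDA, eye-tracking, and facial-expression exports for joint analysis.
# File names and the Timestamp column (in milliseconds) are hypothetical.
library(dplyr)

gsr  <- read.csv("gsr_export.csv")               # GSR/EDA stream
eye  <- read.csv("eye_tracking_export.csv")      # eye-tracking stream
face <- read.csv("facial_expression_export.csv") # facial-expression stream

# Resample a stream to whole seconds by averaging its numeric columns,
# so all three modalities can be compared within the same time window.
to_seconds <- function(df) {
  df %>%
    mutate(Second = floor(Timestamp / 1000)) %>%  # ms -> whole seconds
    select(-Timestamp) %>%
    group_by(Second) %>%
    summarise(across(where(is.numeric), mean), .groups = "drop")
}

# Join the resampled streams on the shared Second index; columns that
# share a name across exports are disambiguated with .x/.y suffixes.
synced <- to_seconds(gsr) %>%
  inner_join(to_seconds(eye),  by = "Second") %>%
  inner_join(to_seconds(face), by = "Second")
```

A one-second grid is only one design choice; a finer grid (or interpolation instead of averaging) may suit faster-changing signals such as gaze position.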