A multimodal online learning environment enriches the learning experience through different modalities such as visual, auditory, and kinesthetic interactions. Multimodal learning analytics (MMLA) with multiple biosensors offers a way to analyze these multiple interaction types simultaneously. In this study, galvanic skin response/electrodermal activity (GSR/EDA), eye tracking, and facial expression analysis were used to measure learning interactions in a multimodal online learning environment. iMotions and R software were used to post-process and analyze the time-synchronized biosensor data. GSR/EDA, eye tracking, and facial expression data revealed real-time cognitive, emotional, and visual learning engagement for each interaction type. The study demonstrates the considerable potential of MMLA with multiple biosensors for understanding learning engagement in multimodal online learning environments.
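Time-synchronization is the step that makes multi-sensor analysis possible: streams sampled at different rates (e.g., GSR/EDA at a fixed interval, eye-tracking fixations at irregular times) must be aligned onto a shared timeline before engagement measures can be compared. The sketch below illustrates one common alignment strategy, backward nearest-neighbor matching, in plain Python; the function name, timestamps, and sensor values are illustrative assumptions, not the iMotions export format or the study's actual pipeline (which used R).

```python
import bisect

def sync_to_reference(ref_times, sensor_times, sensor_values):
    """For each reference timestamp, pick the most recent sensor sample
    at or before it (backward nearest-neighbor alignment).

    Assumes sensor_times is sorted ascending; returns None for reference
    timestamps that precede the first sensor sample."""
    out = []
    for t in ref_times:
        i = bisect.bisect_right(sensor_times, t) - 1
        out.append(sensor_values[i] if i >= 0 else None)
    return out

# Hypothetical GSR/EDA stream sampled every 250 ms (values in microsiemens)
gsr_times = [0, 250, 500, 750, 1000]
gsr_vals = [2.1, 2.3, 2.2, 2.8, 3.0]

# Hypothetical eye-tracking fixation onsets at irregular times (ms)
fixation_times = [120, 480, 900]

# Attach the latest EDA reading to each fixation onset
print(sync_to_reference(fixation_times, gsr_times, gsr_vals))
# → [2.1, 2.3, 2.8]
```

In practice, analysis platforms resample or interpolate rather than simply carrying the last sample forward, but the core idea, mapping every event in one stream to a position in another stream's timeline, is the same.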