Multimodal online learning environments improve the learning experience through modalities such as visual, auditory, and kinesthetic interaction. Multimodal learning analytics (MMLA) with multiple biosensors offers a way to analyze these multiple interaction types simultaneously. Galvanic skin response/electrodermal activity (GSR/EDA), eye tracking, and facial expression analysis were used to measure learning interaction in a multimodal online learning environment. iMotions and R software were used to post-process and analyze the time-synchronized biosensor data. GSR/EDA, eye tracking, and facial expression captured real-time cognitive, emotional, and visual learning engagement for each interaction type. This study showed the tremendous potential of using MMLA with multiple biosensors to understand learning engagement in multimodal online learning environments.
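The time-synchronization step mentioned above can be illustrated with a minimal sketch: biosensor streams are typically recorded at different sampling rates, so analysis requires aligning them on a shared timeline. The sketch below is a hypothetical illustration in Python with pandas (the column names, sampling rates, and values are assumed, not taken from the study, which used iMotions and R):

```python
import pandas as pd

# Hypothetical GSR/EDA stream sampled at 10 Hz (one reading every 100 ms)
gsr = pd.DataFrame({
    "t_ms": range(0, 1000, 100),
    "eda_microsiemens": [0.41, 0.42, 0.44, 0.43, 0.45,
                         0.47, 0.46, 0.48, 0.50, 0.49],
})

# Hypothetical eye-tracking stream sampled at 4 Hz (every 250 ms)
eye = pd.DataFrame({
    "t_ms": range(0, 1000, 250),
    "fixation_x": [512, 530, 498, 505],
})

# Align each eye-tracking sample with the nearest preceding GSR sample,
# producing one time-synchronized table for joint analysis
synced = pd.merge_asof(eye, gsr, on="t_ms", direction="backward")
print(synced)
```

The merged table pairs every eye-tracking timestamp with the most recent GSR/EDA value, which is one common way to place heterogeneous sensor streams on a single analysis timeline.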