Emotions are context-specific and unfold dynamically during tasks such as programming. However, most emotion recognition systems are built from generic data and overlook these contextual nuances. This dissertation bridges affective computing and computing education by developing a context-specific, multi-modal dataset that integrates physiological, behavioral, and self-report data from programming students. Building on this dataset, I explore hybrid and transformer-based fusion techniques for emotion recognition and emotion-cause analysis. The goal is to inform the design of affect-aware learning systems that adapt to students’ needs, improving engagement and persistence in computing education.
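To make the fusion idea concrete, here is a minimal sketch of attention-based multi-modal fusion over three modality streams. All names, feature dimensions, and emotion classes below are hypothetical illustrations, not details from the dissertation; a real system would learn these projections from labeled data rather than draw them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature vectors: physiological (e.g., heart rate,
# skin conductance), behavioral (e.g., keystroke/IDE logs), and self-report
# (e.g., survey scores). Dimensions are arbitrary for illustration.
physio = rng.normal(size=8)
behav = rng.normal(size=5)
report = rng.normal(size=3)

d = 4  # shared embedding dimension


def project(x, d, rng):
    """Linearly project a raw feature vector into the shared embedding space."""
    W = rng.normal(size=(d, x.shape[0])) / np.sqrt(x.shape[0])
    return W @ x


# One "token" per modality, as in transformer-style fusion.
tokens = np.stack([
    project(physio, d, rng),
    project(behav, d, rng),
    project(report, d, rng),
])  # shape (3, d)


def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


# Scaled dot-product self-attention across the modality tokens, so each
# modality's representation is reweighted by its agreement with the others.
scores = tokens @ tokens.T / np.sqrt(d)  # (3, 3) attention logits
attn = softmax(scores, axis=-1)
fused = (attn @ tokens).mean(axis=0)  # pooled fused representation, shape (d,)

# Toy classifier head over hypothetical emotion classes.
classes = ["engaged", "frustrated", "bored"]
W_out = rng.normal(size=(len(classes), d))
probs = softmax(W_out @ fused)
print(dict(zip(classes, probs.round(3))))
```

In practice the hybrid approaches mentioned above would combine such learned cross-modal attention with per-modality models (e.g., sequence encoders for physiological signals), but the core pattern of projecting each modality to a shared space and letting attention weight their contributions stays the same.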
