Emotions are context-specific and unfold dynamically during tasks such as programming, yet most emotion recognition systems are trained on generic, task-agnostic data and overlook these contextual nuances. This dissertation bridges affective computing and computing education by developing a context-specific, multi-modal dataset that integrates physiological, behavioral, and self-report data from programming students. Building on this dataset, I explore hybrid and transformer-based fusion techniques for emotion recognition and emotion-cause analysis. The goal is to inform the design of affect-aware learning systems that adapt to students’ needs, improving engagement and persistence in computing education.
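To make the fusion idea concrete, the sketch below shows one common way a transformer can fuse physiological, behavioral, and self-report features: project each modality into a shared embedding space, treat each as a token, and let self-attention combine them before classification. This is a minimal illustration in PyTorch, not the dissertation's actual architecture; all names, feature dimensions, and the emotion class count are assumptions.

```python
# Hypothetical sketch of transformer-based multimodal fusion for emotion
# recognition. Modality names, dimensions, and class count are illustrative
# assumptions, not the dissertation's reported model.
import torch
import torch.nn as nn


class MultimodalFusionClassifier(nn.Module):
    def __init__(self, phys_dim=32, behav_dim=16, report_dim=8,
                 d_model=64, n_heads=4, n_layers=2, n_emotions=5):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.phys_proj = nn.Linear(phys_dim, d_model)
        self.behav_proj = nn.Linear(behav_dim, d_model)
        self.report_proj = nn.Linear(report_dim, d_model)
        # Learned [CLS]-style token whose final state is pooled for prediction.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_emotions)

    def forward(self, phys, behav, report):
        # Each input: (batch, features) for one time window; each modality
        # becomes one token so self-attention fuses across modalities.
        tokens = torch.stack([
            self.phys_proj(phys),
            self.behav_proj(behav),
            self.report_proj(report),
        ], dim=1)                                    # (batch, 3, d_model)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        fused = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(fused[:, 0])                # logits over emotion classes


# Example: a batch of 4 windows of synthetic features.
model = MultimodalFusionClassifier()
logits = model(torch.randn(4, 32), torch.randn(4, 16), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 5])
```

Token-level fusion like this is only one design point; feature concatenation (early fusion) or per-modality classifiers combined at decision time (late fusion) are the usual alternatives a hybrid approach would compare against.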