Abstract: Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity and electrodermal activity (EDA) provide rich information about pain, and both have been used in automated pain detection. In this paper, we discuss preliminary steps towards fusing models trained on video and EDA features, respectively. We demonstrate the benefit of fusion on a test case involving domain adaptation, showing improved accuracy relative to models using EDA or video features alone.
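The abstract does not specify how the video and EDA models are combined. As a minimal sketch, assuming a common score-level (late) fusion scheme, the two modality models could be combined by a weighted average of their predicted pain probabilities; the function names and the weight parameter below are hypothetical, not from the paper.

```python
def fuse_scores(video_prob: float, eda_prob: float, w_video: float = 0.5) -> float:
    """Score-level fusion: weighted average of per-modality pain probabilities.

    w_video is a hypothetical fusion weight; 0.5 gives equal weight to
    the video-based and EDA-based model outputs.
    """
    if not 0.0 <= w_video <= 1.0:
        raise ValueError("w_video must be in [0, 1]")
    return w_video * video_prob + (1.0 - w_video) * eda_prob


def predict_pain(video_prob: float, eda_prob: float,
                 threshold: float = 0.5, w_video: float = 0.5) -> bool:
    """Binary pain / no-pain decision from the fused score."""
    return fuse_scores(video_prob, eda_prob, w_video) >= threshold


# Example: the video model weakly indicates pain while EDA strongly does;
# the fused score can still cross the decision threshold.
fused = fuse_scores(0.4, 0.8)        # equal-weight average -> 0.6
decision = predict_pain(0.4, 0.8)    # True at the default 0.5 threshold
```

More elaborate alternatives (e.g. feature-level fusion or a learned combiner over both models' outputs) follow the same pattern: each modality produces a score, and a fusion rule maps the pair to a single decision.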