Abstract: Accurately determining pain levels in children is difficult, even for trained professionals and parents. Facial activity and electrodermal activity (EDA) provide rich information about pain, and both have been used in automated pain detection. In this paper, we discuss preliminary steps towards fusing models trained on video and EDA features, respectively. We demonstrate the benefit of fusion in a special test case involving domain adaptation, showing improved accuracy relative to using EDA or video features alone.