Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity

Abstract: Facial expressions are fundamental to interpersonal communication and social interaction, allowing people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented by specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondence to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) when combined with interpretable machine learning methods, determine the importance of the different facial actions that human coders use to derive positive and negative affect ratings, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
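The pipeline the abstract describes can be sketched in a few lines: extract facial action unit (AU) intensities from each video, train an interpretable model to predict the human coders' affect intensity ratings, and inspect which AUs drive the predictions. The sketch below is illustrative only and is not the authors' implementation; the file name, column names, use of OpenFace-style AU features, and the choice of a random forest are all assumptions.

```python
# Minimal sketch, assuming AU intensities were pre-extracted per video
# (e.g., with a tool such as OpenFace) and stored alongside coder ratings
# in a hypothetical "au_features.csv".
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Each row = one video; columns beginning with "AU" hold mean action unit
# intensities, "positive_affect" holds the trained coders' intensity rating.
data = pd.read_csv("au_features.csv")
au_cols = [c for c in data.columns if c.startswith("AU")]
X, y = data[au_cols], data["positive_affect"]

model = RandomForestRegressor(n_estimators=500, random_state=0)

# Cross-validated correspondence between predicted and human-coded ratings.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {scores.mean():.2f}")

# Fit on all data and rank AUs by feature importance, analogous to asking
# which facial actions coders rely on when rating affect intensity.
model.fit(X, y)
ranked = sorted(zip(au_cols, model.feature_importances_),
                key=lambda t: t[1], reverse=True)
for au, importance in ranked[:10]:
    print(f"{au}: {importance:.3f}")
```

The same procedure could, in principle, be repeated per individual judge by swapping in that judge's ratings as the target, which is the idea behind inferring which facial actions a given rater uses.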


