Start Making Sense: Identifying Behavioural Indicators When Things Go Wrong During Interaction with Artificial Agents

Sara Dalzel-Job

Robin Hill

Ron Petrick

Abstract
This project examines how people approach collaborative interactions with humans and virtual humans, particularly when encountering ambiguous or unexpected situations. The aim is to create natural and accurate models of users’ behaviour, incorporating social signals and indicators of psychological and physiological states (such as eye movements, galvanic skin response, facial expressions and subjective perceptions of an interlocutor) under different conditions with varying patterns of feedback. The findings from this study will allow artificial agents to be trained to recognise characteristic human behaviour exhibited during communication, and to respond to specific non-verbal cues and biometric feedback with appropriately human-like behaviour. Continuous monitoring of “success” throughout communication, rather than only at its end, allows for a more fluid and agile interaction, ultimately reducing the likelihood of critical failure.
