It’s not only what is said, but how: how user-expressed emotions predict satisfaction with voice assistants in different contexts

John Vara Prasad Ravi

Jan-Hinrich Meyer

Ramon Palau-Saumell

Divya Seernani

Purpose

Voice assistants (VAs) have reshaped customer service by offering new interaction channels. This study explores how user-expressed emotions during interactions with multimodal and voice-only devices across different contexts affect satisfaction. Capturing user emotions via voice tone and speech content analysis, we show that both device type and usage context are crucial in shaping user emotions and satisfaction.

Design/methodology/approach

In three laboratory experiments (n1 = 97; n2 = 97; n3 = 109), participants interacted with different device types in various contexts. The first and second experiments investigate task valence and complexity; the third explores the role of device anthropomorphism in eliciting consumer emotions and satisfaction.

Findings

User satisfaction is contingent on both device type and usage context: different device types are better suited to different tasks and contexts. The emotions users express via voice tone and speech content explain these differences and should be considered when seeking to improve the user experience.

Originality/value

This study proposes an innovative, objective way to assess VA users’ emotions holistically via voice and content, contributing to a better understanding of their role in enhancing or hindering the satisfaction of VA users.

This publication uses Voice Analysis, which is fully integrated into iMotions Lab.
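
To make the measurement idea concrete, below is a minimal sketch of how voice-tone features ("how it is said") and speech-content sentiment ("what is said") might be fused into a single emotion estimate. This is an illustrative assumption, not the study's actual pipeline (which uses the Voice Analysis module in iMotions Lab): the librosa-based acoustic heuristics, the toy valence lexicon, and the file name reply.wav are all hypothetical stand-ins.

```python
# A minimal sketch of holistic emotion estimation from a VA interaction:
# acoustic tone -> arousal, speech content -> valence. Illustrative only.

import numpy as np
import librosa

# Toy valence lexicon -- a stand-in for a full sentiment model.
LEXICON = {"great": 1.0, "love": 0.8, "fine": 0.2,
           "slow": -0.4, "wrong": -0.8, "hate": -1.0}

def tone_arousal(wav_path: str) -> float:
    """Rough vocal-arousal score from pitch variability and loudness."""
    y, sr = librosa.load(wav_path, sr=16_000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)   # fundamental frequency
    rms = librosa.feature.rms(y=y)[0]               # frame-wise energy
    # Heuristic: more pitch variation and energy -> higher arousal.
    return float(np.nanstd(f0) / 100 + rms.mean() * 10)

def content_valence(transcript: str) -> float:
    """Mean valence of transcript words found in the toy lexicon."""
    scores = [LEXICON[w] for w in transcript.lower().split() if w in LEXICON]
    return float(np.mean(scores)) if scores else 0.0

def emotion_estimate(wav_path: str, transcript: str) -> dict:
    """Fuse how it is said (tone) with what is said (content)."""
    return {"arousal": tone_arousal(wav_path),
            "valence": content_valence(transcript)}

# Example: emotion_estimate("reply.wav", "the assistant got it wrong again")
```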
