Abstract: While labor issues and quality assurance in crowdwork are increasingly studied, how annotators make sense of texts, and how they are personally impacted by doing so, are not. We study these questions via a narrative-sorting annotation task, in which collections of tweets, carefully selected by sequentiality, topic, emotional content, and length, serve as examples of everyday storytelling. As readers process these narratives, we measure their facial expressions, galvanic skin response, and self-reported reactions. From the perspective of annotator well-being, a reassuring outcome was that the sorting task did not cause a measurable stress response; readers did, however, react to humor. In terms of sensemaking, readers were more confident when sorting sequential, target-topical, and highly emotional tweets. As crowdsourcing becomes more common, this research sheds light on the perceptive capabilities of human readers and the emotional impact such work has on them.