Home Alone: Social Robots for Digital Ethnography of Toddler Behavior

Mohsen Malmir, Deborah Forster, Kendall Youngstrom, Lydia Morrison, Javier R. Movellan

Abstract: An unprecedented number of children in the US start public school with major deficits in basic academic skills. Scientific evidence shows that children who have early failure experiences in school are those most likely to become inattentive, disruptive, or withdrawn later on. Empirical research using longitudinal randomized control studies is now showing that early childhood education programs can effectively prevent academic deficits. However, due to their high costs, such programs may not find widespread use. Thus it is critical to find innovative ways to gather and analyze data on early childhood education, so as to better understand toddlers' behavior and to conduct rapid, big-data experiments.

As part of this overall vision, the RUBI project started back in 2004 with the goal of studying the potential of social robot technologies in early childhood education. Since then, 5 different robot prototypes have been developed and immersed in an early childhood education center for sustained periods of time. The early prototypes (RUBI-1 and RUBI-2) were remotely operated by humans. RUBI-3 was a transitional design. RUBI-4 was the first prototype to operate autonomously, for a period of 15 days. RUBI-4 provided useful data about toddler behavior. In particular, it was shown that a 2-week period of interaction with RUBI-4 resulted in improved vocabulary skills in 18-24 month olds. However, many of the results found with RUBI-4 required human ethnographers to analyze hundreds of hours of video, a process that was both slow and costly. The latest prototype (RUBI-5) was designed to operate as an autonomous “digital ethnographer” that embeds itself in the daily routine of the toddlers' lives and enriches their environment while gathering and analyzing the observed behaviors.
