Personalization relies on user models: representations of a user's competencies, preferences, and skills that allow a system to adapt its behavior and optimize interaction. However, the anticipated productivity gains are offset by the effort involved in collecting and maintaining such a user model. This is particularly pronounced in systems like ALeA (Adaptive Learning Assistant, https://courses.voll-ki.fau.de/), where learner models contain competency estimates for thousands of concepts across multiple dimensions, here Bloom's learning levels. In this paper we present an exploratory study design that aims to determine whether close visual observation of learners can be used to elicit competency data automatically, a task that human educators perform routinely when teaching small groups of learners and that adaptive learning systems should be equipped to mimic.
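To make the scale of such a learner model concrete, the following is a minimal sketch of a competency store keyed by (concept, Bloom level) pairs. ALeA's actual schema is not reproduced here; the names `LearnerModel`, `BLOOM_LEVELS`, and the [0, 1] estimate range are illustrative assumptions, not the system's implementation.

```python
from dataclasses import dataclass, field

# Illustrative assumption: the six levels of the revised Bloom taxonomy
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

@dataclass
class LearnerModel:
    # competency estimate in [0, 1] per (concept, Bloom level) pair
    competencies: dict = field(default_factory=dict)

    def update(self, concept: str, level: str, estimate: float) -> None:
        if level not in BLOOM_LEVELS:
            raise ValueError(f"unknown Bloom level: {level}")
        # clamp into [0, 1] so noisy evidence cannot push estimates out of range
        self.competencies[(concept, level)] = max(0.0, min(1.0, estimate))

    def get(self, concept: str, level: str) -> float:
        # unobserved pairs default to 0.0 (no evidence of competency yet)
        return self.competencies.get((concept, level), 0.0)

model = LearnerModel()
model.update("derivative", "understand", 0.7)
print(model.get("derivative", "understand"))  # 0.7
print(model.get("derivative", "apply"))       # 0.0
```

With thousands of concepts and six Bloom levels, manually eliciting each entry is infeasible, which motivates the automatic, observation-based elicitation the study investigates.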