Evaluating Vision Language Models in Detecting Learning Engagement

With the advancement of both computer vision and natural language processing, there is growing interest in incorporating Vision Language Models (VLMs) into the classroom to empower students and educators. Despite VLMs' sophisticated abilities in context-aware emotion recognition, their effectiveness in detecting classroom-specific emotions, e.g., engagement, distraction, and absent-mindedness, remains underexplored. As such, this paper investigates the capabilities of two state-of-the-art VLMs in this domain through an empirical study, focusing on two research questions: (1) Is learning engagement detection more challenging for VLMs than conventional emotion detection? (2) What are the key difficulties VLMs face in learning engagement detection tasks? To address these questions, we perform a series of evaluation experiments using a classroom behavior detection dataset and an emotion recognition dataset. We conclude that VLMs that perform well on basic emotion recognition struggle with in-context engagement detection, owing to the nuanced and context-dependent nature of the task. Specifically, the experiments show that VLMs have difficulty distinguishing engaged from distracted classroom behavior, e.g., reading versus bowing the head. This suggests that VLMs still have significant room for improvement in engagement analysis, an issue that can potentially be addressed by incorporating more classroom-specific training data or commonsense reasoning frameworks.