Date:
Publisher: arXiv
In early childhood education, accurately detecting behavioral and
collaborative engagement is essential for fostering meaningful learning
experiences. This paper presents an AI-driven approach that leverages Vision
Transformers (ViTs) to automatically classify children's engagement using
visual cues such as gaze direction, interaction, and peer collaboration.
Utilizing the Child-Play gaze dataset, our method is trained on annotated video
segments to classify behavioral and collaborative engagement states (e.g.,
engaged, not engaged, collaborative, not collaborative). We evaluated three
state-of-the-art transformer models: Vision Transformer (ViT), Data-efficient
Image Transformer (DeiT), and Swin Transformer. Among these, the Swin
Transformer achieved the highest classification performance with an accuracy of
97.58%, demonstrating its effectiveness in modeling local and global attention.
Our results highlight the potential of transformer-based architectures for
scalable, automated engagement analysis in real-world educational settings.
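The paper does not include code, but the described setup (fine-tuning a pretrained Swin Transformer to classify frames from annotated video segments into engagement classes) can be sketched as below. This is a minimal illustration, not the authors' implementation: the timm model name, the folder layout (data/train/<class_name>/*.jpg with the four engagement labels), and all hyperparameters are assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): fine-tune a pretrained
# Swin Transformer for 4-way engagement classification on extracted video frames.
import timm
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical frame layout: data/train/<class_name>/*.jpg, with classes
# engaged / not_engaged / collaborative / not_collaborative.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained Swin backbone with a fresh 4-class classification head.
model = timm.create_model("swin_base_patch4_window7_224",
                          pretrained=True, num_classes=4)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(model.parameters(), lr=1e-4)  # illustrative values

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping the model name for a ViT or DeiT checkpoint (e.g. via timm) would reproduce the paper's three-way model comparison under the same training loop.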
What is the application?
Who is the user?
What age?
Why use AI?
Study design
