Multimodal Assessment of Classroom Discourse Quality: A Text-Centered Attention-Based Multi-Task Learning Approach

Authors
Ruikun Hou,
Babette Bühler,
Tim Fütterer,
Efe Bozkir,
Peter Gerjets,
Ulrich Trautwein,
Enkelejda Kasneci
Date
Publisher
arXiv
Classroom discourse is an essential vehicle through which teaching and learning take place. Assessing different characteristics of discursive practices and linking them to student learning achievement enhances the understanding of teaching quality. Traditional assessments rely on manual coding of classroom observation protocols, which is time-consuming and costly. Although many studies have utilized AI techniques to analyze classroom discourse at the utterance level, investigations into the evaluation of discursive practices throughout an entire lesson segment remain limited. Existing discourse assessment approaches primarily depend on transcript-based analyses, neglecting the non-verbal modalities crucial for comprehensive evaluation. To address this gap, our study proposes a novel text-centered multimodal fusion architecture to assess the quality of three discourse components grounded in the Global Teaching InSights (GTI) observation protocol: Nature of Discourse, Questioning, and Explanations. First, we employ attention mechanisms to capture inter- and intra-modal interactions from transcript, audio, and video streams. Second, a multi-task learning approach is adopted to jointly predict the quality scores of the three components. Third, we formulate the task as an ordinal classification problem to account for rating level order. The effectiveness of these designed elements is demonstrated through an ablation study on the GTI Germany dataset containing 92 videotaped math lessons. Our results highlight the dominant role of the text modality in approaching this task. Integrating acoustic features enhances the model's consistency with human ratings, achieving an overall Quadratic Weighted Kappa score of 0.384, comparable to human interrater reliability (0.326). Furthermore, correlation analyses between predicted ratings and student outcomes (i.e., test scores, interest, and self-efficacy) reveal partial alignment with those derived from human ratings. Our study lays the groundwork for the future development of automated discourse quality assessment to support teacher professional development through timely feedback on multidimensional discourse practices.
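The abstract names three design elements: text-centered cross-modal attention, multi-task prediction of the three discourse components, and an ordinal formulation of the rating task. The sketch below illustrates how these pieces could fit together in PyTorch. It is a minimal illustration under assumed details, not the authors' implementation: the module names, feature dimensions, mean pooling, and the cumulative-logit ordinal encoding are all assumptions made for the example.

import torch
import torch.nn as nn

class TextCenteredFusion(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_levels=4, n_tasks=3):
        super().__init__()
        # Inter-modal attention: text queries attend to audio and video keys/values,
        # keeping the text stream as the anchor modality.
        self.text_to_audio = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_to_video = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Intra-modal self-attention over the fused, text-centered sequence.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # One head per discourse component (multi-task learning). Each head emits
        # n_levels - 1 cumulative logits ("is the rating greater than level k?").
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, n_levels - 1) for _ in range(n_tasks)]
        )

    def forward(self, text, audio, video):
        # text/audio/video: (batch, seq_len, d_model) pre-extracted features.
        ta, _ = self.text_to_audio(text, audio, audio)  # text attends to audio
        tv, _ = self.text_to_video(text, video, video)  # text attends to video
        fused = text + ta + tv
        fused, _ = self.self_attn(fused, fused, fused)  # intra-modal refinement
        pooled = fused.mean(dim=1)                      # segment-level representation
        return [head(pooled) for head in self.heads]    # one logit vector per task

def ordinal_loss(logits, labels, n_levels=4):
    # Cumulative-link style ordinal loss: binary targets "label > k" per threshold,
    # so adjacent rating levels share most of their targets.
    targets = torch.stack(
        [(labels > k).float() for k in range(n_levels - 1)], dim=1
    )
    return nn.functional.binary_cross_entropy_with_logits(logits, targets)

# Toy usage with random features (batch=2, seq_len=10, d_model=256).
model = TextCenteredFusion()
text, audio, video = (torch.randn(2, 10, 256) for _ in range(3))
task_logits = model(text, audio, video)
labels = [torch.tensor([0, 2]), torch.tensor([1, 3]), torch.tensor([2, 1])]
loss = sum(ordinal_loss(l, y) for l, y in zip(task_logits, labels))

For evaluation, the Quadratic Weighted Kappa reported above can be computed with scikit-learn's cohen_kappa_score(y_true, y_pred, weights="quadratic"), comparing predicted rating levels against human codes.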