Publisher: arXiv
The rise of Human-AI Collaborative Learning (HAICL) is shifting education toward dialogue-centric paradigms, creating an urgent need for new assessment methods. Evaluating Self-Regulated Learning (SRL) in this context presents new challenges, as the limitations of conventional approaches become more apparent. Questionnaires remain interruptive, while the utility of non-interruptive metrics like clickstream data is diminishing as more learning activity occurs within the dialogue. This study therefore investigates whether the student-AI dialogue can serve as a valid, non-interruptive data source for SRL assessment. We analyzed 421 dialogue logs from 98 university students interacting with a generative AI (GenAI) learning partner. Using large language model embeddings and clustering, we identified 22 dialogue patterns and quantified each student's interaction as a profile of alignment scores, which were analyzed against their Online Self-Regulated Learning Questionnaire (OSLQ) scores. Findings revealed a significant positive association between proactive dialogue patterns (e.g., post-class knowledge integration) and overall SRL. Conversely, reactive patterns (e.g., foundational pre-class questions) were significantly and negatively associated with overall SRL and its sub-processes. A group comparison substantiated these results, with low-SRL students showing significantly higher alignment with reactive patterns than their high-SRL counterparts. This study proposes the Dialogue-Based Human-AI Self-Regulated Learning (DHASRL) framework, a practical methodology for embedding SRL assessment directly within the HAICL dialogue to enable real-time monitoring and scaffolding of student regulation.
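The embedding-and-clustering step described above can be sketched roughly as follows. This is a minimal illustration, not the authors' method: the actual embedding model, clustering algorithm, and alignment-score definition are not specified in the abstract, so random vectors stand in for LLM embeddings, scikit-learn KMeans stands in for the clustering step, and the alignment profile is assumed to be each student's distribution of dialogue logs over the 22 patterns.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_logs, dim, n_patterns, n_students = 421, 64, 22, 98

# Stand-in for LLM embeddings of the 421 dialogue logs (assumption:
# the real study would embed each log with a language model).
embeddings = rng.normal(size=(n_logs, dim))
# Which of the 98 students produced each log (synthetic assignment).
students = rng.integers(0, n_students, size=n_logs)

# Cluster the log embeddings into 22 dialogue patterns.
km = KMeans(n_clusters=n_patterns, n_init=10, random_state=0).fit(embeddings)

# Alignment profile (assumed form): for each student, the fraction of
# their logs assigned to each dialogue pattern.
profiles = np.zeros((n_students, n_patterns))
for log_idx, student in enumerate(students):
    profiles[student, km.labels_[log_idx]] += 1
row_sums = profiles.sum(axis=1, keepdims=True)
profiles = np.divide(profiles, row_sums,
                     out=np.zeros_like(profiles), where=row_sums > 0)
```

Each row of `profiles` is then one student's pattern-alignment vector, which could be correlated against that student's OSLQ score as in the analysis described above.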
What is the application?
Who is the user?
What is the users' age group?
Why use AI?
Study design
