Publisher: arXiv
Large language models (LLMs) have been applied as expert systems across
various domains, creating new opportunities for AI in Education. Educational
interactions involve a cyclical exchange between
teachers and students. Current research predominantly focuses on using LLMs to
simulate teachers, leveraging their expertise to enhance student learning
outcomes. However, the simulation of students, which could improve teachers'
instructional skills, has received insufficient attention due to the challenges
of modeling and evaluating virtual students. This research asks: Can LLMs be
used to develop virtual student agents that exhibit human-like behavior and
individual variability? Unlike expert systems, which focus on knowledge delivery,
virtual students must replicate learning difficulties, emotional responses, and
linguistic uncertainties. These traits present significant challenges in both
modeling and evaluation. To address these issues, this study focuses on
language learning as a context for modeling virtual student agents. We propose
a novel AI4Education framework, called SOE (Scene-Object-Evaluation), to
systematically construct LVSA (LLM-based Virtual Student Agents). By curating a
dataset of personalized teacher-student interactions with various personality
traits, question types, and learning stages, and fine-tuning LLMs using LoRA,
we conduct multi-dimensional evaluation experiments. Specifically, we: (1)
develop a theoretical framework for generating LVSA; (2) integrate human
subjective evaluation metrics into GPT-4 assessments, demonstrating a strong
correlation between human evaluators and GPT-4 in judging LVSA authenticity;
and (3) validate that LLMs can generate human-like, personalized virtual
student agents in educational contexts, laying a foundation for future
applications in pre-service teacher training and multi-agent simulation
environments.
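
To make the LoRA fine-tuning step concrete, the sketch below shows one
conventional way to attach low-rank adapters to a causal LLM and train it on
persona-conditioned teacher-student exchanges using Hugging Face
`transformers` and `peft`. This is a minimal sketch under stated assumptions:
the base model, the file `lvsa_dialogues.jsonl`, its field names (`teacher`,
`persona`, `student`, `question_type`, `stage`), and all hyperparameters are
illustrative placeholders, not the paper's actual configuration.

```python
# Minimal LoRA fine-tuning sketch for a virtual-student dialogue model.
# All names and hyperparameters here are illustrative, not the paper's setup.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "Qwen/Qwen2-7B"  # assumed base model; the abstract does not name one

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:          # causal LMs often ship without one
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Low-rank adapters on the attention projections; typical default settings.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Hypothetical JSONL: one teacher-student exchange per line, with the
# personality trait, question type, and learning stage folded into the text.
data = load_dataset("json", data_files="lvsa_dialogues.jsonl", split="train")

def to_features(ex):
    text = (f"Teacher ({ex['question_type']}, {ex['stage']}): {ex['teacher']}\n"
            f"Student ({ex['persona']}): {ex['student']}")
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(to_features, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lvsa-lora",
                           per_device_train_batch_size=2,
                           num_train_epochs=3, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Because only the adapter weights are trained, a single base model can host
several student personas as separate, lightweight LoRA checkpoints.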
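Point (2) is, operationally, a rater-agreement check: GPT-4 is prompted with
an authenticity rubric of the kind given to human judges, and its scores are
correlated with theirs. Below is a minimal sketch of that comparison,
assuming the OpenAI Python client; the rubric wording, example items, and
human scores are made-up placeholders, not the paper's data.

```python
# Sketch: score virtual-student replies with GPT-4 under a human-style rubric,
# then correlate GPT-4's scores with human ratings. All data is placeholder.
from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = ("Rate how authentic this virtual student's reply sounds on a 1-5 "
          "scale, considering learning difficulty, emotion, and hesitancy. "
          "Answer with a single digit.")

def gpt4_score(teacher_turn: str, student_reply: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": RUBRIC},
                  {"role": "user",
                   "content": f"Teacher: {teacher_turn}\n"
                              f"Student: {student_reply}"}])
    # Assumes the model complies with the single-digit instruction.
    return int(resp.choices[0].message.content.strip()[0])

# Hypothetical items with human authenticity scores (placeholders).
items = [
    ("What does 'metaphor' mean?",
     "Um... when you compare two things? I'm not really sure."),
    ("Read the sentence aloud, please.",
     "I will now execute the requested reading task with optimal fluency."),
    ("Why did the character feel sad?",
     "Maybe because... her friend moved away? That part confused me a bit."),
]
human_scores = [5, 1, 4]

gpt4_scores = [gpt4_score(t, s) for t, s in items]
rho, p = spearmanr(human_scores, gpt4_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Spearman's rho suits ordinal Likert-style ratings; with several human raters,
an inter-rater statistic such as Fleiss' kappa would be the usual complement.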