Date
Publisher
arXiv
In the age of AI-powered education (AIED) innovation, evaluating the
developmental consequences of novel designs before they are exposed to students
has become both essential and challenging. Since such interventions may carry
irreversible effects, it is critical to anticipate not only their potential
benefits but also their possible harms. This study proposes a student
development agent framework based on large language models (LLMs), designed to
simulate how students with diverse characteristics may evolve under different
educational settings without administering those settings to real students.
Validating the approach through a case study on a multi-agent learning
environment (MAIC), we demonstrate that the agent's predictions align with real
student outcomes in non-cognitive development. The results suggest that
LLM-based simulations hold promise for evaluating AIED innovations efficiently
and ethically. Future directions include enhancing profile structures,
incorporating fine-tuned or small task-specific models, validating simulated
effects against empirical findings, interpreting simulated data, and optimizing
evaluation methods.
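To make the idea concrete, below is a minimal Python sketch of what an LLM-based student development agent loop could look like. This is not the paper's implementation: the profile fields, prompt wording, model name, and the function names StudentProfile, simulate_session, and simulate_development are all illustrative assumptions, and the OpenAI chat-completions client stands in for whatever LLM backend the authors actually used.

"""Illustrative sketch of an LLM-based student development agent.

Assumptions (not from the paper): profile fields, prompt wording, the
model choice, and the OpenAI client are stand-ins for the authors'
actual setup.
"""
from dataclasses import dataclass, field

from openai import OpenAI  # assumed LLM backend; any chat API would do

client = OpenAI()


@dataclass
class StudentProfile:
    """Diverse student characteristics the agent roleplays."""
    name: str
    prior_knowledge: str  # e.g. "novice", "intermediate"
    motivation: str       # e.g. "low intrinsic motivation"
    traits: list[str] = field(default_factory=list)
    # Non-cognitive state, updated after each simulated session.
    non_cognitive_state: str = "baseline engagement and confidence"


def simulate_session(profile: StudentProfile, intervention: str) -> str:
    """Ask the LLM to predict how one session of the intervention
    changes the simulated student's non-cognitive state."""
    prompt = (
        f"You are simulating a student named {profile.name} with "
        f"{profile.prior_knowledge} prior knowledge, "
        f"{profile.motivation}, and traits {profile.traits}. "
        f"Current non-cognitive state: {profile.non_cognitive_state}. "
        f"The student experiences this educational setting: {intervention}. "
        "Describe, in 2-3 sentences, the student's updated non-cognitive "
        "state (engagement, confidence, persistence) after the session."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


def simulate_development(profile: StudentProfile,
                         intervention: str,
                         sessions: int = 5) -> StudentProfile:
    """Roll the agent forward through several sessions, feeding each
    predicted state into the next prompt."""
    for _ in range(sessions):
        profile.non_cognitive_state = simulate_session(profile, intervention)
    return profile


if __name__ == "__main__":
    student = StudentProfile(
        name="Ada",
        prior_knowledge="novice",
        motivation="low intrinsic motivation",
        traits=["anxious about tests", "responds well to peer discussion"],
    )
    final = simulate_development(
        student,
        "a multi-agent AI classroom with teacher and peer agents (MAIC-style)",
    )
    print(final.non_cognitive_state)

Under these assumptions, evaluation would then compare the simulated non-cognitive trajectories against outcomes measured from real students, as the case study does.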
What is the application?
Who is the user?
What age is the user?
Why use AI?
Study design
