Date:
Publisher: arXiv
Formative assessment is a cornerstone of effective teaching and learning,
providing students with feedback to guide their learning. While applications
of generative AI to formative assessment have grown rapidly, ranging from
automatic question generation to intelligent tutoring systems and personalized
feedback, few studies have directly addressed the core pedagogical principles
of formative assessment. Here, we critically examine how generative AI,
especially large language models (LLMs)
such as ChatGPT, can support key components of formative assessment: helping
students, teachers, and peers understand "where learners are going," "where
learners currently are," and "how to move learners forward" in the learning
process. With the rapid emergence of new prompting techniques and LLM
capabilities, we also provide guiding principles for educators to effectively
leverage cost-free LLMs in formative assessments while remaining grounded in
pedagogical best practices. Furthermore, we review the role of LLMs in
generating feedback, highlighting limitations in current evaluation metrics
that inadequately capture the nuances of formative feedback, such as
distinguishing feedback at the task, process, and self-regulatory levels.
Finally, we offer practical guidelines for educators and researchers, including
concrete classroom strategies and future directions such as developing robust
metrics to assess LLM-generated feedback, leveraging LLMs to overcome systemic
and cultural barriers to formative assessment, and designing AI-aware
assessment strategies that promote transferable skills while mitigating
overreliance on LLM-generated responses. By structuring the discussion within
an established formative assessment framework, this review provides a
comprehensive foundation for integrating LLMs into formative assessment in a
pedagogically informed manner.
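To make the framing above concrete, below is a minimal, hypothetical sketch of how an educator might structure a prompt for a cost-free LLM so that the generated feedback is organized around the three formative questions and distinguishes task-, process-, and self-regulation-level comments. The function name, rubric fields, and prompt wording are illustrative assumptions, not a method prescribed by the review.

```python
# Hypothetical sketch: structuring an LLM prompt around formative assessment
# principles. All names and wording here are illustrative assumptions only.

def build_formative_feedback_prompt(
    learning_goal: str,      # "where the learner is going"
    student_work: str,       # evidence of "where the learner currently is"
    success_criteria: str,   # criteria used to judge the work
) -> str:
    """Compose a prompt asking an LLM for feedback at the task, process,
    and self-regulation levels, without revealing a complete solution."""
    return f"""You are a supportive teaching assistant giving formative feedback.

Learning goal (where the learner is going):
{learning_goal}

Success criteria:
{success_criteria}

Student work (where the learner currently is):
{student_work}

Give feedback that helps the learner move forward:
1. Task level: what is correct or incorrect in this specific work?
2. Process level: what strategies or steps would improve similar work?
3. Self-regulation level: what questions should the learner ask themselves
   to monitor and direct their own learning?

Do not provide a complete corrected answer; guide the learner instead."""


if __name__ == "__main__":
    prompt = build_formative_feedback_prompt(
        learning_goal="Explain photosynthesis in terms of inputs and outputs.",
        success_criteria="Names reactants and products; links light to energy.",
        student_work="Plants eat sunlight and make air.",
    )
    print(prompt)  # Paste into any cost-free LLM chat interface.
```

Separating the task, process, and self-regulation instructions in the prompt mirrors the feedback levels that, as argued above, current evaluation metrics inadequately distinguish.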
What is the application? Using generative AI (LLMs such as ChatGPT) to support formative assessment, including question generation, intelligent tutoring, and personalized feedback.
Who is the user? Students, teachers, and peers engaged in formative assessment, as well as educators and researchers applying LLMs.
Why use AI? To scale formative assessment and feedback generation while remaining grounded in pedagogical best practices.
Study design: Critical review structured around an established formative assessment framework.
