Date:
Publisher: arXiv
Feedback is one of the most crucial components in facilitating effective
learning. With the rise of large language models (LLMs) in recent years,
research in programming education has increasingly focused on automated
feedback generation to help teachers provide timely support to every student.
However, prior studies often overlook key pedagogical principles, such as
mastery and progress adaptation, that shape effective feedback strategies. This
paper introduces a novel pedagogical framework for LLM-driven feedback
generation derived from established feedback models and local insights from
secondary school teachers. To evaluate this framework, we implemented a
web-based application for Python programming with LLM-based feedback that
follows the framework and conducted a mixed-method evaluation with eight
secondary-school computer science teachers. Our findings suggest that teachers
believe that, when aligned with the framework, LLMs can effectively support
students and even outperform human teachers in certain scenarios by providing
instant and precise feedback. However, we also found several limitations, such
as the LLM's inability to adapt feedback to dynamic classroom contexts. Such a
limitation highlights the need to complement LLM-generated feedback with human
expertise to ensure effective student learning. This work demonstrates an
effective way to use LLMs for feedback while adhering to pedagogical standards
and highlights important considerations for future systems.
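To make the setup concrete, the sketch below shows one way such LLM-based feedback on a student's Python submission could be generated. It is an illustrative assumption, not the paper's implementation: the OpenAI client, the gpt-4o-mini model name, the prompt wording, and the mastery_level parameter are placeholders standing in for whatever the authors' web application actually uses.

# Minimal illustrative sketch (not the paper's implementation): generating
# feedback on a student's Python submission with an OpenAI-style chat API.
# The client, model name, prompt wording, and mastery_level parameter are
# assumptions, not details taken from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a secondary-school Python tutor. Give short, specific, "
    "encouraging feedback adapted to the student's mastery level, and "
    "point toward the fix without revealing the full solution."
)

def generate_feedback(task: str, student_code: str, mastery_level: str) -> str:
    """Request one piece of feedback for a single submission."""
    user_prompt = (
        f"Task: {task}\n"
        f"Student mastery level: {mastery_level}\n"
        f"Student code:\n{student_code}\n"
        "Give one short, actionable piece of feedback."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper does not name a model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_feedback(
        task="Sum the numbers from 1 to n.",
        student_code="def total(n):\n    return sum(range(n))",  # off-by-one: excludes n
        mastery_level="novice",
    ))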
What is the application?
Who is the user?
What age?
Why use AI?
Study design
