Date:
Publisher: arXiv
The growing need for automated and personalized feedback in programming
education has led to recent interest in leveraging generative AI for feedback
generation. However, current approaches tend to rely on prompt engineering
techniques in which predefined prompts guide the AI to generate feedback. This
can result in rigid and constrained responses that fail to accommodate the
diverse needs of students and do not reflect the style of human-written
feedback from tutors or peers. In this study, we explore learnersourcing as a
means to fine-tune language models for generating feedback that is more similar
to that written by humans, particularly peer students. Specifically, we asked
students to act in the flipped role of a tutor and write feedback on programs
containing bugs. We collected approximately 1,900 instances of student-written
feedback on multiple programming problems and buggy programs. To establish a
baseline for comparison, we analyzed a sample of 300 instances based on
correctness, length, and how the bugs are described. Using this data, we
fine-tuned open-access generative models, specifically Llama3 and Phi3. Our
findings indicate that fine-tuning models on learnersourced data not only
produces feedback that better matches the style of feedback written by
students, but also improves accuracy compared to feedback generated through
prompt engineering alone, even though some student-written feedback is
incorrect. This surprising finding highlights the potential of student-centered
fine-tuning to improve automated feedback systems in programming education.
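To make the fine-tuning setup described in the abstract more concrete, the sketch below shows one plausible way to run LoRA-based supervised fine-tuning of an open-access model (Llama3 or Phi3) on learnersourced feedback pairs with Hugging Face `transformers` and `peft`. The data file name, the field names (`problem`, `buggy_program`, `feedback`), the prompt template, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: supervised fine-tuning on learnersourced feedback pairs.
# Assumes a JSONL file where each line has "problem", "buggy_program", and
# "feedback" fields (field names are hypothetical, not from the paper).
import json

import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # or "microsoft/Phi-3-mini-4k-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).to(device)

# LoRA keeps fine-tuning of open-access models affordable.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

def build_example(record):
    # Frame each learnersourced instance as: buggy program -> peer-style feedback.
    prompt = (
        f"Problem:\n{record['problem']}\n\n"
        f"Buggy program:\n{record['buggy_program']}\n\n"
        "Write feedback for the student:\n"
    )
    return prompt + record["feedback"] + tokenizer.eos_token

records = [json.loads(line) for line in open("learnersourced_feedback.jsonl")]
texts = [build_example(r) for r in records]

def collate(batch):
    enc = tokenizer(batch, padding=True, truncation=True, max_length=1024, return_tensors="pt")
    # Ignore padding positions in the loss.
    enc["labels"] = enc["input_ids"].masked_fill(enc["attention_mask"] == 0, -100)
    return enc

loader = DataLoader(texts, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

model.train()
for epoch in range(3):
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("llama3-feedback-lora")
```

In this framing, each learnersourced instance becomes a (buggy program, peer feedback) training pair, so the model learns to imitate the style of student-written feedback rather than follow a fixed, predefined prompt as in the prompt-engineering baseline.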
What is the application?
What age group?
Why use AI?
Study design
