The Impact of Student-AI Collaborative Feedback Generation on Learning Outcomes

This study explores the effectiveness of student-AI collaborative feedback generation for enhancing the learning outcomes of students who author feedback. Situated in an online graduate-level data science course, the research compares learning designs that engage students in writing hints for incorrect programming assignment solutions. We conducted an experiment comparing three designs: Baseline (students write hints independently), AI-assistance (students write hints with on-demand access to GPT-4-generated hints for the incorrect solution), and AI-revision (students first write hints independently, then revise them after viewing GPT-4-generated hints). Our findings suggest that the AI-revision approach, which encourages students to first attempt the task independently and then engage with AI-generated content, can lead to better learning outcomes than the AI-assistance approach, which gives students constant access to AI-generated hints. The AI-revision approach prompts students to critically evaluate AI-generated responses and refine their own hints, fostering deeper understanding. This work contributes to research on integrating generative AI into educational settings, emphasizing the need for designs that promote active student engagement with AI tools.
Publisher: OpenReview
Keywords: Study design, Who is the user?, Who benefits?, What is the application?, Why use AI?