Date:
Publisher: arXiv
Because generative AI products can generate code and seamlessly assist students
with programming learning, integrating AI into programming education contexts
has drawn much attention. However, one emerging concern is that students might
obtain answers without learning from the LLM-generated content. In this work,
we deployed LLM-powered personalized Parsons puzzles as scaffolding for
write-code practice in a Python learning classroom (PC condition) and conducted
an 80-minute randomized between-subjects study. Both conditions received the
same practice problems; the only difference was that, when requesting help,
students in the control condition were shown a complete solution (CC
condition), simulating the most traditional LLM output. Results indicated that
students who received personalized Parsons puzzles as scaffolding engaged in
practice significantly longer when struggling than those who received complete
solutions.
What is the application?
What age group are the participants?
Why use AI?
Study design