Publisher: arXiv
Feedback plays a central role in learning, yet pre-service teachers'
engagement with feedback depends not only on its quality but also on their
perception of the feedback content and source. Large Language Models (LLMs) are
increasingly used to provide educational feedback; however, negative
perceptions may limit their practical use, and little is known about how
pre-service teachers' perceptions and behavioral responses differ by feedback
source. This study investigates how the perceived source of feedback (LLM,
expert, or peer) influences feedback perception and uptake, and whether
recognition accuracy and feedback quality moderate these effects. In a
randomized experiment with 273 pre-service teachers, participants received
written feedback on a mathematics learning goal, identified its source, rated
feedback perceptions across five dimensions (fairness, usefulness, acceptance,
willingness to improve, positive and negative affect), and revised the learning
goal according to the feedback (i.e., feedback uptake). Results revealed that
LLM-generated feedback received the highest ratings in fairness and usefulness,
leading to the highest uptake (52%). Recognition accuracy significantly
moderated the effect of feedback source on perception, with particularly
positive evaluations when LLM feedback was falsely ascribed to experts.
Higher-quality feedback was consistently assigned to experts, indicating an
expertise heuristic in source judgments. Regression analysis showed that only
feedback quality significantly predicted feedback uptake. Findings highlight
the need to address source-related biases and promote feedback and AI literacy
in teacher education.
What is the application?
Who is the user?
What age group?
Why use AI?
Study design
