Date:
Publisher: arXiv
One-on-one tutoring is widely acknowledged as an effective instructional
method, provided that tutors are qualified. However, the high demand for qualified
tutors remains a challenge, often necessitating the training of novice tutors
(i.e., trainees) to ensure effective tutoring. Research suggests that providing
timely explanatory feedback can facilitate the training process for trainees.
Providing such feedback is challenging, however, because expert assessment of
trainee performance is time-consuming. Inspired by recent advances in
large language models (LLMs), our study employed the GPT-4 model to build an
explanatory feedback system. The system classifies trainees' responses in
binary form (i.e., correct/incorrect) and automatically provides template-based
feedback, with the responses appropriately rephrased by the GPT-4 model. We
conducted our study on 410 responses from trainees across three training
lessons: Giving Effective Praise, Reacting to Errors, and Determining What
Students Know. Our findings indicate that: 1) with a few-shot approach, the
GPT-4 model effectively identifies correct and incorrect trainee responses
across the three training lessons, with an average F1 score of 0.84 and an AUC
score of 0.85; and 2) with the same few-shot approach, the GPT-4 model adeptly
rephrases incorrect trainee responses into desired responses, achieving
performance comparable to that of human experts.
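
As a rough illustration of the two GPT-4 steps the abstract describes (few-shot classification of a trainee response, then rephrasing an incorrect response into a desired one), a minimal sketch using the OpenAI Chat Completions API is shown below. The prompts, example responses, and helper names are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch, assuming the OpenAI Python SDK (v1) and GPT-4 access.
# Step 1: few-shot binary classification of a trainee response.
# Step 2: rephrasing an incorrect response into effort-focused (desired) praise.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FEW_SHOT_EXAMPLES = [
    # (trainee response, label) pairs for a "Giving Effective Praise" style lesson
    ("Great job working through every step of that problem!", "correct"),
    ("You're so smart, you always get these right.", "incorrect"),
]

def classify_response(trainee_response: str) -> str:
    """Label a trainee response as 'correct' or 'incorrect' via few-shot prompting."""
    messages = [{"role": "system",
                 "content": "You judge tutor praise. Answer 'correct' or 'incorrect' only."}]
    for example, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": trainee_response})
    reply = client.chat.completions.create(model="gpt-4", messages=messages, temperature=0)
    return reply.choices[0].message.content.strip().lower()

def rephrase_response(trainee_response: str) -> str:
    """Rewrite an incorrect response so the praise focuses on the student's effort."""
    reply = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Rewrite the tutor's praise so it focuses on the student's "
                        "effort, staying as close to the original wording as possible."},
            {"role": "user", "content": trainee_response},
        ],
    )
    return reply.choices[0].message.content.strip()

if __name__ == "__main__":
    response = "You're a natural at math!"
    if classify_response(response) == "incorrect":
        print(rephrase_response(response))
```

In this sketch, an incorrect classification triggers the rephrasing step, which is how template-based feedback could embed the rephrased response.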
What is the application?
Who is the user?
What age is the user?
Why use AI?
