Date:
Publisher: arXiv
We present an empirical study of how both experienced tutors and non-tutors
judge the correctness of tutor praise responses under Artificial
Intelligence (AI)-assisted interfaces with different types of explanation
(textual explanations vs. inline highlighting). We first fine-tuned several Large
Language Models (LLMs) to produce binary correctness labels and explanations,
achieving up to 88% accuracy and 0.92 F1 score with GPT-4. We then let the
GPT-4 models assist 95 participants in tutoring decision-making tasks by
offering different types of explanations. Our findings show that although
human-AI collaboration outperforms humans alone in evaluating tutor responses,
it remains less accurate than AI alone. Moreover, we find that non-tutors tend
to follow the AI's advice more consistently, which boosts their overall
accuracy on the task, especially when the AI is correct. In contrast,
experienced tutors often override the AI's correct suggestions and thus miss
out on potential gains from the AI's generally high baseline accuracy. Further
analysis reveals that textual reasoning explanations increase over-reliance and
reduce under-reliance, whereas inline highlighting does not. Moreover, neither
explanation style has a significant effect on task performance, and both cost
participants additional time to complete the task rather than saving it. Our
findings reveal a tension between expertise, explanation
saving time. Our findings reveal a tension between expertise, explanation
design, and efficiency in AI-assisted decision-making, highlighting the need
for balanced approaches that foster more effective human-AI collaboration.
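For concreteness, the minimal sketch below shows how the reported accuracy and F1 score could be computed for binary correctness labels using scikit-learn; the gold and predicted label lists here are hypothetical placeholders, not data from the study.

```python
# Hypothetical illustration: scoring binary correctness labels
# (1 = correct tutor praise response, 0 = incorrect).
# Label values are made up for demonstration only.
from sklearn.metrics import accuracy_score, f1_score

gold_labels = [1, 0, 1, 1, 0, 1, 0, 1]    # human-annotated correctness
model_labels = [1, 0, 1, 1, 1, 1, 0, 1]   # LLM-predicted correctness

print(f"Accuracy: {accuracy_score(gold_labels, model_labels):.2f}")
print(f"F1 score: {f1_score(gold_labels, model_labels):.2f}")
```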
What is the application?
Who is the user?
What age?
Why use AI?
Study design
