Date:
Publisher: arXiv
This study with 40 high-school students demonstrates the strong influence of a
social educational robot on students' decision-making for a set of eight
true-false questions on electric circuits, the theory for which had been
covered in the students' courses. The robot argued for the correct answer on
six questions and for the wrong answer on two, and it persuaded 75% of the
students to perform outside their expected capacity: better when the robot was
correct and worse when it was wrong. Students with more experience of using
large language models were even more likely to be influenced by the robot's
stance, in particular on the two easiest questions, on which the robot was
wrong, suggesting that familiarity with AI can increase susceptibility to
misinformation from AI.
We further examined how three levels of portrayed robot certainty, conveyed
through semantics, prosody, and facial signals, affected how often the
students aligned with the robot's answer on specific questions and how
convincing they perceived the robot to be on those questions. The students
aligned with the robot's answers in 94.4% of the cases when the robot was
portrayed as Certain, 82.6% when it was Neutral, and 71.4% when it was
Uncertain. Alignment was thus high in all conditions, highlighting the
students' general tendency to accept the robot's stance, but alignment in the
Uncertain condition was significantly lower than in the Certain condition.
Post-test questionnaire answers further show that the students found the robot
most convincing when it was portrayed as Certain. These findings highlight the
need for educational robots to adjust their displayed certainty to the
reliability of the information they convey, in order to promote students'
critical thinking and reduce undue influence.
What is the application? A social educational robot that discusses true-false physics questions (electric circuits) with students and argues for an answer.
Who is the user? High-school students (40 participants in the study).
What age? High-school age; the abstract gives no more precise range.
Why use AI? The robot serves as a pedagogical discussion partner; the study examines how its displayed certainty and correctness influence students' answers.
Study design: Eight true-false questions on electric circuits; the robot argues for the correct answer on six and the wrong answer on two, portraying a Certain, Neutral, or Uncertain stance through semantics, prosody, and facial signals; alignment with the robot and perceived convincingness are measured, followed by a post-test questionnaire.
