Publisher: arXiv
The rising use of educational tools controlled by artificial intelligence
(AI) has provoked a debate about their proficiency. While intrinsic
proficiency, especially in tasks such as grading, has been measured and studied
extensively, perceived proficiency remains underexplored. Here it is shown
through Monte Carlo multi-agent simulations that trust networks among students
influence their perceptions of the proficiency of an AI tool. A probabilistic
opinion dynamics model is constructed, in which every student's perceptions are
described by a probability density function (PDF) that is updated at every
time step through independent, personal observations and peer pressure shaped
by trust relationships. It is found that students correctly infer the AI tool's
proficiency $\theta_{\rm AI}$ in allies-only (i.e.\ high-trust) networks.
AI-avoiders reach asymptotic learning faster than AI-users, and the
asymptotic learning time for AI-users decreases as their number increases.
However, asymptotic learning is disrupted even by a single partisan, who is
stubbornly incorrect in their belief $\theta_{\rm p} \neq \theta_{\rm AI}$,
making other students' beliefs vacillate indefinitely between $\theta_{\rm p}$
and $\theta_{\rm AI}$. In opponents-only (low-trust) networks, all students
reach asymptotic learning, but only a minority infer $\theta_{\rm AI}$
correctly. AI-users have a small advantage over AI-avoiders in reaching the
right conclusion. In mixed networks, students may exhibit turbulent
nonconvergence and intermittency, or achieve asymptotic learning, depending on
the relationships between partisans and AI-users. The educational implications
of the results are discussed briefly in the context of designing robust usage
policies for AI tools, with an emphasis on the unintended and inequitable
consequences that sometimes arise from counterintuitive network effects.
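To make the model concrete, the following Python sketch (not the authors' code) implements one plausible version of the update cycle described above: each student carries a discretized PDF over candidate values of $\theta_{\rm AI}$, AI-users refine it at every time step through independent, noisy personal observations, and peer pressure is modelled as trust-weighted log-linear pooling of neighbours' PDFs. The true proficiency THETA_AI, the noise level OBS_NOISE, the mixing weight mix, and the pooling rule itself are illustrative assumptions, not the paper's exact specification.

import numpy as np

rng = np.random.default_rng(0)

THETA = np.linspace(0.0, 1.0, 101)  # discretized belief grid for theta
THETA_AI = 0.7                      # true AI proficiency (illustrative value)
OBS_NOISE = 0.1                     # observation noise (illustrative value)


def normalize(pdf):
    return pdf / pdf.sum()


class Student:
    def __init__(self, is_user, partisan_theta=None):
        self.is_user = is_user                     # AI-user vs AI-avoider
        self.partisan_theta = partisan_theta       # fixed belief theta_p, if partisan
        self.pdf = normalize(np.ones_like(THETA))  # flat prior over theta

    def observe(self):
        # Independent personal observation: AI-users Bayes-update their PDF
        # with a noisy measurement of theta_AI; avoiders and partisans skip.
        if self.partisan_theta is not None or not self.is_user:
            return
        x = rng.normal(THETA_AI, OBS_NOISE)
        likelihood = np.exp(-0.5 * ((x - THETA) / OBS_NOISE) ** 2)
        self.pdf = normalize(self.pdf * likelihood)

    def mean_belief(self):
        return float(np.dot(THETA, self.pdf))


def peer_pressure(students, trust, mix=0.3):
    # Peer pressure as trust-weighted log-linear pooling; the paper's
    # actual social-update rule may differ.
    pooled = []
    for i, s in enumerate(students):
        if s.partisan_theta is not None:
            pooled.append(s.pdf)  # partisans are stubborn: belief never moves
            continue
        log_p = (1.0 - mix) * np.log(s.pdf + 1e-300)
        w = trust[i] / max(trust[i].sum(), 1e-12)
        for j, s_j in enumerate(students):
            if w[j] > 0:
                log_p += mix * w[j] * np.log(s_j.pdf + 1e-300)
        pooled.append(normalize(np.exp(log_p - log_p.max())))
    for s, p in zip(students, pooled):
        s.pdf = p


# Allies-only network: four AI-users, one AI-avoider, one partisan
students = [Student(True) for _ in range(4)] + [Student(False)]
partisan = Student(False, partisan_theta=0.2)
partisan.pdf = normalize(np.isclose(THETA, 0.2).astype(float))  # delta at theta_p
students.append(partisan)

n = len(students)
trust = np.ones((n, n)) - np.eye(n)  # everyone trusts everyone else

for t in range(200):
    for s in students:
        s.observe()
    peer_pressure(students, trust)

print([round(s.mean_belief(), 3) for s in students])

One can then experiment with the configuration: with no partisan, all agents' PDFs should concentrate on $\theta_{\rm AI}$ (asymptotic learning), whereas the stubborn partisan at $\theta_{\rm p} = 0.2$ continually pulls the other students' beliefs away from $\theta_{\rm AI}$, mimicking the vacillation described above.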
