Publisher: arXiv
Generative artificial intelligence (GenAI) tools such as OpenAI's ChatGPT are
transforming the educational landscape, prompting reconsideration of
traditional assessment practices. In parallel, universities are exploring
alternatives to in-person, closed-book examinations, raising concerns about
academic integrity and pedagogical alignment in uninvigilated settings. This
study investigates whether traditional closed-book mathematics examinations
retain their pedagogical relevance when hypothetically administered in
uninvigilated, open-book settings with GenAI access. Adopting an empirical
approach, we generate, transcribe, and blind-mark GenAI submissions to eight
undergraduate mathematics examinations at a Russell Group university, spanning
the entirety of the first-year curriculum. By combining independent GenAI
responses to individual questions, we enable a meaningful evaluation of GenAI
performance, both at the level of modules and across the first-year curriculum.
We find that GenAI attainment is at the level of a first-class degree, though
current performance can vary between modules. Further, we find that GenAI
performance is remarkably consistent when viewed across the entire curriculum,
significantly more so than that of students in invigilated examinations. Our
findings evidence the need for redesigning assessments in mathematics for
unsupervised settings, and highlight the potential reduction in pedagogical
value of current standards in the era of generative artificial intelligence.
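The aggregation step described in the abstract, combining independent GenAI responses to individual questions into module-level and curriculum-level results, can be illustrated with a short sketch. The following is not the authors' code: the module names, per-question marks, and the use of a simple unweighted mean are all hypothetical stand-ins (the study used blind marking of transcribed submissions across eight modules). It only shows how per-question marks might roll up into module scores, and how spread across modules could serve as a crude consistency measure.

    # Minimal sketch, assuming hypothetical blind-marked GenAI marks per
    # question (percent). Four invented modules shown; the study covered eight.
    from statistics import mean, stdev

    module_marks = {
        "Analysis":       [72, 68, 81, 75],
        "Linear Algebra": [70, 77, 74, 69],
        "Probability":    [66, 73, 70, 78],
        "Mechanics":      [80, 71, 76, 74],
    }

    # Combine independent per-question marks into one score per module.
    scores = {module: mean(marks) for module, marks in module_marks.items()}

    # Curriculum-level attainment and a simple consistency measure:
    # a small standard deviation across modules indicates uniform performance.
    curriculum_mean = mean(scores.values())
    curriculum_spread = stdev(scores.values())

    print(f"curriculum mean: {curriculum_mean:.1f}%")
    print(f"spread across modules: {curriculum_spread:.1f} points")

Under this toy setup, the same spread computation applied to student cohort marks would allow the consistency comparison the abstract reports, where GenAI's spread across the curriculum was found to be significantly smaller than students'.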
