Generative Artificial Intelligence (GenAI) is reshaping higher education and
raising pressing concerns about the integrity and validity of
assessment. While assessment redesign is increasingly seen as a necessity,
there is a relative lack of literature detailing what such redesign may entail.
In this paper, we introduce assessment twins as an accessible approach for
redesigning assessment tasks to enhance validity. We use Messick's unified
validity framework to systematically map the ways in which GenAI threatens
content, structural, consequential, generalisability, and external validity.
Following this, we define assessment twins as two deliberately linked
components that address the same learning outcomes through different modes of
evidence, scheduled closely together to allow for cross-verification and
assurance of learning.
We argue that the twin approach helps mitigate validity threats by
triangulating evidence across complementary formats, such as pairing essays
with oral defences, group discussions, or practical demonstrations. We
highlight several advantages: preservation of established assessment formats,
reduction of reliance on surveillance technologies, and flexible use across
cohort sizes. To guide implementation, we propose a four-step design process:
identifying vulnerabilities, aligning outcomes, selecting complementary tasks,
and developing interdependent marking schemes. We also acknowledge the
challenges, including resource intensity, equity concerns, and the need for
empirical validation. Nonetheless, we contend that assessment twins represent a
validity-focused response to GenAI that prioritises pedagogy while supporting
meaningful student learning outcomes.
