Date:
Publisher: arXiv
This paper presents a theoretical framework for addressing the challenges
posed by generative artificial intelligence (AI) in higher education assessment
through a machine-versus-machine approach. As large language models such as GPT-4, Claude, and Llama increasingly demonstrate the ability to produce sophisticated academic content, traditional assessment methods face an existential threat, with surveys indicating that 74-92% of students experiment with these tools for academic purposes. Current responses, ranging from detection software to manual
assessment redesign, show significant limitations: detection tools demonstrate
bias against non-native English writers and can be easily circumvented, while
manual frameworks rely heavily on subjective judgment and assume static AI
capabilities. This paper introduces a dual-strategy paradigm combining static
analysis and dynamic testing to create a comprehensive theoretical framework
for assessment vulnerability evaluation. The static analysis component
comprises eight theoretically justified elements: specificity and
contextualization, temporal relevance, process visibility requirements,
personalization elements, resource accessibility, multimodal integration,
ethical reasoning requirements, and collaborative elements. Each element
addresses specific limitations in generative AI capabilities, creating barriers
that distinguish authentic human learning from AI-generated simulation. The
dynamic testing component provides a complementary approach through
simulation-based vulnerability assessment, addressing limitations in
pattern-based analysis. The paper presents a theoretical framework for
vulnerability scoring, including the conceptual basis for quantitative
assessment, weighting frameworks, and threshold determination theory.
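As a rough illustration of how such a scoring framework might be operationalized, the sketch below combines the eight static-analysis elements named in the abstract into a weighted vulnerability score with a classification threshold. The element names come from the abstract; the 0-5 rating scale, the equal default weights, and the 0.6 threshold are illustrative assumptions, not values taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Eight static-analysis elements named in the abstract. The rating scale,
# default weights, and threshold below are assumptions for illustration only.
ELEMENTS = [
    "specificity_and_contextualization",
    "temporal_relevance",
    "process_visibility",
    "personalization",
    "resource_accessibility",
    "multimodal_integration",
    "ethical_reasoning",
    "collaborative_elements",
]

@dataclass
class AssessmentRating:
    """Per-element ratings: 0 = no barrier to AI completion, 5 = strong barrier."""
    scores: Dict[str, float]

def vulnerability_score(rating: AssessmentRating,
                        weights: Optional[Dict[str, float]] = None) -> float:
    """Return a normalized vulnerability score in [0, 1].

    Higher values mean the assessment is MORE vulnerable to AI completion,
    i.e. its design elements provide weaker barriers.
    """
    weights = weights or {e: 1.0 for e in ELEMENTS}
    total_weight = sum(weights[e] for e in ELEMENTS)
    # Convert each barrier rating into a vulnerability contribution.
    weighted_vuln = sum(weights[e] * (1.0 - rating.scores.get(e, 0.0) / 5.0)
                        for e in ELEMENTS)
    return weighted_vuln / total_weight

def classify(score: float, threshold: float = 0.6) -> str:
    """Map the score onto a redesign decision using an assumed threshold."""
    return ("high vulnerability - redesign recommended"
            if score >= threshold else "acceptable")

if __name__ == "__main__":
    # Example: an assessment with uniformly weak barriers (2 out of 5 each).
    ratings = AssessmentRating(scores={e: 2.0 for e in ELEMENTS})
    s = vulnerability_score(ratings)
    print(f"score={s:.2f} -> {classify(s)}")
```

In practice the weighting vector and threshold would be derived from the paper's weighting framework and threshold determination theory rather than set uniformly as done here.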
What is the application?
Who is the user?
What is the user's age?
Why use AI?
Study design
