Publisher: arXiv
The influence of Artificial Intelligence (AI), and specifically Large
Language Models (LLMs), on education is continuously increasing. These models
are frequently used by students, raising the question of whether current
forms of assessment are still a valid way to evaluate student performance and
comprehension. The theoretical framework developed in this paper is grounded in
Constructive Alignment (CA) theory and Bloom's taxonomy for defining learning
objectives. We argue that AI influences learning objectives at different Bloom
levels in different ways, and that assessment has to be adapted accordingly.
Furthermore, in line with Bloom's vision, formative and summative assessment
should be aligned on whether the use of AI is permitted or not.
Although lecturers tend to agree that education and assessment need to be
adapted to the presence of AI, a strong bias exists regarding the extent to
which lecturers are willing to allow AI in assessment. This bias stems from a
lecturer's familiarity with AI, and specifically from whether they use it
themselves. To avoid this bias, we propose structured guidelines at the
university or faculty level to foster alignment among staff. In addition, we
argue that teaching staff should be trained on the capabilities and
limitations of AI tools, so that they are better able to adapt their
assessment methods.
