Publisher: arXiv
As design thinking education grows in secondary and tertiary contexts,
educators face the challenge of evaluating creative artefacts that combine
visual and textual elements. Traditional rubric-based assessment is
time-consuming and often inconsistent because it relies on Teaching Assistants
(TAs) in large, multi-section cohorts. This paper presents an exploratory study
investigating the reliability and perceived accuracy of AI-assisted assessment
compared to TA-assisted assessment in evaluating student posters in design
thinking education. Two activities were conducted with 33 Ministry of Education
(MOE) Singapore school teachers to (1) compare AI-generated scores with TA
grading across three key dimensions: empathy and user understanding,
identification of pain points and opportunities, and visual communication, and
(2) examine teacher preferences for AI-assigned, TA-assigned, and hybrid
scores. Results showed low statistical agreement between TA and AI
scores for empathy and pain points, with slightly higher alignment for visual
communication. Teachers preferred TA-assigned scores in six of ten samples.
Qualitative feedback highlighted the potential of AI for formative feedback,
consistency, and student self-reflection, but raised concerns about its
limitations in capturing contextual nuance and creative insight. The study
underscores the need for hybrid assessment models that integrate computational
efficiency with human insights. This research contributes to the evolving
conversation on responsible AI adoption in creative disciplines, emphasizing
the balance between automation and human judgment for scalable and
pedagogically sound assessment.
What is the application?
Who is the user?
What age?
Why use AI?
Study design
