CoGrader: Transforming Instructors' Assessment of Project Reports through Collaborative LLM Integration

Authors
Zixin Chen,
Jiachen Wang,
Yumeng Li,
Haobo Li,
Chuhan Shi,
Rong Zhang,
Huamin Qu
Publisher
arXiv
Grading project reports is increasingly significant in today's educational landscape, where such reports serve as key assessments of students' comprehensive problem-solving abilities. However, grading them remains challenging due to the multifaceted evaluation criteria involved, such as creativity and peer-comparative achievement. Meanwhile, instructors often struggle to maintain fairness throughout the time-consuming grading process. Recent advances in AI, particularly large language models, have demonstrated potential for automating simpler grading tasks, such as assessing quizzes or basic writing quality. However, these tools often fall short on complex metrics, like design innovation and the practical application of knowledge, that require an instructor's educational insight into the class situation. To address this challenge, we conducted a formative study with six instructors and developed CoGrader, which introduces a novel grading workflow combining human-LLM collaborative metrics design, benchmarking, and AI-assisted feedback. CoGrader was found effective in improving grading efficiency and consistency while providing reliable peer-comparative feedback to students. We also discuss design insights and ethical considerations for the development of human-AI collaborative grading systems.
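The workflow the abstract describes, instructor-defined metrics, LLM-assisted scoring, and peer-comparative benchmarking, can be sketched as follows. This is a minimal illustration, not CoGrader's implementation: the criterion names, weights, and the `llm_score` stub (which stands in for a real model call) are all hypothetical.

```python
# Hypothetical sketch of human-LLM collaborative grading:
# instructors define rubric criteria and weights (the "metrics design"
# step); an LLM-style scorer (stubbed here) rates each criterion; weighted
# scores are then benchmarked against the class distribution to produce
# peer-comparative feedback. All names are illustrative.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    weight: float  # instructor-assigned; weights are assumed to sum to 1.0


def llm_score(report: str, criterion: Criterion) -> float:
    """Stub for an LLM call; a real system would prompt a model here."""
    # Placeholder heuristic on a 0-10 scale: longer reports score higher.
    return min(10.0, 5.0 + len(report) / 400)


def grade(report: str, rubric: list[Criterion]) -> float:
    """Aggregate per-criterion LLM scores using instructor weights."""
    return sum(c.weight * llm_score(report, c) for c in rubric)


def percentile(score: float, class_scores: list[float]) -> float:
    """Peer-comparative benchmark: fraction of class at or below score."""
    return sum(s <= score for s in class_scores) / len(class_scores)


rubric = [Criterion("creativity", 0.4), Criterion("rigor", 0.6)]
reports = ["short draft", "a" * 800, "a" * 2000]
scores = [grade(r, rubric) for r in reports]
print(percentile(scores[1], scores))  # peer standing of the second report
```

In a real system the stub would be replaced by a prompted model call, and the percentile step would feed the peer-comparative feedback shown to students.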