Date:
Publisher: arXiv
Automated answer grading is a critical challenge in educational technology,
with the potential to streamline assessment processes, ensure grading
consistency, and provide timely feedback to students. However, existing
approaches are often constrained to specific exam formats, lack
interpretability in score assignment, and struggle with real-world
applicability across diverse subjects and assessment types. To address these
limitations, we introduce RATAS (Rubric Automated Tree-based Answer Scoring), a
novel framework that leverages state-of-the-art generative AI models for
rubric-based grading of textual responses. RATAS is designed to support a wide
range of grading rubrics, enable subject-agnostic evaluation, and generate
structured, explainable rationales for assigned scores. We formalize the
automatic grading task through a mathematical framework tailored to
rubric-based assessment and present an architecture capable of handling
complex, real-world exam structures. To rigorously evaluate our approach, we
construct a unique, contextualized dataset derived from real-world
project-based courses, encompassing diverse response formats and varying levels
of complexity. Empirical results demonstrate that RATAS achieves high
reliability and accuracy in automated grading while providing interpretable
feedback that enhances transparency for both students and instructors.
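
The abstract does not include implementation details, but to make the rubric-tree idea concrete, here is a minimal sketch of how a tree-structured rubric with per-criterion scores and rationales might be represented and aggregated. The RubricNode class, its fields, and the example rubric are hypothetical illustrations, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class RubricNode:
    """One criterion in a rubric tree; leaves carry awarded points, internal nodes aggregate."""
    name: str
    max_points: float
    awarded: float = 0.0       # points a grader (e.g., a generative model) assigned to a leaf
    rationale: str = ""        # short justification attached to this criterion
    children: list["RubricNode"] = field(default_factory=list)

    def score(self) -> float:
        # Leaf: return the awarded points, capped at the criterion's maximum.
        if not self.children:
            return min(self.awarded, self.max_points)
        # Internal node: sum of child scores, capped at this node's maximum.
        return min(sum(c.score() for c in self.children), self.max_points)

    def explain(self, indent: int = 0) -> str:
        # Walk the tree to emit a structured, per-criterion rationale.
        line = f"{'  ' * indent}{self.name}: {self.score():.1f}/{self.max_points:.1f} {self.rationale}".rstrip()
        return "\n".join([line] + [c.explain(indent + 1) for c in self.children])

# Hypothetical usage: a two-criterion rubric for one exam question.
rubric = RubricNode("Q1", 10.0, children=[
    RubricNode("Correct method", 6.0, awarded=6.0, rationale="(method fully identified)"),
    RubricNode("Justification", 4.0, awarded=2.5, rationale="(partial reasoning given)"),
])
print(rubric.explain())  # per-criterion breakdown plus total 8.5/10.0
```

Aggregating leaf scores up the tree like this is one plausible way to obtain both a final grade and the structured, explainable rationale the abstract describes; the actual RATAS architecture may differ.
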
What is the application?
Who is the user?
What age?
Why use AI?
Study design
