Date:
Publisher: arXiv
We evaluate the effectiveness of Large Language Models (LLMs) in assessing
essay quality, focusing on their alignment with human grading. More precisely,
we assess ChatGPT and Llama on the Automated Essay Scoring (AES) task, a
crucial natural language processing (NLP) application in education. We consider
both zero-shot and few-shot learning, as well as different prompting approaches. We
compare the numeric grades assigned by the LLMs to the scores provided by human
raters, using the ASAP dataset, a well-known benchmark for the AES task. Our
research reveals that both LLMs generally assign lower scores than the human
raters, and that their scores do not correlate well with the human ones. In
particular, ChatGPT tends to be harsher and more misaligned with human
evaluations than Llama. We also experiment with a
number of essay features commonly used by previous AES methods, related to
length, usage of connectives and transition words, and readability metrics,
including the number of spelling and grammar mistakes. We find that, generally,
none of these features correlates strongly with human or LLM scores. Finally,
we report results on Llama 3, which are generally better across the board, as
expected. Overall, while LLMs do not seem to be an adequate replacement for human
grading, our results are somewhat encouraging for their future use as a tool to
assist humans in grading written essays.
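To make the evaluation setup concrete, below is a minimal sketch of a zero-shot scoring loop: prompt an LLM for a single numeric grade per essay and compare the results to the human scores with a rank correlation. The model id, prompt wording, file name, and column labels are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a zero-shot AES evaluation: LLM-assigned grades vs. human scores.
# The prompt text, model id, and data layout are assumptions for illustration.
import re
import pandas as pd
from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm_score(essay: str, low: int = 0, high: int = 12) -> float | None:
    """Ask the model for a numeric grade on the essay's scale, zero-shot."""
    prompt = (
        f"Grade the following essay on a scale from {low} to {high}. "
        f"Reply with the number only.\n\nEssay:\n{essay}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    match = re.search(r"\d+(\.\d+)?", response.choices[0].message.content)
    return float(match.group()) if match else None

# Hypothetical ASAP-style file with "essay" and "human_score" columns.
df = pd.read_csv("asap_prompt1.csv")
df["llm_score"] = pd.to_numeric(df["essay"].apply(llm_score), errors="coerce")

valid = df.dropna(subset=["llm_score"])
rho, p = spearmanr(valid["human_score"], valid["llm_score"])
print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")
print(f"Mean human score: {valid['human_score'].mean():.2f}, "
      f"mean LLM score: {valid['llm_score'].mean():.2f}")
```

A few-shot variant would prepend a handful of already-scored example essays to the same prompt; agreement metrics such as quadratic weighted kappa, commonly reported on ASAP, can be computed analogously (e.g., with scikit-learn's cohen_kappa_score using quadratic weights).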
What is the application?
Who is the user?
What age?
Why use AI?
Study design
