Improve LLM-based Automatic Essay Scoring with Linguistic Features

Authors
Zhaoyi Joey Hou,
Alejandro Ciuba,
Xiang Lorraine Li
Publisher
arXiv
Abstract
Automatic Essay Scoring (AES) assigns scores to student essays, reducing the grading workload for instructors. Developing a scoring system that handles essays across diverse prompts is challenging because of the flexible, open-ended nature of the writing task. Existing methods typically fall into two categories: supervised feature-based approaches and large language model (LLM)-based methods. Supervised feature-based approaches often achieve higher performance but require resource-intensive training. In contrast, LLM-based methods are computationally efficient during inference but tend to suffer from lower performance. This paper combines the two by incorporating linguistic features into LLM-based scoring. Experimental results show that this hybrid method outperforms baseline models on both in-domain and out-of-domain writing prompts.
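To make the hybrid idea concrete, here is a minimal sketch of how precomputed linguistic features might be injected into an LLM scoring prompt. The specific features (word count, average sentence length, type-token ratio) and the prompt wording are illustrative assumptions, not the paper's actual feature set or prompt; the `build_scoring_prompt` helper is hypothetical.

```python
import re

def extract_linguistic_features(essay: str) -> dict:
    """Compute simple surface-level linguistic features.

    These particular features are illustrative only; the paper's
    feature set is not reproduced here.
    """
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay)
    unique_words = {w.lower() for w in words}
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(unique_words) / max(len(words), 1),
    }

def build_scoring_prompt(essay: str, rubric: str) -> str:
    """Embed the computed features in the text given to an LLM scorer.

    The prompt template is a made-up example of the general pattern:
    feature values are serialized into the context alongside the essay.
    """
    feats = extract_linguistic_features(essay)
    feature_lines = "\n".join(
        f"- {name}: {value:.2f}" if isinstance(value, float) else f"- {name}: {value}"
        for name, value in feats.items()
    )
    return (
        "Score the following essay on a 1-6 scale.\n"
        f"Rubric: {rubric}\n"
        f"Linguistic features of the essay:\n{feature_lines}\n"
        f"Essay:\n{essay}\n"
        "Score:"
    )

essay = "The quick brown fox jumps. It jumps again over the lazy dog."
print(extract_linguistic_features(essay))
```

The resulting prompt string would then be sent to whatever LLM backs the scorer; only the feature extraction and prompt assembly are shown here, since the model call itself depends on the deployment.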