Are LLMs Ready for English Standardized Tests? A Benchmarking and Elicitation Perspective

Authors
Luoxi Tang,
Tharunya Sundar,
Shuai Yang,
Ankita Patra,
Manohar Chippada,
Giqi Zhao,
Yi Li,
Riteng Zhang,
Tunan Zhao,
Ting Yang,
Yuqiao Meng,
Weicheng Ma,
Zhaohan Xi
Date
Publisher
arXiv
AI is transforming education by enabling powerful tools that enhance learning experiences. Among recent advancements, large language models (LLMs) hold particular promise for revolutionizing how learners interact with educational content. In this work, we investigate the potential of LLMs to support standardized test preparation by focusing on English Standardized Tests (ESTs). Specifically, we assess their ability to generate accurate and contextually appropriate solutions across a diverse set of EST question types. We introduce ESTBOOK, a comprehensive benchmark designed to evaluate the capabilities of LLMs in solving EST questions. ESTBOOK aggregates five widely recognized tests, encompassing 29 question types and 10,576 questions across multiple modalities, including text, images, audio, tables, and mathematical symbols. Using ESTBOOK, we systematically evaluate both the accuracy and inference efficiency of LLMs. Additionally, we propose a breakdown analysis framework that decomposes complex EST questions into task-specific solution steps. This framework allows us to isolate and assess LLM performance at each stage of the reasoning process. Evaluation findings offer insights into the capability of LLMs in educational contexts and point toward targeted strategies for improving their reliability as intelligent tutoring systems.
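The breakdown analysis described in the abstract can be pictured as a per-step scoring loop: each question is decomposed into task-specific steps, the model is queried at every step, and accuracy is recorded per stage as well as for the final answer. The sketch below is a minimal illustration only, not the authors' released code; the `ESTQuestion` schema, the step labels, and the `query_model` stub are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ESTQuestion:
    # Hypothetical record; ESTBOOK's real data schema is not given in the abstract.
    test: str                     # e.g. "TOEFL", "IELTS"
    question_type: str            # one of the benchmark's 29 question types
    prompt: str                   # the question as shown to the model
    reference_answer: str         # gold final answer
    # Task-specific solution steps paired with their expected intermediate outputs.
    steps: list = field(default_factory=list)   # [(step_name, expected_output), ...]

def query_model(instruction: str) -> str:
    """Placeholder for an LLM call; swap in a real API client in practice."""
    return "stub answer"

def breakdown_accuracy(question: ESTQuestion) -> dict:
    """Score the model on each decomposed step, then on the final answer."""
    results = {}
    for step_name, expected in question.steps:
        prediction = query_model(f"{question.prompt}\n\nStep: {step_name}")
        results[step_name] = prediction.strip().lower() == expected.strip().lower()
    final = query_model(question.prompt)
    results["final_answer"] = (
        final.strip().lower() == question.reference_answer.strip().lower()
    )
    return results

# Toy usage with a single reading-comprehension item.
q = ESTQuestion(
    test="TOEFL",
    question_type="reading_multiple_choice",
    prompt="Read the passage and choose the best answer: ...",
    reference_answer="B",
    steps=[("locate_evidence", "paragraph 2"), ("eliminate_distractors", "A, C, D")],
)
print(breakdown_accuracy(q))
```

Reporting results per step in this way is what lets the benchmark isolate which stage of the reasoning process a model fails at, rather than only measuring end-to-end answer accuracy.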