Date:
Publisher: arXiv
While large language models (LLMs) challenge conventional methods of teaching and learning, they present an exciting opportunity to improve efficiency and scale high-quality instruction. One promising application is the generation of customized exams tailored to specific course content. There has been significant recent excitement about automatically generating questions using artificial intelligence, but comparatively little work evaluating the psychometric quality of these items in real-world educational settings. Filling this gap is an important step toward understanding generative AI's role in effective test design. In this study, we introduce and evaluate an iterative refinement strategy for question generation, repeatedly producing, assessing, and improving questions through cycles of LLM-generated critique and revision. We evaluate the quality of these AI-generated questions in a large-scale field study involving 91 classes (covering computer science, mathematics, chemistry, and more) at dozens of colleges across the United States, comprising nearly 1,700 students. Our analysis, based on item response theory (IRT), suggests that, for students in our sample, the AI-generated questions performed comparably to expert-created questions designed for standardized exams. Our results illustrate the power of AI to make high-quality assessments more readily available, benefiting both teachers and students.
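The abstract describes the core method only at a high level: generate a question, critique it with an LLM, and revise it over several rounds. The sketch below illustrates one plausible shape of such a loop; the `llm` callable, prompt wording, round limit, and convergence check are all assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch of an iterative generate-critique-revise loop (illustrative only).
# `llm` is a hypothetical text-in/text-out function; the prompts and the
# stopping heuristic are assumptions, not details taken from the paper.
from typing import Callable


def refine_question(llm: Callable[[str], str], course_content: str,
                    max_rounds: int = 3) -> str:
    """Draft an exam question, then repeatedly critique and revise it."""
    question = llm(f"Write one exam question covering:\n{course_content}")
    for _ in range(max_rounds):
        critique = llm(
            "Critique this exam question for clarity, difficulty, and "
            f"alignment with the course content:\n{question}"
        )
        if "no issues" in critique.lower():  # assumed convergence signal
            break
        question = llm(
            "Revise the question to address this critique.\n"
            f"Question: {question}\nCritique: {critique}"
        )
    return question
```

In practice, any LLM client could be wrapped to fit the `llm` signature, and the critique step could score multiple criteria separately rather than returning free text.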
What is the application?
Who is the user?
What age?
Why use AI?
Study design
