Date:
Publisher: arXiv
Large language models (LLMs) can now generate physics practice problems in
real time, yet the educational value of these items hinges on rapid, reliable
post-generation vetting. We investigated which automated checks are both
technically feasible and pedagogically meaningful when exercises are produced
on demand within a chatbot interface. A cohort of 34 introductory-physics
students generated and attempted 543 problems during exam preparation. Each
item was labeled by an expert on a wide range of quality attributes and
presented to the learners in pairs to record their preferences. We then (i)
benchmarked three commodity LLMs as "judges" against the expert labels, (ii)
quantified which attributes predict student choice via random-forest models,
and (iii) triangulated these results with free-form exit surveys. Only a small
subset of the original rubric proved necessary to capture student preferences
reliably, either directly or by proxy. The study demonstrates that scalable
formative assessment does not require exhaustive scoring: a carefully curated
core of structural and learner-visible checks is sufficient to ensure both
technical soundness and user appeal. The findings provide a practical blueprint
for deploying real-time, AI-generated practice in physics and other
quantitative disciplines.
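
The abstract describes benchmarking commodity LLMs as "judges" against expert labels on the rubric attributes. The sketch below shows one common way such agreement is quantified (raw agreement plus Cohen's kappa per attribute); the column names, attribute list, and CSV layout are illustrative assumptions, not the paper's actual data format.

```python
# Minimal sketch: agreement between an LLM "judge" and expert labels,
# computed per rubric attribute. File name, column naming scheme, and the
# attribute list below are assumptions for illustration only.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

labels = pd.read_csv("item_labels.csv")  # hypothetical: one row per generated problem

ATTRIBUTES = ["solvable", "correct_solution", "clear_statement", "on_topic"]

for attr in ATTRIBUTES:
    expert = labels[f"expert_{attr}"]
    judge = labels[f"llm_{attr}"]
    raw = (expert == judge).mean()           # fraction of items where they agree
    kappa = cohen_kappa_score(expert, judge)  # chance-corrected agreement
    print(f"{attr:>20}: raw agreement = {raw:.2f}, Cohen's kappa = {kappa:.2f}")
```

The abstract also mentions random-forest models used to quantify which attributes predict student choice in the pairwise comparisons. A minimal sketch of that kind of analysis follows, assuming each comparison is encoded as the attribute difference between the two items and the target is which item the student chose; again, the feature names and file layout are hypothetical.

```python
# Minimal sketch of an attribute-importance analysis: a random forest predicts
# which item of a pair the student preferred from the difference in
# expert-labeled attributes, and feature importances rank the attributes.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

pairs = pd.read_csv("preference_pairs.csv")  # hypothetical: one row per A/B comparison

ATTRIBUTES = ["solvable", "correct_solution", "clear_statement",
              "appropriate_difficulty", "on_topic"]

# Encode each pair as (attributes of item A) minus (attributes of item B);
# the target is 1 if the student preferred item A, else 0.
X = (pairs[[f"a_{a}" for a in ATTRIBUTES]].values
     - pairs[[f"b_{a}" for a in ATTRIBUTES]].values)
y = pairs["chose_a"].values

model = RandomForestClassifier(n_estimators=500, random_state=0)
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
for attr, importance in sorted(zip(ATTRIBUTES, model.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{attr:>24}: {importance:.3f}")
```

In this kind of setup, a small number of attributes dominating the importance ranking is what would support the paper's claim that only a curated core of checks is needed.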
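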
What is the application?
Who is the user?
What age?
Why use AI?
