Date
Publisher
arXiv
Accurately assessing internal human states is key to understanding
preferences, offering personalized services, and identifying challenges in
real-world applications. Originating from psychometrics, adaptive testing has
become the mainstream method for human measurement and has now been widely
applied in education, healthcare, sports, and sociology. It customizes
assessments by selecting the fewest test questions for each test-taker. However, current adaptive
testing methods face several challenges. The mechanized nature of most
algorithms invites guessing behavior and struggles with open-ended
questions. Additionally, subjective assessments suffer from noisy response data
and coarse-grained test outputs, further limiting their effectiveness. To move
closer to an ideal adaptive testing process, we propose TestAgent, a large
language model (LLM)-powered agent designed to enhance adaptive testing through
interactive engagement. This is the first application of LLMs in adaptive
testing. TestAgent supports personalized question selection, captures
test-takers' responses and anomalies, and provides precise outcomes through
dynamic, conversational interactions. Experiments on psychological,
educational, and lifestyle assessments show our approach achieves more accurate
results with 20% fewer questions than state-of-the-art baselines, and testers
preferred it for its speed, smoothness, and other qualities.
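
To make concrete what "selecting the fewest test questions" means in classical adaptive testing, below is a minimal sketch of a computerized adaptive testing loop using a two-parameter logistic (2PL) IRT model with maximum-Fisher-information item selection. The item bank, the gradient-ascent ability update, and all parameter values are illustrative assumptions for exposition only; this is not TestAgent's method, which the abstract describes as replacing parts of this rigid loop with LLM-driven conversational interaction.

```python
import numpy as np

def prob_correct(theta, a, b):
    """2PL IRT model: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at the current ability estimate."""
    p = prob_correct(theta, a, b)
    return a ** 2 * p * (1 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Pick the unused item that is most informative at the current estimate."""
    best, best_info = None, -np.inf
    for idx, (a, b) in enumerate(item_bank):
        if idx in administered:
            continue
        info = fisher_information(theta_hat, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

def update_theta(theta_hat, responses, item_bank, lr=0.1, steps=50):
    """Crude maximum-likelihood ability update by gradient ascent on the
    2PL log-likelihood (illustrative; operational CATs use EAP/MLE routines)."""
    for _ in range(steps):
        grad = 0.0
        for idx, correct in responses:
            a, b = item_bank[idx]
            p = prob_correct(theta_hat, a, b)
            grad += a * (correct - p)  # d log-likelihood / d theta
        theta_hat += lr * grad
    return theta_hat

# Hypothetical item bank: (discrimination, difficulty) pairs.
item_bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2), (0.9, -0.3)]
theta_hat, responses, administered = 0.0, [], set()

for _ in range(3):  # administer a short adaptive test
    item = select_next_item(theta_hat, item_bank, administered)
    administered.add(item)
    correct = 1  # placeholder: in practice, the test-taker's actual response
    responses.append((item, correct))
    theta_hat = update_theta(theta_hat, responses, item_bank)

print(f"Estimated ability: {theta_hat:.2f}")
```

The sketch shows the select-question, observe-response, update-estimate cycle that the abstract calls "mechanized": it assumes closed-form items and noiseless scoring, which is exactly where the paper argues an LLM agent can help (open-ended questions, anomalous responses, conversational interaction).
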
What is the application?
Who is the user?
Why use AI?
Study design
