The Aftermath of DrawEduMath: Vision Language Models Underperform with Struggling Students and Misdiagnose Errors

Authors: Li Lucy, Albert Zhang, Nathan Anderson, Ryan Knight, Kyle Lo
Date:
Publisher: arXiv
Effective mathematics education requires identifying and responding to students' mistakes. For AI to support pedagogical applications, models must perform well across different levels of student proficiency. Our work provides an extensive, year-long snapshot of how 11 vision-language models (VLMs) perform on DrawEduMath, a QA benchmark involving real students' handwritten, hand-drawn responses to math problems. We find that models' weaknesses concentrate on a core component of math education: student error. All evaluated VLMs underperform when describing work from students who require more pedagogical help, and across all QA, they struggle the most on questions related to assessing student error. Thus, while VLMs may be optimized to be math problem solving experts, our results suggest that they require alternative development incentives to adequately support educational use cases.