Visual Reasoning Benchmark: Evaluating Multimodal LLMs on Classroom-Authentic Visual Problems from Primary Education

Authors
Mohamed Huti,
Alasdair Mackintosh,
Amy Waldock,
Dominic Andrews,
Maxime Lelièvre,
Moritz Boos,
Tobias Murray,
Paul Atherton,
Robin A. A. Ince,
Oliver G. B. Garrod
Publisher
arXiv
AI models have achieved state-of-the-art results in textual reasoning; however, their ability to reason over spatial and relational structures remains a critical bottleneck, particularly in early-grade maths, which relies heavily on visuals. This paper introduces the Visual Reasoning Benchmark (VRB), a novel dataset designed to evaluate Multimodal Large Language Models (MLLMs) on their ability to solve authentic visual problems from classrooms. The benchmark comprises 701 questions sourced from primary school examinations in Zambia and India, covering tasks such as reasoning by analogy, pattern completion, and spatial matching. We outline the methodology and development of the benchmark, which intentionally uses unedited, minimal-text images to test whether models can meet the realistic needs of primary education. Our findings reveal a "jagged frontier" of capability: models demonstrate stronger proficiency in static skills such as counting and scaling, but hit a distinct "spatial ceiling" when faced with dynamic operations like folding, reflection, and rotation. These weaknesses pose a risk for classroom use on visual reasoning problems, with the potential for incorrect marking, false scaffolding, and the reinforcement of student misconceptions. Consequently, education-focused benchmarks like the VRB are essential for determining the functional boundaries of multimodal tools used in classrooms.
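The "jagged frontier" described in the abstract is, in essence, a per-skill breakdown of accuracy. A minimal sketch of how such a breakdown might be computed is shown below; the record format (`skill`, `predicted`, `gold` triples) and the example scores are hypothetical illustrations, not the paper's actual data schema or results.

```python
from collections import defaultdict

def accuracy_by_skill(results):
    """Compute per-skill accuracy from (skill, predicted, gold) records.

    Grouping scores by skill category is what exposes a "jagged frontier":
    high accuracy on some categories, low accuracy on others.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for skill, predicted, gold in results:
        total[skill] += 1
        if predicted == gold:
            correct[skill] += 1
    return {skill: correct[skill] / total[skill] for skill in total}

# Hypothetical answer records illustrating the pattern the paper reports:
# stronger on a static skill (counting) than a dynamic one (rotation).
results = [
    ("counting", "B", "B"), ("counting", "C", "C"), ("counting", "A", "D"),
    ("rotation", "A", "C"), ("rotation", "D", "C"), ("rotation", "B", "B"),
]
print(accuracy_by_skill(results))  # e.g. {'counting': 0.67, 'rotation': 0.33}
```

In a real evaluation the records would come from model responses to the 701 VRB items, with the skill label taken from each question's task category.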