Publisher: arXiv
Multimodal large language models (MLLMs) have demonstrated remarkable reasoning
capabilities across a variety of visual tasks. However, their abilities in K12
scenarios remain systematically underexplored. Previous studies suffer from
limitations including narrow subject coverage, insufficient data scale, a lack
of diversity in question types, and naive answer-centric evaluation methods,
leaving model capabilities insufficiently explored. To address these
gaps, we propose K12Vista, the most comprehensive multimodal benchmark for
Chinese K12 subject knowledge understanding and reasoning to date, featuring
33,000 questions across five core subjects from primary to high school and
three question types. Beyond the final outcome, we are also concerned with the
correctness of MLLMs' reasoning processes. To this end, we meticulously compile
errors from MLLMs' reasoning processes and leverage an automated data pipeline
to construct K12-PEM-800K, the largest process evaluation dataset, offering
detailed step-by-step judgement annotations for MLLMs' reasoning. We then
develop K12-PEM, an advanced process evaluation model that integrates an
overall assessment of both reasoning-process and answer correctness. We also
introduce K12-PEBench, the
first high-quality, human-annotated benchmark specifically designed for
evaluating the ability to assess reasoning processes. Extensive experiments
reveal that current MLLMs exhibit significant flaws when reasoning within
K12Vista, providing critical insights for the development of more capable
MLLMs. We release our resources at https://github.com/lichongod/K12Vista.
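To make the notion of step-by-step judgement annotations concrete, below is a minimal Python sketch of what one process evaluation record might look like. The class and field names (StepJudgement, ProcessEvaluation, process_score) are hypothetical illustrations, not the actual K12-PEM-800K schema or the K12-PEM interface from the repository.

```python
# Hypothetical sketch of a step-level process evaluation record; the real
# K12-PEM-800K annotation schema may differ.
from dataclasses import dataclass


@dataclass
class StepJudgement:
    step_index: int   # position of the step in the model's reasoning chain
    step_text: str    # the reasoning step produced by the MLLM
    is_correct: bool  # step-level verdict from the process evaluation model


@dataclass
class ProcessEvaluation:
    steps: list[StepJudgement]  # step-by-step judgements
    answer_correct: bool        # verdict on the final answer

    def process_score(self) -> float:
        """Fraction of reasoning steps judged correct (one plausible metric)."""
        if not self.steps:
            return 0.0
        return sum(s.is_correct for s in self.steps) / len(self.steps)


# Example: a flawed intermediate step despite a correct-looking final answer.
evaluation = ProcessEvaluation(
    steps=[
        StepJudgement(0, "The triangle's angles sum to 360 degrees.", False),
        StepJudgement(1, "So the missing angle is 60 degrees.", True),
    ],
    answer_correct=True,
)
print(f"process score: {evaluation.process_score():.2f}, "
      f"answer correct: {evaluation.answer_correct}")
```

The example deliberately shows a solution whose final answer is judged correct while an intermediate step is not, the kind of discrepancy that purely answer-centric evaluation would miss.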
