Publisher: arXiv
In this paper, we present a novel approach for distilling math word problem
solving capabilities from large language models (LLMs) into smaller, more
efficient student models. Our approach tailors the learning experience to the
student model's weaknesses by generating targeted exercises grounded in
educational-science principles such as knowledge tracing and personalized
learning. Concretely, we let GPT-3 act as a math tutor and iteratively run two
steps: 1) assessing the student model's current learning status on a
GPT-generated exercise book, and 2) improving the student model by training it
on tailored exercises generated by GPT-3.
Experimental results reveal that our approach outperforms LLMs (e.g., GPT-3 and
PaLM) in accuracy across three distinct benchmarks while employing
significantly fewer parameters. Furthermore, we present a comprehensive
analysis of each component of our method to substantiate its efficacy.
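To make the two-step loop concrete, below is a minimal Python sketch of the assess-then-train cycle the abstract describes. Everything here is a hypothetical stand-in, not the paper's implementation: generate_exercises, StudentModel, assess, the 0.5 mastery threshold, the skill-update rule, and the number of rounds are all illustrative assumptions.

```python
import random

# Hypothetical stand-ins for the paper's components: a GPT-based exercise
# generator and a trainable student model. A real version would wrap an LLM
# API and fine-tune an actual math word problem solver.

def generate_exercises(topics, n_per_topic=5):
    """Pretend-GPT: produce (topic, problem) exercise pairs for given topics."""
    return [(t, f"{t} problem #{i}") for t in topics for i in range(n_per_topic)]

class StudentModel:
    def __init__(self):
        # Per-topic "skill" in [0, 1]; a toy proxy for knowledge-tracing state.
        self.skill = {}

    def solve_correctly(self, topic):
        # Simulated answer: correct with probability equal to current skill.
        return random.random() < self.skill.get(topic, 0.2)

    def train(self, exercises):
        # Fine-tuning stand-in: practicing a topic nudges its skill upward.
        for topic, _ in exercises:
            self.skill[topic] = min(1.0, self.skill.get(topic, 0.2) + 0.1)

def assess(student, exercise_book):
    """Step 1: estimate per-topic accuracy on the GPT-generated exercise book."""
    outcomes = {}
    for topic, _ in exercise_book:
        outcomes.setdefault(topic, []).append(student.solve_correctly(topic))
    return {t: sum(v) / len(v) for t, v in outcomes.items()}

TOPICS = ["addition", "ratios", "algebra"]
student = StudentModel()
exercise_book = generate_exercises(TOPICS, n_per_topic=20)

for round_id in range(5):  # number of rounds is an arbitrary choice here
    accuracy = assess(student, exercise_book)               # step 1: assess
    weak = [t for t, acc in accuracy.items() if acc < 0.5]  # threshold assumed
    if not weak:
        break
    student.train(generate_exercises(weak))                 # step 2: targeted training
    print(round_id, {t: round(a, 2) for t, a in accuracy.items()})
```

In a real instantiation, the assessment step would correspond to knowledge tracing over the student's solutions, and the targeted-exercise step would prompt GPT-3 to generate new problems focused on the weak topics before fine-tuning the student on them.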
