Publisher: arXiv
The rise of generative large language models (LLMs) has opened new
opportunities for automating knowledge representation through concept maps,
long-standing pedagogical tools valued for fostering meaningful learning and
higher-order thinking. Traditional construction of concept maps is
labor-intensive, requiring significant expertise and time, which limits their
scalability in education. This review systematically synthesizes the emerging
body of research on LLM-enabled concept map generation, focusing on two guiding
questions: (a) What methods and technical features of LLMs are employed to
construct concept maps? (b) What empirical evidence exists to validate their
educational utility? Through a comprehensive search across major databases and
AI-in-education conference proceedings, 28 studies meeting rigorous inclusion
criteria were analyzed using thematic synthesis. Findings reveal six major
methodological categories: human-in-the-loop systems, weakly supervised
learning models, fine-tuned domain-specific LLMs, pre-trained LLMs with prompt
engineering, hybrid systems integrating knowledge bases, and modular frameworks
combining symbolic and statistical tools. Validation strategies ranged from
quantitative metrics (precision, recall, F1-score, semantic similarity) to
qualitative evaluations (expert review, learner feedback). Results indicate that
LLM-generated maps hold promise for scalable, adaptive, and pedagogically
relevant knowledge visualization, though challenges remain regarding validity,
interpretability, multilingual adaptability, and classroom integration. Future
research should prioritize interdisciplinary co-design, empirical classroom
trials, and alignment with instructional practices to realize their full
educational potential.
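
To make two of the recurring themes concrete, the sketch below pairs the prompt-engineering approach with the quantitative validation metrics named above: a pre-trained LLM is prompted to emit (concept, linking phrase, concept) triples, which are then scored against an expert-built map using precision, recall, and F1. This is a minimal illustration, not a method from any reviewed study; the prompt wording, the call_llm placeholder, and the example passage are assumptions introduced here.

"""Illustrative sketch only: prompt an LLM for concept-map triples and score
them against an expert (gold) map with precision, recall, and F1.
`call_llm` is a hypothetical placeholder, not an API from the reviewed work."""

from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (concept, linking phrase, concept)

PROMPT_TEMPLATE = (
    "Extract a concept map from the passage below. "
    "Return one triple per line as: concept | linking phrase | concept.\n\n"
    "Passage:\n{passage}"
)


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns a canned response so the sketch runs
    without any API key or specific model."""
    return (
        "photosynthesis | occurs in | chloroplasts\n"
        "photosynthesis | produces | glucose\n"
        "chlorophyll | absorbs | light energy"
    )


def parse_triples(raw: str) -> Set[Triple]:
    """Convert the model's line-oriented output into normalized triples."""
    triples: Set[Triple] = set()
    for line in raw.splitlines():
        parts = [p.strip().lower() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.add((parts[0], parts[1], parts[2]))
    return triples


def precision_recall_f1(predicted: Set[Triple], gold: Set[Triple]) -> Tuple[float, float, float]:
    """Exact-match scoring of predicted triples against the gold map."""
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    passage = ("Photosynthesis occurs in chloroplasts, where chlorophyll "
               "absorbs light energy to produce glucose.")
    predicted = parse_triples(call_llm(PROMPT_TEMPLATE.format(passage=passage)))
    gold = {
        ("photosynthesis", "occurs in", "chloroplasts"),
        ("photosynthesis", "produces", "glucose"),
        ("chlorophyll", "absorbs", "light energy"),
        ("chloroplasts", "contain", "chlorophyll"),
    }
    p, r, f1 = precision_recall_f1(predicted, gold)
    print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")

Exact triple matching is the strictest of the reported validation strategies; several studies instead relax it with semantic-similarity scoring, which this sketch does not attempt.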
