Publisher: arXiv
Generative artificial intelligence (GenAI) holds great promise as a tool to
support personalized learning. Teachers need tools to efficiently and
effectively enhance the readability of educational texts so that they match
individual students' reading levels while retaining key details.
Large Language Models (LLMs) show potential to fill this need, but previous
research notes multiple shortcomings in current approaches. In this study, we
introduced a generalized approach and metrics for systematically evaluating
the accuracy and consistency with which LLMs, prompting techniques, and a
novel multi-agent architecture simplify sixty informational reading passages,
reducing each from the twelfth-grade level down to the eighth-, sixth-, and
fourth-grade levels. For each passage, we calculated how accurately each LLM
and prompting technique achieved the targeted grade level, the percentage
change in word count, and the consistency with which keywords and key phrases
were maintained (semantic similarity). One-sample t-tests and multiple
regression models revealed significant differences in the best-performing LLM
and prompting technique for each of the four metrics. Both LLMs and prompting
techniques demonstrated variable utility in grade-level accuracy and in
preserving keywords and key phrases when leveling content down to the
fourth-grade reading level. These results demonstrate the promise of LLMs for
efficient and precise automated text simplification, the shortcomings of
current models and prompting methods in attaining an ideal balance across
evaluation criteria, and a generalizable method for evaluating future systems.
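The abstract names three passage-level metrics: accuracy in hitting the
targeted grade level, percentage change in word count, and semantic
similarity of keywords and key phrases. A minimal sketch of how such metrics
could be computed, assuming the textstat and sentence-transformers packages
and Flesch-Kincaid as the readability measure; the paper's exact readability
formula, embedding model, and keyword-extraction method are not given here:

```python
import textstat
from sentence_transformers import SentenceTransformer, util

# Assumed embedding model; the paper does not specify one.
_model = SentenceTransformer("all-MiniLM-L6-v2")

def grade_level_error(simplified: str, target_grade: float) -> float:
    """Signed deviation of the simplified passage's readability from the
    targeted grade level (Flesch-Kincaid assumed as the measure)."""
    return textstat.flesch_kincaid_grade(simplified) - target_grade

def word_count_change(original: str, simplified: str) -> float:
    """Percentage change in word count after simplification."""
    before, after = len(original.split()), len(simplified.split())
    return 100.0 * (after - before) / before

def semantic_similarity(original: str, simplified: str) -> float:
    """Cosine similarity between passage embeddings, a proxy for how well
    keywords and key phrases are preserved."""
    emb = _model.encode([original, simplified], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]).item())
```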
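The analysis relies on one-sample t-tests (alongside multiple regression)
across the sixty passages. A hedged illustration with scipy, testing whether
one LLM/prompt condition's mean grade-level error differs from zero; the
variable and values are hypothetical, not the study's data:

```python
from scipy import stats

# Hypothetical per-passage grade-level errors for one condition, e.g.
# grade_level_error() applied to each of the sixty simplified passages.
errors = [0.4, -1.2, 0.8, 0.1, -0.5]  # illustrative values only

# H0: mean error is zero, i.e. the condition hits the targeted grade
# level on average across passages.
result = stats.ttest_1samp(errors, popmean=0.0)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

A regression relating the metrics to model and prompting-technique factors
could be fit analogously, though the authors' exact specification is not
given in the abstract.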
What is the application?
Who is the user?
Why use AI?
Study design