Date:
Publisher: arXiv
Large language models (LLMs) have demonstrated significant potential as
educational tutoring agents, capable of tailoring hints, orchestrating lessons,
and grading with near-human finesse across various academic domains. However,
current LLM-based educational systems exhibit critical limitations in promoting
genuine critical thinking, failing on over one-third of multi-hop questions
with counterfactual premises, and remaining vulnerable to adversarial prompts
that trigger biased or factually incorrect responses. To address these gaps, we
propose EDU-Prompting, a novel multi-agent framework that bridges established
educational critical thinking theories with LLM agent design to generate
critical, bias-aware explanations while fostering diverse perspectives. Our
systematic evaluation across theoretical benchmarks and practical college-level
critical writing scenarios demonstrates that EDU-Prompting significantly
enhances both content truthfulness and logical soundness in AI-generated
educational responses. The framework's modular design enables seamless
integration into existing prompting frameworks and educational applications,
allowing practitioners to directly incorporate critical thinking catalysts that
promote analytical reasoning and introduce multiple perspectives without
requiring extensive system modifications.
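The abstract describes EDU-Prompting only at a high level, so the following is a minimal illustrative sketch, not the authors' implementation. The agent roles (draft, audit, perspectives, revise) and the `llm` stub are assumptions made for demonstration; a real system would replace the stub with an actual model call.

```python
# Illustrative sketch of a multi-agent "critical thinking catalyst"
# pipeline in the spirit of EDU-Prompting. All names below are
# hypothetical; the paper does not publish this code.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[model response to: {prompt[:40]}...]"

def edu_prompting(question: str) -> dict:
    """Draft an answer, audit it for bias and false premises,
    gather alternative viewpoints, then revise."""
    draft = llm(f"Answer the student's question: {question}")
    audit = llm(
        "Check this answer for factual errors, hidden biases, and "
        f"unstated counterfactual premises:\n{draft}"
    )
    perspectives = llm(
        "Offer two alternative viewpoints a critical thinker should "
        f"weigh against this answer:\n{draft}"
    )
    final = llm(
        "Revise the answer using the audit and the alternative viewpoints.\n"
        f"Answer: {draft}\nAudit: {audit}\nViewpoints: {perspectives}"
    )
    return {"draft": draft, "audit": audit,
            "perspectives": perspectives, "final": final}

result = edu_prompting("Why did the Roman Empire fall?")
```

Because each agent is just a separately prompted call, such a stage can be appended to an existing prompting pipeline without modifying the base system, which matches the modularity claim in the abstract.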
What is the application?
Who is the user?
What age group?
Why use AI?
Study design
