Publisher: arXiv
The rapid integration of Artificial Intelligence (AI) into K-12 STEM
education presents transformative opportunities alongside significant ethical
challenges. While AI-powered tools such as Intelligent Tutoring Systems (ITS),
automated assessments, and predictive analytics enhance personalized learning
and operational efficiency, they also risk perpetuating algorithmic bias,
eroding student privacy, and exacerbating educational inequities. This paper
examines the double-edged impact of AI in STEM classrooms, analyzing its benefits
(e.g., adaptive learning, real-time feedback) and drawbacks (e.g., surveillance
risks, pedagogical limitations) through an ethical lens. We identify critical
gaps in current AI education research, particularly the lack of
subject-specific frameworks for responsible integration, and propose a
three-phased implementation roadmap paired with a tiered professional
development model for educators. Our framework emphasizes equity-centered
design, combining technical AI literacy with ethical reasoning to foster
critical engagement among students. Key recommendations include mandatory bias
audits, low-resource adaptation strategies, and policy alignment to ensure AI
serves as a tool for inclusive, human-centered STEM education. By bridging
theory and practice, this work advances a research-backed approach to AI
integration that prioritizes pedagogical integrity, equity, and student agency
in an increasingly algorithmic world.
Keywords: Artificial Intelligence, STEM education, algorithmic bias, ethical AI, K-12 pedagogy, equity in education
