Publisher: arXiv
Generative artificial intelligence (AI) systems can now reliably solve many standard tasks used in introductory physics courses, producing correct equations, graphs, and explanations. While this capability is often framed as an opportunity for efficiency or personalization, it also poses a subtle ethical and educational risk: students may increasingly submit correct results without engaging in the epistemic practices that define learning physics. This challenge has recently been described as the "boiling frog problem": because AI capabilities advance incrementally, we may not fully recognize how rapidly they are improving and may fail to respond with commensurate urgency. In this article, we argue that the central challenge of AI in physics education is not cheating or tool selection but instructional design. Drawing on research on self-regulated learning, cognitive load, multiple representations, and hybrid intelligence, we propose a practical framework for cognitively activated learning activities that structures student engagement before, during, and after AI use. Using an example from an introductory kinematics laboratory, we show how AI can be integrated in ways that preserve prediction, interpretation, and evaluation as core learning practices. Rather than treating AI as an answer-generating tool, the framework positions AI as an epistemic partner whose contributions are deliberately bounded and reflected upon.
What is the application?
Who is the user?
What is the user's age?
Why use AI?
Study design
