Publisher: arXiv
Providing individualized scaffolding for physics problem solving at scale
remains an instructional challenge. We investigate (1) students' perceptions of
a Socratic Artificial Intelligence (AI) chatbot's impact on problem-solving
skills and confidence and (2) how the specificity of students' questions during
tutoring relates to performance. We deployed a custom Socratic AI chatbot in a
large-enrollment introductory mechanics course at a Midwestern public
university, logging full dialogue transcripts from 150 first-year STEM majors.
Post-interaction surveys revealed median ratings of 4.0/5 for knowledge-based
skills and 3.4/5 for overall effectiveness. Transcript analysis showed question
specificity rose from approximately 10-15% in the first turn to 100% by the
final turn, and specificity correlated positively with self-reported expected
course grade (Pearson r = 0.43). These findings demonstrate that AI-driven
Socratic dialogue not only fosters expert-like reasoning but also generates
fine-grained analytics for physics education research, establishing a scalable
dual-purpose tool for instruction and learning analytics.
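To make the reported analysis concrete, the sketch below (not the authors' code) shows, on toy data, how two of the quantities in the abstract could be computed from coded transcripts: the share of specific questions per dialogue turn, and the Pearson correlation between a student's overall question specificity and their self-reported expected course grade. The field names, grade scale, and records are assumptions made for illustration only.

# Illustrative sketch in Python; field names and toy records are assumed.
from statistics import correlation, mean  # correlation() requires Python 3.10+
from collections import defaultdict

# Toy coded transcript: one record per student question.
# is_specific = 1 if the question was coded as specific, else 0.
coded_questions = [
    {"student": "s1", "turn": 1, "is_specific": 0},
    {"student": "s1", "turn": 2, "is_specific": 1},
    {"student": "s1", "turn": 3, "is_specific": 1},
    {"student": "s2", "turn": 1, "is_specific": 0},
    {"student": "s2", "turn": 2, "is_specific": 0},
    {"student": "s2", "turn": 3, "is_specific": 1},
    {"student": "s3", "turn": 1, "is_specific": 1},
    {"student": "s3", "turn": 2, "is_specific": 1},
    {"student": "s3", "turn": 3, "is_specific": 1},
]

# Self-reported expected course grade on an assumed 4-point scale.
expected_grade = {"s1": 3.7, "s2": 3.0, "s3": 4.0}

# (1) Specificity rate by dialogue turn.
by_turn = defaultdict(list)
for q in coded_questions:
    by_turn[q["turn"]].append(q["is_specific"])
for turn in sorted(by_turn):
    print(f"turn {turn}: {mean(by_turn[turn]):.0%} specific")

# (2) Pearson r between per-student specificity and expected grade.
by_student = defaultdict(list)
for q in coded_questions:
    by_student[q["student"]].append(q["is_specific"])
students = sorted(by_student)
specificity = [mean(by_student[s]) for s in students]
grades = [expected_grade[s] for s in students]
print("Pearson r =", round(correlation(specificity, grades), 2))

On this toy data the per-turn specificity rises across turns and r is positive, mirroring the qualitative pattern described above; the values reported in the study come from the full 150-student transcript corpus.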
What is the application?
Who is the user?
What age?
Why use AI?
Study design
