Publisher: arXiv
This study explores how generative artificial intelligence, specifically
ChatGPT, can assist in the evaluation of laboratory reports in Experimental
Physics. Two interaction modalities were implemented: an automated API-based
evaluation and a customized ChatGPT configuration designed to emulate
instructor feedback. The analysis focused on two complementary
dimensions: formal and structural integrity, and technical accuracy and
conceptual depth. Findings indicate that ChatGPT provides consistent feedback
on organization, clarity, and adherence to scientific conventions, while its
evaluation of technical reasoning and interpretation of experimental data
remains less reliable. Each modality exhibited distinctive limitations,
particularly in processing graphical and mathematical information. The study
contributes to understanding how the use of AI in evaluating laboratory reports
can inform feedback practices in experimental physics, highlighting the
importance of teacher supervision to ensure the validity of physical reasoning
and the accurate interpretation of experimental results.
What is the application?
Who is the user?
What is the user's age?
Why use AI?
Study design
