Date:
Publisher: arXiv
Since the advent of GPT-3.5 in 2022, Generative Artificial Intelligence (AI)
has shown tremendous potential in STEM education, particularly in providing
real-time, customized feedback to students in large-enrollment courses. A
crucial skill that mediates effective use of AI is the systematic structuring
of natural language instructions to AI models, commonly referred to as prompt
engineering. This study has three objectives: (i) to investigate the
sophistication of student-generated prompts when seeking feedback from AI on
their arguments, (ii) to examine the features that students value in
AI-generated feedback, and (iii) to analyze trends in student preferences for
feedback generated from self-crafted prompts versus prompts incorporating
prompt engineering techniques and principles of effective feedback. Results
indicate that student-generated prompts typically reflect only a subset of
foundational prompt engineering techniques. Despite this lack of
sophistication, reflected in shortcomings such as incomplete descriptions of
task context, AI responses demonstrated contextual intuitiveness by accurately
inferring context from the overall content of the prompt. We also identified
12 distinct features to which students attribute the usefulness of
AI-generated feedback, spanning four
broader themes: Evaluation, Content, Presentation, and Depth. Finally, results
show that students overwhelmingly prefer feedback generated from structured
prompts, particularly those combining prompt engineering techniques with
principles of effective feedback. Implications of these results, such as
integrating the principles of effective feedback into the design and delivery
of feedback through AI systems and incorporating prompt engineering into
introductory physics courses, are discussed.
What is the application?
Who is the user?
What age?
Why use AI?
Study design
