Publisher: arXiv
Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback on responses to such questions is a time-consuming task that can
leave instructors overwhelmed and lower feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate feedback on responses to open-ended
questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies.
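To make the idea of criteria-guided feedback concrete, the sketch below shows one way such a tool could prompt an LLM with instructor-defined criteria and a student's answer. This is a minimal illustration, not the paper's actual implementation: the OpenAI Python client, the model name, the function name feedback_on_response, and the example question and criteria are all assumptions introduced here.

```python
# Minimal sketch: generate criteria-guided feedback on a student's answer.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative choices, not the paper's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def feedback_on_response(question: str, criteria: list[str], student_answer: str) -> str:
    """Ask the LLM for formative feedback, judged only against instructor criteria."""
    rubric = "\n".join(f"- {c}" for c in criteria)
    messages = [
        {
            "role": "system",
            "content": (
                "You are a teaching assistant. Give concise, encouraging feedback "
                "on the student's answer. Judge it only against these "
                f"instructor-defined criteria:\n{rubric}\n"
                "Do not reveal a model solution."
            ),
        },
        {
            "role": "user",
            "content": f"Question: {question}\n\nStudent answer: {student_answer}",
        },
    ]
    completion = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return completion.choices[0].message.content


# Hypothetical usage with an illustrative question and rubric.
print(
    feedback_on_response(
        question="Explain why the sky appears blue.",
        criteria=[
            "Mentions Rayleigh scattering",
            "Relates scattering strength to wavelength",
        ],
        student_answer="Blue light bounces off air molecules more than red light does.",
    )
)
```

Keeping the rubric in the system prompt, rather than asking the model to grade freely, is one plausible way to keep the feedback anchored to what the instructor actually wants assessed; the same pattern could back either a web application or a Jupyter Notebook widget.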
What is the application?
Who is the user?
Why use AI?
Study design
