Date:
Publisher: arXiv
Workshop courses designed to foster creativity are gaining popularity.
However, even experienced faculty teams find it challenging to conduct a
holistic evaluation that accommodates diverse perspectives. Adequate
deliberation is essential to integrate varied assessments, but faculty often
lack the time for such exchanges. Deriving an average score without discussion
undermines the purpose of a holistic evaluation. Therefore, this paper explores
the use of a Large Language Model (LLM) as a facilitator to integrate diverse
faculty assessments. Scenario-based experiments were conducted to determine if
the LLM could integrate diverse evaluations and explain the underlying
pedagogical theories to faculty. The results show that the LLM can
effectively facilitate faculty discussions. Additionally, the LLM
demonstrated the ability to create evaluation criteria by generalizing from a
single scenario-based experiment, leveraging its pre-existing pedagogical
domain knowledge.
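
As a rough illustration of the setup the abstract describes, the sketch below prompts a chat-style LLM with several faculty members' divergent scores and rationales and asks it to act as a facilitator rather than simply averaging. The API client, model name, prompt wording, and sample assessments are all illustrative assumptions; the paper does not publish its prompts or model choice.

# Minimal sketch of the LLM-as-facilitator idea (assumptions: OpenAI-style
# chat API, model name, and prompt wording; none of these come from the paper).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical divergent assessments from three faculty members.
faculty_assessments = [
    {"rater": "A", "score": 5, "rationale": "Original concept, weak execution."},
    {"rater": "B", "score": 2, "rationale": "Poor teamwork during the workshop."},
    {"rater": "C", "score": 4, "rationale": "Strong iteration on peer feedback."},
]

prompt = (
    "You are a facilitator integrating faculty evaluations of a student "
    "project in a creativity workshop. Do not simply average the scores. "
    "For each assessment, identify the pedagogical perspective it reflects, "
    "surface the disagreements, and propose one integrated holistic "
    "evaluation with a justification the faculty team can discuss.\n\n"
    + "\n".join(
        f"Rater {a['rater']}: score {a['score']}/5 - {a['rationale']}"
        for a in faculty_assessments
    )
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the paper's actual model is not stated here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)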
What is the application?
Who is the user?
What age?
Why use AI?
Study design
