Date:
Publisher: arXiv
The increasing availability of large language models (LLMs) has raised
concerns about their potential misuse in online learning. While tools for
detecting LLM-generated text exist and are widely used by researchers and
educators, their reliability varies. Few studies have compared the accuracy of
detection methods, defined criteria for identifying LLM-generated content, or
evaluated how LLM misuse during learning affects learner performance. In this
study, we define LLM-generated text within open responses as responses
produced by any LLM without paraphrasing or refinement, as judged by human
coders. We then fine-tune GPT-4o to detect LLM-generated responses and assess
the impact of LLM misuse on learning. We find that our fine-tuned LLM
outperforms the existing AI detection tool GPTZero, achieving an accuracy of
80% and an F1 score of 0.78, compared to GPTZero's accuracy of 70% and macro F1
score of 0.50. We also find that learners suspected of LLM misuse on the
open-response question were more than twice as likely to correctly answer the
corresponding posttest multiple-choice question (MCQ), suggesting misuse across
both question types and a bypassing of the learning process. We pave the way
for future work by demonstrating a structured, code-based approach to improving
the detection of LLM-generated responses, and we propose auxiliary statistical
indicators such as unusually high assessment scores on related tasks,
readability scores, and response duration. In support of open science, we
release our data and code so that similar models can be fine-tuned for
comparable use cases.
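
The fine-tuning step described in the abstract can be approximated with the OpenAI fine-tuning API. The snippet below is a minimal sketch, assuming human-coded responses labeled 'human' or 'llm'; the file name, example texts, system prompt, and model snapshot are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: fine-tuning an OpenAI chat model to classify open responses as
# human-written vs. LLM-generated. Labels, prompt wording, and file names
# are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "Label the student response as 'llm' or 'human'."

# Human-coded (response_text, label) pairs from the training split (hypothetical).
examples = [
    ("Photosynthesis converts light energy into chemical energy ...", "llm"),
    ("i think plants eat sunlight and that helps them grow", "human"),
]

# Write the examples in the chat-format JSONL expected by the fine-tuning API.
with open("train.jsonl", "w") as f:
    for text, label in examples:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": text},
                {"role": "assistant", "content": label},
            ]
        }
        f.write(json.dumps(record) + "\n")

# Upload the training file and start a fine-tuning job on a GPT-4o snapshot.
train_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-4o-2024-08-06",
)
print("fine-tuning job:", job.id)
```

Once the job completes, the resulting model ID can be used with the chat completions endpoint to label held-out responses, and accuracy and macro F1 can then be computed against the human codes.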
What is the application?
Who is the user?
What is the user's age?
Why use AI?
Study design
