Date:
Publisher: arXiv
The rise of generative AI tools like ChatGPT has significantly reshaped
education, sparking debates about their impact on learning outcomes and
academic integrity. While prior research highlights opportunities and risks,
there remains a lack of quantitative analysis of student behavior when
completing assignments. Understanding how these tools influence real-world
academic practices, particularly assignment preparation, is a pressing and
timely research priority.
This study addresses this gap by analyzing survey responses from 388
university students, primarily from Russia, along with a smaller subset of
international participants. Using the XGBoost algorithm, we modeled predictors of ChatGPT
usage in academic assignments. Key predictive factors included learning habits,
subject preferences, and student attitudes toward AI. Our binary classifier
demonstrated strong predictive performance, achieving 80.1\% test accuracy,
with 80.2\% sensitivity and 79.9\% specificity. The multiclass classifier
achieved 64.5\% test accuracy, 64.6\% weighted precision, and 64.5\% recall,
with training scores in a similar range, suggesting that performance is limited
by data scarcity rather than by overfitting.
The study reveals that frequent use of ChatGPT for learning new concepts
correlates with potential overreliance, raising concerns about long-term
academic independence. These findings suggest that while generative AI can
enhance access to knowledge, unchecked reliance may erode critical thinking and
originality. We propose discipline-specific guidelines and reimagined
assessment strategies to balance innovation with academic rigor. These insights
can guide educators and policymakers in ethically and effectively integrating
AI into education.
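As a rough illustration of the modeling setup summarized in the abstract, below is a minimal Python sketch of a binary XGBoost classifier evaluated with test accuracy, sensitivity, and specificity. The feature matrix, its dimensions, the hyperparameters, and the train/test split are illustrative assumptions only, not the authors' actual data or pipeline.

    # Minimal sketch: binary XGBoost classifier on encoded survey responses.
    # All data, feature dimensions, and hyperparameters below are placeholders.
    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, recall_score

    rng = np.random.default_rng(0)
    X = rng.random((388, 20))        # hypothetical numeric survey features
    y = rng.integers(0, 2, 388)      # 1 = uses ChatGPT for assignments, 0 = does not

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )

    clf = XGBClassifier(n_estimators=300, max_depth=4,
                        learning_rate=0.1, eval_metric="logloss")
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    accuracy = accuracy_score(y_test, y_pred)
    sensitivity = recall_score(y_test, y_pred, pos_label=1)   # true positive rate
    specificity = recall_score(y_test, y_pred, pos_label=0)   # true negative rate
    print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")

A multiclass version of this sketch would use the same XGBClassifier with more than two label values and report weighted precision and recall via sklearn.metrics.precision_score and recall_score with average="weighted".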
What is the application?
What is the age of the participants?
Why use AI?
Study design
