Publisher: arXiv
Context: As generative AI (GenAI) tools such as ChatGPT and GitHub Copilot
become pervasive in education, concerns are rising about students using them to
complete coursework rather than learn from it, risking overreliance, reduced
critical thinking, and long-term skill deficits.
Objective: This paper proposes and empirically applies a causal model to help
educators scaffold responsible GenAI use in Software Engineering (SE)
education. The model identifies how professor actions, student factors, and
GenAI tool characteristics influence students' usage of GenAI tools.
Method: Using a design-based research approach, we applied the model in two
contexts: (1) revising four extensive lab assignments of a final-year Software
Testing course at Queen's University Belfast (QUB), and (2) embedding
GenAI-related competencies into the curriculum of a newly developed SE BSc
program at Azerbaijan Technical University (AzTU). Interventions included GenAI
usage declarations, output validation tasks, peer review of AI-generated artifacts, and
career-relevant messaging.
Results: In the course-level case, instructor observations and student
artifacts indicated increased critical engagement with GenAI, reduced passive
reliance, and improved awareness of validation practices. In the
curriculum-level case, the model guided integration of GenAI learning outcomes
across multiple modules and levels, enabling longitudinal scaffolding of AI
literacy.
Conclusion: The causal model served as both a design scaffold and a
reflection tool. It helped align GenAI-related pedagogy with SE education goals
and can offer a useful framework for instructors and curriculum designers
navigating the challenges of GenAI-era education.
