Date:
Publisher: arXiv

Abstract:
Generative AI (GenAI) tools are rapidly transforming higher education, yet
little is known about how students' GenAI literacy shapes their ability to
perform independently once such support is removed. This study investigates
what we term the agency gap: the extent to which GenAI literacy predicts
student writing performance in contexts that require self-initiation and
self-regulation. Seventy-nine medical and nursing students completed
multimodal academic writing tasks based on visual data, supported by either a
reactive or a proactive GenAI chatbot, and then completed a parallel task
without AI support.
Writing was evaluated across insightfulness, visual data integration,
organisation, linguistic quality, and critical thinking. Results showed that
GenAI literacy predicted independent writing performance only in the reactive
condition, where students had to actively mobilise their own strategies.
Mediation analyses revealed no indirect effect via in-task performance,
indicating that GenAI literacy acts as a direct, task-general competence rather
than a proxy for domain knowledge or other literacies. By contrast, proactive
scaffolding equalised outcomes across literacy levels, reducing reliance on
learners' GenAI literacy. The agency gap highlights when GenAI literacy matters
most, with implications for designing equitable AI-supported learning
environments that both leverage and mitigate differences in students' GenAI
literacy.
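To make the mediation result concrete, here is a minimal sketch of how an indirect-effect test of this kind could be run. The data, column names, and effect sizes below are hypothetical, and pingouin's bootstrap mediation is one possible tool, not necessarily the authors' analysis pipeline; it simply mirrors the abstract's path from literacy through in-task performance to independent writing.

```python
# Hypothetical sketch of a simple mediation test (not the authors' code).
# X = GenAI literacy, M = in-task writing score, Y = independent writing score.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 79  # sample size reported in the abstract

# Simulate a direct effect of literacy on independent writing,
# with no indirect path through in-task performance.
literacy = rng.normal(0.0, 1.0, n)
in_task = rng.normal(0.0, 1.0, n)                  # unrelated to literacy
independent = 0.5 * literacy + rng.normal(0.0, 1.0, n)

df = pd.DataFrame({
    "genai_literacy": literacy,
    "in_task_score": in_task,
    "independent_score": independent,
})

# Bootstrap estimates of the total, direct, and indirect paths.
results = pg.mediation_analysis(
    data=df,
    x="genai_literacy",
    m="in_task_score",
    y="independent_score",
    n_boot=5000,
    seed=0,
)
print(results[["path", "coef", "pval"]])
# Expected pattern here: a significant Direct path, a null Indirect path.
```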
What is the application? Multimodal academic writing tasks in higher education, supported by a GenAI chatbot (reactive or proactive).
Who is the user? Medical and nursing students (n = 79).
What is the user's age? Not reported in the abstract.
Why use AI? To scaffold academic writing, and to test how students' GenAI literacy shapes their independent performance once that support is removed.
Study design: Students wrote with either a reactive or a proactive GenAI chatbot, then completed a parallel task without AI support; writing was scored on insightfulness, visual data integration, organisation, linguistic quality, and critical thinking, with mediation analyses testing an indirect path through in-task performance.
