Date:
Publisher: arXiv
As Large Language Models (LLMs) gain in popularity, it is important to
understand how novice programmers use them. We present a thematic analysis of
33 learners, aged 10-17, independently learning Python through 45
code-authoring tasks using Codex, an LLM-based code generator. We explore
several questions related to how learners used these code generators and
provide an analysis of the properties of the written prompts and the generated
code. Specifically, we explore (A) the context in which learners use Codex, (B)
what learners ask of Codex, (C) properties of their prompts in terms of
their relation to the task description, language, and clarity, as well as
prompt-crafting patterns, (D) the correctness, complexity, and accuracy of the AI-generated
code, and (E) how learners utilize AI-generated code in terms of placement,
verification, and manual modifications. Furthermore, our analysis reveals four
distinct approaches learners took to writing code with an AI code generator: AI
Single Prompt, where learners prompted Codex once to generate the entire
solution to a task; AI Step-by-Step, where learners divided the problem into
parts and used Codex to generate each part; Hybrid, where learners wrote some
of the code themselves and used Codex to generate the rest; and Manual coding,
where learners wrote the code themselves. The AI Single Prompt approach
resulted in the highest correctness scores on code-authoring tasks, but the
lowest correctness scores on subsequent code-modification tasks during
training. Our results provide initial insight into how novice learners use AI
code generators and the challenges and opportunities associated with
integrating them into self-paced learning environments. We conclude by
discussing observed signs of over-reliance and self-regulation, as well as
opportunities for curriculum and tool development.
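
To make the four coding approaches concrete, here is a minimal illustrative
sketch in Python contrasting AI Single Prompt, AI Step-by-Step, and Hybrid
prompting against a Codex-style generator. The averaging task, the prompt
wording, and the generate_code helper are hypothetical stand-ins invented for
illustration; none of them are taken from the study.

# Illustrative sketch (hypothetical): three of the coding approaches
# described in the abstract. generate_code stands in for a call to a
# Codex-style LLM code-generation backend.

def generate_code(prompt: str) -> str:
    """Stand-in for an LLM code-generation call; echoes the prompt
    so the sketch runs end-to-end without any API access."""
    return f"# [code generated for prompt: {prompt!r}]"

# AI Single Prompt: one prompt asks the model for the entire solution.
single_prompt_solution = generate_code(
    "Write a Python program that reads five numbers from the user "
    "and prints their average."
)

# AI Step-by-Step: the learner decomposes the task and prompts per part.
steps = [
    "Read five numbers from the user into a list.",
    "Compute the average of the numbers in the list.",
    "Print the average.",
]
step_by_step_solution = "\n".join(generate_code(s) for s in steps)

# Hybrid: the learner writes part of the code manually and asks the AI
# to generate the rest.
manual_part = "numbers = [float(input()) for _ in range(5)]"
ai_part = generate_code("Given a list `numbers`, print its average.")
hybrid_solution = manual_part + "\n" + ai_part

print(single_prompt_solution)
print(step_by_step_solution)
print(hybrid_solution)
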
What is the application?
Who is the user?
What age?
Why use AI?
Study design
