Impact – Quasi-experimental

Enhancing Python Programming Education with an AI-Powered Code Helper: Design, Implementation, and Impact

This study presents an AI-powered chatbot that helps students learn Python programming by guiding them through problems such as debugging errors, resolving syntax issues, and turning abstract theoretical concepts into practical implementations. Traditional coding tools like Integrated Development Environments (IDEs) and static analyzers provide no interactive assistance, while AI-driven code assistants such as GitHub Copilot focus on task completion rather than learning.

AI Knows Best? The Paradox of Expertise, AI-Reliance, and Performance in Educational Tutoring Decision-Making Tasks

We present an empirical study of how both experienced tutors and non-tutors judge the correctness of tutor praise responses under different Artificial Intelligence (AI)-assisted interfaces and explanation types (textual explanations vs. inline highlighting). We first fine-tuned several Large Language Models (LLMs) to produce binary correctness labels and explanations, achieving up to 88% accuracy and a 0.92 F1 score with GPT-4. We then had the GPT-4 models assist 95 participants in tutoring decision-making tasks by offering different types of explanations.
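The accuracy and F1 figures reported above are standard binary-classification metrics; a minimal sketch of how they are computed (the label lists below are hypothetical illustration data, not the study's):

```python
# Accuracy and F1 for binary correctness labels (1 = correct praise, 0 = incorrect).
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # gold labels (hypothetical)
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]  # model predictions (hypothetical)
print(accuracy(y_true, y_pred))  # → 0.75
print(f1_score(y_true, y_pred))  # → 0.8
```

F1 balances precision and recall, which matters when the correct/incorrect label distribution is skewed, as it often is in tutor-response data.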

Socratic Mind: Impact of a Novel GenAI-Powered Assessment Tool on Student Learning and Higher-Order Thinking

This study examines the impact of Socratic Mind, a Generative Artificial Intelligence (GenAI) powered formative assessment tool that employs Socratic questioning to support student learning in a large, fully online undergraduate-level computing course. Employing a quasi-experimental, mixed-methods design, we investigated participants' engagement patterns, the influence of user experience on engagement, and impacts on both perceived and actual learning outcomes.

RoboBuddy in the Classroom: Exploring LLM-Powered Social Robots for Storytelling in Learning and Integration Activities

Creating and improvising scenarios for approaching content is an enriching technique in education. However, it significantly increases planning time, especially when complex technologies such as social robots are involved. Furthermore, multicultural integration is commonly folded into regular activities rather than addressed separately, owing to an already tight curriculum. To address both issues with a single solution, we implemented an intuitive interface that lets teachers create scenario-based activities from their regular curriculum using LLMs and social robots.

Students' Perceptions of a Large Language Model's Generated Feedback and Scores of Argumentation Essays

Students in introductory physics courses often rely on ineffective strategies, focusing on final answers rather than understanding underlying principles. Integrating scientific argumentation into problem-solving fosters critical thinking and links conceptual knowledge with practical application. By enabling learners to articulate scientific arguments for their solutions, and by providing real-time feedback on their strategies, we aim to help students develop stronger problem-solving skills.

PAPPL: Personalized AI-Powered Progressive Learning Platform

Engineering education has historically been constrained by rigid, standardized frameworks, often neglecting students' diverse learning needs and interests. While significant advancements have been made in online and personalized education within K-12 and foundational sciences, engineering education at both undergraduate and graduate levels continues to lag in adopting similar innovations. Traditional evaluation methods, such as exams and homework assignments, frequently overlook individual student requirements, impeding personalized educational experiences.

Reflective Homework as a Learning Tool: Evidence from Comparing Thirteen Years of Dual vs. Single Submission

Dual-submission homework, where students submit work, receive feedback, and then revise, has gained attention as a way to foster reflection and discourage reliance on online answer repositories. This study analyzes 13 years of exam data from a computer architecture course to compare student performance under single- versus dual-submission homework conditions. Using pooled t-tests on matched exam questions, we found that dual submission significantly improved outcomes in a majority of cases.
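The pooled t-test mentioned above is the classic equal-variance two-sample test; a minimal sketch (the score lists are hypothetical illustration data, not the course's actual results):

```python
import math
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Pooled (equal-variance) two-sample t statistic and degrees of freedom."""
    n1, n2 = len(sample_a), len(sample_b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(sample_a) + (n2 - 1) * variance(sample_b)) / (n1 + n2 - 2)
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

single = [62, 70, 65, 58, 74, 69]  # scores on a matched exam question, single submission (hypothetical)
dual   = [71, 78, 74, 69, 80, 75]  # scores on the same question, dual submission (hypothetical)
t, df = pooled_t(dual, single)
print(round(t, 2), df)  # positive t favors the dual-submission condition
```

The resulting t statistic is then compared against the t distribution with the stated degrees of freedom to obtain a p-value for each matched question.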

Assessing the Quality of AI-Generated Exams: A Large-Scale Field Study

While large language models (LLMs) challenge conventional methods of teaching and learning, they present an exciting opportunity to improve efficiency and scale high-quality instruction. One promising application is the generation of customized exams tailored to specific course content. Although there has been significant recent interest in automatically generating questions with artificial intelligence, there has been comparatively little work evaluating the psychometric quality of these items in real-world educational settings.

From Cognitive Relief to Affective Engagement: An Empirical Comparison of AI Chatbots and Instructional Scaffolding in Physics Education

Providing effective, personalized support is critical for helping students overcome conceptual difficulties in physics. However, established scaffolding methods, such as structured tiered support, are often too resource-intensive for widespread implementation. This study therefore investigates whether an easily adaptable, custom-configured AI chatbot can offer comparable affective benefits and cognitive relief. We conducted a quasi-experimental field study with 273 ninth-grade students in Germany.

Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education

While Large Language Models (LLMs) are often used as virtual tutors in computer science (CS) education, this approach can foster passive learning and over-reliance. This paper presents a novel pedagogical paradigm that inverts this model: students act as instructors who must teach an LLM to solve problems. To facilitate this, we developed strategies for designing questions with engineered knowledge gaps that only a student can bridge, and we introduce Socrates, a system for deploying this method with minimal overhead.