
Descriptive – Product Development

Feed-O-Meter: Investigating AI-Generated Mentee Personas as Interactive Agents for Scaffolding Design Feedback Practice

Effective feedback, including critique and evaluation, helps designers develop design concepts and refine their ideas, supporting informed decision-making throughout the iterative design process. However, in studio-based design courses, students often struggle to provide feedback due to a lack of confidence and fear of being judged, which limits their ability to develop essential feedback-giving skills.

EduMod-LLM: A Modular Approach for Designing Flexible and Transparent Educational Assistants

With the growing use of Large Language Model (LLM)-based Question-Answering (QA) systems in education, it is critical to evaluate their performance across individual pipeline components. In this work, we introduce EduMod-LLM, a modular function-calling LLM pipeline, and present a comprehensive evaluation along three key axes: function-calling strategies, retrieval methods, and generative language models. Our framework enables fine-grained analysis by isolating and assessing each component.
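
The abstract describes the architecture only at a high level; as an illustration of what a QA pipeline with swappable routing, retrieval, and generation components can look like, here is a minimal sketch. All class and function names are hypothetical and not taken from the paper.

```python
# Minimal sketch of a modular QA pipeline with swappable components.
# All names are illustrative, not EduMod-LLM's actual API.
from dataclasses import dataclass
from typing import Callable, Protocol


class Retriever(Protocol):
    def retrieve(self, question: str) -> list[str]: ...


class Generator(Protocol):
    def generate(self, question: str, context: list[str]) -> str: ...


@dataclass
class KeywordRetriever:
    documents: list[str]

    def retrieve(self, question: str) -> list[str]:
        # Toy retrieval: keep documents sharing any word with the question.
        terms = set(question.lower().split())
        return [d for d in self.documents if terms & set(d.lower().split())]


@dataclass
class TemplateGenerator:
    def generate(self, question: str, context: list[str]) -> str:
        # Stand-in for an LLM call; a real system would prompt a model here.
        return f"Q: {question}\nEvidence: {' | '.join(context) or 'none'}"


@dataclass
class ModularQAPipeline:
    # Each evaluation axis maps to one swappable component.
    route: Callable[[str], str]   # function-calling / routing strategy
    retriever: Retriever          # retrieval method
    generator: Generator          # generative language model

    def answer(self, question: str) -> str:
        tool = self.route(question)
        context = self.retriever.retrieve(question) if tool == "search" else []
        return self.generator.generate(question, context)


if __name__ == "__main__":
    pipeline = ModularQAPipeline(
        route=lambda q: "search" if "?" in q else "chat",
        retriever=KeywordRetriever(["The exam covers chapters 1-3."]),
        generator=TemplateGenerator(),
    )
    print(pipeline.answer("What does the exam cover?"))
```

Because each axis is a separate interface, one component can be replaced (a different retriever, a different model) without touching the rest, which is what makes per-component evaluation possible.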

Closing the Loop: An Instructor-in-the-Loop AI Assistance System for Supporting Student Help-Seeking in Programming Education

Timely, high-quality feedback is essential for effective learning in programming courses, yet providing such support at scale remains a challenge. While AI-based systems offer scalable, immediate help, their responses can occasionally be inaccurate or insufficient. Human instructors, in contrast, bring deeper expertise but have limited time and availability. To address these limitations, we present a hybrid help framework that integrates AI-generated hints with an escalation mechanism, allowing students to request feedback from instructors when AI support falls short.
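
A rough sketch of how such an AI-first, instructor-escalation flow could be wired is shown below; the class names, the hint stub, and the queue are hypothetical illustrations, not the system described in the paper.

```python
# Illustrative hint-with-escalation flow (hypothetical names).
from dataclasses import dataclass, field


@dataclass
class HelpRequest:
    student_id: str
    question: str
    ai_hint: str | None = None
    escalated: bool = False


@dataclass
class HybridHelpDesk:
    instructor_queue: list[HelpRequest] = field(default_factory=list)

    def request_help(self, student_id: str, question: str) -> HelpRequest:
        # First line of support: an immediate AI-generated hint.
        hint = self._ai_hint(question)
        return HelpRequest(student_id, question, ai_hint=hint)

    def escalate(self, request: HelpRequest) -> None:
        # If the AI hint falls short, the student forwards the request,
        # with the AI's attempt attached, to a human instructor.
        request.escalated = True
        self.instructor_queue.append(request)

    def _ai_hint(self, question: str) -> str:
        # Placeholder for a call to a hint-generation model.
        return f"Hint: re-read the error message for '{question[:40]}'."


if __name__ == "__main__":
    desk = HybridHelpDesk()
    req = desk.request_help("s42", "Why does my loop never terminate?")
    print(req.ai_hint)
    desk.escalate(req)  # student judged the hint insufficient
    print(len(desk.instructor_queue), "request(s) awaiting an instructor")
```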

Understanding Student Interaction with AI-Powered Next-Step Hints: Strategies and Challenges

Automated feedback generation plays a crucial role in enhancing personalized learning experiences in computer science education. Among different types of feedback, next-step hint feedback is particularly important, as it provides students with actionable steps to progress towards solving programming tasks. This study investigates how students interact with an AI-driven next-step hint system in an in-IDE learning environment. We gathered and analyzed a dataset from 34 students solving Kotlin tasks, containing detailed hint interaction logs.
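
To make the kind of data concrete, here is a small sketch of what hint-interaction log records and a first-pass aggregation might look like; the field names and action labels are illustrative assumptions, not the study's actual schema.

```python
# Hypothetical hint-interaction log and a simple per-student summary.
from collections import Counter
from dataclasses import dataclass


@dataclass
class HintEvent:
    student_id: str
    task: str
    action: str       # e.g. "requested", "applied", "dismissed"
    timestamp: float  # seconds since session start


def interaction_summary(events: list[HintEvent]) -> dict[str, Counter]:
    """Count hint actions per student, a first step toward spotting
    strategies such as frequent requesting vs. frequent dismissing."""
    summary: dict[str, Counter] = {}
    for e in events:
        summary.setdefault(e.student_id, Counter())[e.action] += 1
    return summary


if __name__ == "__main__":
    log = [
        HintEvent("s1", "kotlin-lists", "requested", 12.0),
        HintEvent("s1", "kotlin-lists", "applied", 15.5),
        HintEvent("s2", "kotlin-lists", "requested", 8.0),
        HintEvent("s2", "kotlin-lists", "dismissed", 9.2),
    ]
    for student, counts in interaction_summary(log).items():
        print(student, dict(counts))
```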

AdvisingWise: Supporting Academic Advising in Higher Education Settings Through a Human-in-the-Loop Multi-Agent Framework

Academic advising is critical to student success in higher education, yet high student-to-advisor ratios limit advisors' capacity to provide timely support, particularly during peak periods. Recent advances in Large Language Models (LLMs) present opportunities to enhance the advising process. We present AdvisingWise, a multi-agent system that automates time-consuming tasks, such as information retrieval and response drafting, while preserving human oversight.
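
The sketch below illustrates one way a retrieval agent, a drafting agent, and a mandatory advisor-review step could be composed; it is a minimal stand-in under assumed names, not AdvisingWise's actual interfaces.

```python
# Rough sketch of a human-in-the-loop advising flow: agents retrieve and
# draft, an advisor approves before anything reaches the student.
from dataclasses import dataclass


@dataclass
class Draft:
    question: str
    sources: list[str]
    text: str
    approved: bool = False


def retrieval_agent(question: str) -> list[str]:
    # Placeholder for a policy/catalogue lookup agent.
    return ["Catalogue: students may drop a course until week 9."]


def drafting_agent(question: str, sources: list[str]) -> Draft:
    # Placeholder for an LLM agent drafting a grounded reply.
    body = f"Regarding '{question}': " + " ".join(sources)
    return Draft(question, sources, body)


def advisor_review(draft: Draft, approve: bool,
                   edited_text: str | None = None) -> Draft:
    # Human oversight: the advisor edits and/or approves the draft.
    if edited_text is not None:
        draft.text = edited_text
    draft.approved = approve
    return draft


if __name__ == "__main__":
    q = "Can I still drop my statistics course?"
    draft = drafting_agent(q, retrieval_agent(q))
    final = advisor_review(draft, approve=True)
    print("SEND" if final.approved else "HOLD", "->", final.text)
```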

Owlgorithm: Supporting Self-Regulated Learning in Competitive Programming Through LLM-Driven Reflection

We present Owlgorithm, an educational platform that supports Self-Regulated Learning (SRL) in competitive programming (CP) through AI-generated reflective questions. Leveraging GPT-4o, Owlgorithm produces context-aware, metacognitive prompts tailored to individual student submissions.
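
As a sketch of how submission-tailored reflective questions can be generated with GPT-4o through the OpenAI Python client, consider the snippet below; the prompt wording and function names are invented for illustration and are not Owlgorithm's actual prompts.

```python
# Illustrative generation of metacognitive questions for a CP submission.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are a coach supporting self-regulated learning in competitive "
    "programming. Ask short reflective questions about planning, "
    "monitoring, and evaluation. Do not reveal solutions."
)


def reflective_questions(problem: str, verdict: str, code: str) -> str:
    user = (
        f"Problem: {problem}\nVerdict: {verdict}\n"
        f"Submission:\n{code}\n"
        "Generate three metacognitive questions tailored to this attempt."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": user}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(reflective_questions(
        "Two Sum", "Time Limit Exceeded",
        "for i in range(n):\n    for j in range(n): ...",
    ))
```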

Objective Measurement of AI Literacy: Development and Validation of the AI Competency Objective Scale (AICOS)

As Artificial Intelligence (AI) becomes more pervasive across many aspects of life, AI literacy is becoming a fundamental competency that enables individuals to navigate an AI-pervaded world safely and competently. There is a growing need to measure this competency, e.g., to develop targeted educational interventions. Although several measurement tools already exist, many have limitations regarding subjective data collection methods, target group differentiation, validity, and the integration of current developments such as Generative AI literacy.

Transforming Higher Education with AI-Powered Video Lectures

The integration of artificial intelligence (AI) into video lecture production has the potential to transform higher education by streamlining content creation and enhancing accessibility. This paper investigates a semi-automated workflow that combines Google Gemini for script generation, Amazon Polly for voice synthesis, and Microsoft PowerPoint for video assembly. Unlike fully automated text-to-video platforms, this hybrid approach preserves pedagogical intent while ensuring script-to-slide synchronization, narrative coherence, and customization.
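
A rough sketch of wiring such a pipeline with public SDKs (google-generativeai, boto3/Polly, python-pptx) follows; the model name, voice, prompt, and slide layout are assumptions, and the paper's actual workflow, including the final video export inside PowerPoint, may differ.

```python
# Script generation -> narration -> slide assembly, under assumed settings.
import boto3
import google.generativeai as genai
from pptx import Presentation

genai.configure(api_key="YOUR_GEMINI_KEY")          # placeholder credential
polly = boto3.client("polly", region_name="us-east-1")


def draft_script(topic: str) -> str:
    # 1. Script generation with Gemini (model name is an assumption).
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = f"Write a short, slide-by-slide lecture script on: {topic}"
    return model.generate_content(prompt).text


def narrate(script: str, out_path: str = "narration.mp3") -> str:
    # 2. Voice synthesis with Amazon Polly (voice choice is an assumption).
    audio = polly.synthesize_speech(Text=script[:3000],  # stay under request limit
                                    OutputFormat="mp3", VoiceId="Joanna")
    with open(out_path, "wb") as f:
        f.write(audio["AudioStream"].read())
    return out_path


def build_slides(title: str, script: str, out_path: str = "lecture.pptx") -> str:
    # 3. Slide assembly; exporting slides plus narration to video is then
    #    done in PowerPoint itself.
    prs = Presentation()
    slide = prs.slides.add_slide(prs.slide_layouts[1])
    slide.shapes.title.text = title
    slide.placeholders[1].text = script[:500]        # first chunk of the script
    prs.save(out_path)
    return out_path


if __name__ == "__main__":
    script = draft_script("Introduction to hash tables")
    print(narrate(script), build_slides("Hash Tables", script))
```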

Boop: Write Right Code

Novice programmers frequently adopt a syntax-specific, test-case-driven approach, writing code first and adjusting until programs compile and test cases pass, rather than developing correct solutions through systematic reasoning. AI coding tools exacerbate this challenge by providing syntactically correct but conceptually flawed solutions. In this paper, we address how correctness-first methodologies can be developed to strengthen computational thinking in introductory programming education.

Small Models, Big Support: A Local LLM Framework for Educator-Centric Content Creation and Assessment with RAG and CAG

While Large Language Models (LLMs) are increasingly applied in student-facing educational tools, their potential to directly support educators through locally deployable and customizable solutions remains underexplored. Many existing approaches rely on proprietary, cloud-based systems that raise significant cost, privacy, and control concerns for educational institutions. To address these barriers, we introduce an end-to-end, open-source framework that equips educators with small (3B-7B parameter), locally deployable LLMs.
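
The toy sketch below shows the general shape of local retrieval-augmented generation for content creation: retrieve the most relevant course material, then hand it to a locally served small model. The bag-of-words retrieval and the local_generate() stub are stand-ins chosen for a self-contained example and are not the framework's implementation.

```python
# Toy local RAG sketch; local_generate() stands in for a 3B-7B on-premise model.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real embedding model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def local_generate(prompt: str) -> str:
    # Placeholder for a locally deployed small LLM.
    return f"[draft quiz question grounded in]: {prompt}"


def make_quiz_item(topic: str, materials: list[str]) -> str:
    context = "\n".join(retrieve(topic, materials))
    # A cache-augmented (CAG) variant would instead preload all materials
    # into the model's context once and skip per-query retrieval.
    return local_generate(f"Context:\n{context}\nWrite one quiz question on {topic}.")


if __name__ == "__main__":
    materials = [
        "Lecture 3: binary search trees and their invariants.",
        "Lecture 4: hash tables, collisions, and load factor.",
    ]
    print(make_quiz_item("hash tables", materials))
```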