Young Children's Anthropomorphism of an AI Chatbot: Brain Activation and the Role of Parent Co-Presence

Artificial intelligence (AI) chatbots powered by large language models (LLMs) are entering young children's learning and play, yet little is known about how young children construe these agents or how such construals relate to engagement. We examined children's anthropomorphism of a social AI chatbot during collaborative storytelling and asked how their attributions related to their behavior and prefrontal activation. Children aged 5-6 (N = 23) completed three storytelling sessions: (1) with an AI chatbot only, (2) with a parent only, and (3) with the AI chatbot and a parent together.

Understanding the Impacts of Generative AI Use on Children

Recent advances in generative artificial intelligence (AI) are transforming how children interact with technology, particularly in education and creative domains. A growing body of research has explored the impacts of generative AI on users, highlighting both its potential benefits and associated risks. Much of the existing literature has focused on adults and teens, leaving significant gaps in our understanding of how younger children, aged 8-12, engage with and are affected by these technologies.

AI-Driven Predictive Models for Optimizing Mathematics Education Technology: Enhancing Decision-Making Through Educational Data Mining and Meta-Analysis

This paper explores the challenge of achieving consistent effectiveness in integrating Mathematics Education Technology (MET) in K-12 classrooms, focusing on factors such as technology type, timing, and instructional strategies. It highlights the difficulties novice teachers face in optimizing MET compared to experienced educators, emphasizing the need to better understand the ideal duration and application of MET in various teaching settings. This study proposes using Artificial Intelligence (AI) to predict and optimize MET effectiveness, aiming to enhance student achievement.
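
The abstract does not describe the model itself, so the following is only a minimal sketch of the kind of predictive setup it implies: regressing an effect-size-like outcome on MET usage features such as technology type, weekly usage, and teacher experience. The feature set, the fabricated data, and the choice of gradient boosting are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: predict MET effectiveness from usage features (illustrative only).
from sklearn.ensemble import GradientBoostingRegressor

# Columns: [tech_type (0=drill, 1=visualization), weekly_minutes, teacher_years]
X = [
    [0, 30, 1], [0, 60, 2], [1, 45, 10], [1, 90, 12],
    [0, 90, 3], [1, 30, 8], [0, 45, 15], [1, 60, 1],
]
y = [0.10, 0.18, 0.35, 0.42, 0.22, 0.28, 0.30, 0.20]  # fabricated effect sizes

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Expected gain for a novice teacher using visualization tools 60 minutes/week.
print(model.predict([[1, 60, 2]]))
```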

EduMod-LLM: A Modular Approach for Designing Flexible and Transparent Educational Assistants

With the growing use of Large Language Model (LLM)-based Question-Answering (QA) systems in education, it is critical to evaluate their performance across individual pipeline components. In this work, we introduce EduMod-LLM, a modular function-calling LLM pipeline, and present a comprehensive evaluation along three key axes: function calling strategies, retrieval methods, and generative language models. Our framework enables fine-grained analysis by isolating and assessing each component.
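
A minimal sketch can make the modular idea concrete: a QA pipeline where the function-calling router, the retriever, and the generator are independent, swappable components. The class names, the keyword router, and the generator stub below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a modular QA pipeline with three swappable components:
# a function-calling router, a retriever, and a generator (LLM stub).
from dataclasses import dataclass
from typing import Callable


class KeywordRetriever:
    """Toy retriever: rank documents by shared keywords with the query."""
    def __init__(self, documents: list[str]) -> None:
        self.documents = documents

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        terms = set(query.lower().split())
        ranked = sorted(self.documents,
                        key=lambda d: len(terms & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]


@dataclass
class QAPipeline:
    router: Callable[[str], str]      # decides which tool/function to call
    retriever: KeywordRetriever       # fetches supporting context
    generator: Callable[[str], str]   # produces the final answer (LLM stand-in)

    def answer(self, question: str) -> str:
        tool = self.router(question)
        context = self.retriever.retrieve(question) if tool == "retrieve" else []
        prompt = f"Question: {question}\nContext: {context}"
        return self.generator(prompt)


def simple_router(question: str) -> str:
    # A real system would use LLM function calling; this keyword rule stands in.
    return "retrieve" if "syllabus" in question.lower() else "direct"


def echo_generator(prompt: str) -> str:
    # Stand-in for an LLM call; any model client could be swapped in here.
    return f"[generated answer for] {prompt}"


if __name__ == "__main__":
    docs = ["The course syllabus lists weekly quizzes.",
            "Office hours are on Friday."]
    pipeline = QAPipeline(simple_router, KeywordRetriever(docs), echo_generator)
    print(pipeline.answer("What does the syllabus say about quizzes?"))
```

Because each component sits behind a plain callable or small class, it can be evaluated or replaced in isolation, which is the kind of fine-grained analysis the abstract describes.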

Simulating Students with Large Language Models: A Review of Architecture, Mechanisms, and Role Modelling in Education with Generative AI

Simulated Students offer a valuable methodological framework for evaluating pedagogical approaches and modelling diverse learner profiles, tasks which are otherwise challenging to undertake systematically in real-world settings. Recent research has increasingly focused on developing such simulated agents to capture a range of learning styles, cognitive development pathways, and social behaviours. Among contemporary simulation techniques, the integration of large language models (LLMs) into educational research has emerged as a particularly versatile and scalable paradigm.
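
As a rough illustration of the paradigm, a learner profile can be turned into a system prompt that conditions an LLM to act as a simulated student. The profile fields and prompt wording below are assumptions for illustration; the systems surveyed in the review use a variety of architectures.

```python
# Sketch: turn a learner profile into a system prompt for an LLM-simulated student.
from dataclasses import dataclass


@dataclass
class LearnerProfile:
    grade_level: str
    prior_knowledge: str
    misconception: str
    style: str


def to_system_prompt(p: LearnerProfile) -> str:
    return (
        f"You are simulating a {p.grade_level} student. "
        f"Prior knowledge: {p.prior_knowledge}. "
        f"You hold this misconception and should express it until corrected: "
        f"{p.misconception}. Respond in a {p.style} manner."
    )


if __name__ == "__main__":
    profile = LearnerProfile("7th-grade", "basic fractions",
                             "multiplying always makes numbers bigger",
                             "hesitant, short answers")
    print(to_system_prompt(profile))
```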

Artificial Intelligence Competence of K-12 Students Shapes Their AI Risk Perception: A Co-Occurrence Network Analysis

As artificial intelligence (AI) becomes increasingly integrated into education, understanding how students perceive its risks is essential for supporting responsible and effective adoption. This study examined the relationships between perceived AI competence and perceived AI risks among Finnish K-12 upper secondary students (n = 163) using co-occurrence network analysis. Students reported their self-perceived AI competence and concerns related to AI across systemic, institutional, and personal domains.
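
The core of a co-occurrence analysis is counting how often pairs of reported concerns appear together across respondents and treating those counts as weighted edges. The sketch below shows that counting step on fabricated responses; the concern labels and data are illustrative, not the study's instrument.

```python
# Sketch: build co-occurrence edge weights from per-student concern lists.
from collections import Counter
from itertools import combinations

# Each inner list = concerns mentioned by one (hypothetical) student.
responses = [
    ["misinformation", "privacy", "cheating"],
    ["privacy", "job loss"],
    ["misinformation", "privacy"],
    ["cheating", "misinformation"],
]

edge_weights = Counter()
for concerns in responses:
    for a, b in combinations(sorted(set(concerns)), 2):
        edge_weights[(a, b)] += 1  # the pair co-occurred within one student

# Edges ranked by how often two concerns were reported together.
for (a, b), weight in edge_weights.most_common():
    print(f"{a} -- {b}: {weight}")
```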

Towards Synergistic Teacher-AI Interactions with Generative Artificial Intelligence

Generative artificial intelligence (GenAI) is increasingly used in education, posing significant challenges for teachers adapting to these changes. GenAI offers unprecedented opportunities for accessibility, scalability and productivity in educational tasks. However, the automation of teaching tasks through GenAI raises concerns about reduced teacher agency, potential cognitive atrophy, and the broader deprofessionalisation of teaching.

AdvisingWise: Supporting Academic Advising in Higher Education Settings Through a Human-in-the-Loop Multi-Agent Framework

Academic advising is critical to student success in higher education, yet high student-to-advisor ratios limit advisors' capacity to provide timely support, particularly during peak periods. Recent advances in Large Language Models (LLMs) present opportunities to enhance the advising process. We present AdvisingWise, a multi-agent system that automates time-consuming tasks, such as information retrieval and response drafting, while preserving human oversight.
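
To make the human-in-the-loop pattern concrete, here is a minimal sketch in which one agent retrieves relevant policy snippets, another drafts a reply, and a human advisor must approve or edit the draft before anything is sent. The function names, the toy knowledge base, and the approval interface are illustrative assumptions, not the AdvisingWise implementation.

```python
# Sketch: retrieval agent -> drafting agent -> human advisor approval.
from dataclasses import dataclass


@dataclass
class Draft:
    question: str
    evidence: list[str]
    text: str


def retrieval_agent(question: str, knowledge_base: dict[str, str]) -> list[str]:
    # Toy retrieval: return entries whose key appears in the question.
    return [v for k, v in knowledge_base.items() if k in question.lower()]


def drafting_agent(question: str, evidence: list[str]) -> Draft:
    # Stand-in for an LLM drafting step.
    body = " ".join(evidence) or "I will check with your advisor."
    return Draft(question, evidence, f"Hi! Regarding '{question}': {body}")


def advisor_review(draft: Draft, approve: bool, edits: str | None = None) -> str:
    # The human stays in the loop: nothing is sent without explicit approval.
    if not approve:
        raise RuntimeError("Draft rejected; escalate to a human advisor.")
    return edits or draft.text


if __name__ == "__main__":
    kb = {"drop": "The deadline to drop a course is week 8.",
          "credits": "Full-time status requires 12 credits per term."}
    question = "When can I drop a course?"
    draft = drafting_agent(question, retrieval_agent(question, kb))
    print(advisor_review(draft, approve=True))
```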

AI-Enabled Grading with Near-Domain Data for Scaling Feedback with Human-Level Accuracy

Constructed-response questions are crucial for encouraging generative processing and testing a learner's understanding of core concepts. However, limited instructor time, large class sizes, and other resource constraints make it difficult to provide the timely, detailed evaluation that a holistic educational experience requires. Frequent assessment is equally challenging: manual grading is labor intensive, and automated grading is difficult to generalize to every possible response.
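
As a simple stand-in for the automation pattern (not the paper's near-domain model), the sketch below scores a constructed response against a rubric of expected concepts and turns the misses into feedback. The rubric, the example answer, and the string-matching rule are fabricated for illustration.

```python
# Toy rubric-matching grader for constructed responses (illustrative only).
def grade(answer: str, rubric: dict[str, float]) -> tuple[float, list[str]]:
    """Return (score, feedback) based on which rubric concepts the answer mentions."""
    text = answer.lower()
    score, missing = 0.0, []
    for concept, points in rubric.items():
        if concept in text:
            score += points
        else:
            missing.append(concept)
    feedback = [f"Missing concept: {m}" for m in missing]
    return score, feedback


if __name__ == "__main__":
    rubric = {"overfitting": 2.0, "validation set": 1.0, "regularization": 1.0}
    answer = ("Overfitting happens when a model memorizes noise; "
              "use a validation set to catch it.")
    print(grade(answer, rubric))  # (3.0, ['Missing concept: regularization'])
```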

Consistently Simulating Human Personas with Multi-Turn Reinforcement Learning

Large Language Models (LLMs) are increasingly used to simulate human users in interactive settings such as therapy, education, and social role-play. While these simulations enable scalable training and evaluation of AI agents, off-the-shelf LLMs often drift from their assigned personas, contradict earlier statements, or abandon role-appropriate behavior. We introduce a unified framework for evaluating and improving persona consistency in LLM-generated dialogue.
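
A rough sketch of the evaluation side of such a framework: check each turn of a simulated dialogue against the assigned persona and flag utterances that drift out of role. The keyword heuristic below stands in for the learned judge or reward model a real framework would use; the persona and dialogue are fabricated.

```python
# Sketch: turn-level persona-drift check over a simulated dialogue.
persona = {"age": "16", "role": "high-school student", "likes": "chemistry"}

dialogue = [
    "I'm 16 and prepping for my chemistry exam.",
    "Back when I was teaching undergraduates, this came up a lot.",  # drift
    "Chemistry is my favorite subject.",
]

# Phrases that, for this toy persona, signal leaving the assigned role.
violations = {"teaching undergraduates": "claims to be an instructor, not a student"}

for turn, utterance in enumerate(dialogue, start=1):
    for phrase, reason in violations.items():
        if phrase in utterance.lower():
            print(f"Turn {turn}: possible persona drift ({reason})")
```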