
Comparative Analysis of STEM and non-STEM Teachers' Needs for Integrating AI into Educational Environments

There is an increasing imperative to integrate programming platforms within AI frameworks to enhance educational tasks for both teachers and students. However, commonly used platforms such as Code.org, Scratch, and Snap fall short of providing the desired AI features and lack adaptability for interdisciplinary applications. This study explores how educational platforms can be improved by incorporating AI and analytics features to create more effective learning environments across various subjects and domains.

Emerging Patterns of GenAI Use in K-12 Science and Mathematics Education

In this report, we share findings from a nationally representative survey of US public school math and science teachers, examining current generative AI (GenAI) use, perceptions, constraints, and institutional support. We show trends in math and science teacher adoption of GenAI, including frequency and purpose of use. We describe how teachers use GenAI with students and their beliefs about GenAI's impact on student learning.

Exploring Generative Artificial Intelligence (GenAI) and AI Agents in Research and Teaching - Concepts and Practical Cases

This study provides a comprehensive analysis of the development, functioning, and application of generative artificial intelligence (GenAI) and large language models (LLMs), with an emphasis on their implications for research and education. It traces the conceptual evolution from artificial intelligence (AI) through machine learning (ML) and deep learning (DL) to transformer architectures, which constitute the foundation of contemporary generative systems.

Sociotechnical Imaginaries of ChatGPT in Higher Education: The Evolving Media Discourse

This study investigates how U.S. news media framed the use of ChatGPT in higher education from November 2022 to October 2024. Employing Framing Theory and combining temporal and sentiment analysis of 198 news articles, we trace the evolving narratives surrounding generative AI. We found that the media discourse largely centered on institutional responses; policy changes and teaching practices showed the most consistent presence and positive sentiment over time.

Securing Educational LLMs: A Generalised Taxonomy of Attacks on LLMs and DREAD Risk Assessment

Due to perceptions of efficiency and significant productivity gains, various organisations, including in education, are adopting Large Language Models (LLMs) into their workflows. Educator-facing, learner-facing, and institution-facing LLMs, collectively, Educational Large Language Models (eLLMs), complement and enhance the effectiveness of teaching, learning, and academic operations. However, their integration into an educational setting raises significant cybersecurity concerns.

GOLDMIND: A Teacher-Centered Knowledge Management System for Higher Education: AI Lessons From Iterative Design

Designing Knowledge Management Systems (KMSs) for higher education requires addressing complex human-technology interactions, especially where staff turnover and changing roles create ongoing challenges for reusing knowledge. While advances in process mining and Generative AI enable new ways of designing features to support knowledge management, existing KMSs often overlook the realities of educators' workflows, leading to low adoption and limited impact. This paper presents findings from a two-year human-centred design study with 108 higher education teachers, focused on the iterative co-design of the system.

Decoding Instructional Dialogue: Human-AI Collaborative Analysis of Teacher Use of AI Tools at Scale

The integration of large language models (LLMs) into educational tools has the potential to substantially impact how teachers plan instruction, support diverse learners, and engage in professional reflection. Yet little is known about how educators actually use these tools in practice and how their interactions with AI can be meaningfully studied at scale. This paper presents a human-AI collaborative methodology for large-scale qualitative analysis of over 140,000 educator-AI messages drawn from a generative AI platform used by K-12 teachers.

When the prompting stops: exploring teachers' work around the educational frailties of generative AI tools

Teachers are now encouraged to use generative artificial intelligence (GenAI) tools to complete various school-related administrative tasks, with the promise of saving considerable amounts of time and effort. Drawing on interviews with 57 teachers across eight schools in Sweden and Australia, this paper explores teachers' experiences of working with GenAI. In particular, it focuses on the large amounts of work that teachers put into reviewing, repairing, and sometimes completely reworking AI-produced outputs that they perceive to be deficient.

AI + LEARNING DIFFERENCES: Designing a Future with No Boundaries

The rapid expansion of artificial intelligence (AI) presents an unprecedented opportunity to address learning differences when designing innovative systems. In December 2024, the Stanford Accelerator for Learning convened the AI + Learning Differences Working Symposium and AI + Learning Differences Hackathon, bringing community members together to explore how AI systems can expand learning opportunities for all. This white paper synthesizes contributions into nine interconnected sections, each examining a critical dimension at the intersection of AI and learning differences.

Mitigating Trojanized Prompt Chains in Educational LLM Use Cases: Experimental Findings and Detection Tool Design

The integration of Large Language Models (LLMs) in K-12 education offers both transformative opportunities and emerging risks. This study explores how students may Trojanize prompts to elicit unsafe or unintended outputs from LLMs, bypassing established content moderation systems with safety guardrails. Through a systematic experiment involving simulated K-12 queries and multi-turn dialogues, we expose key vulnerabilities in GPT-3.5 and GPT-4.