Date:
Publisher: arXiv
As Artificial Intelligence (AI), particularly Large Language Models (LLMs),
becomes increasingly embedded in education systems worldwide, ensuring their
ethical, legal, and contextually appropriate deployment has become a critical
policy concern. This paper offers a comparative analysis of AI-related
regulatory and ethical frameworks across key global regions, including the
European Union, United Kingdom, United States, China, and Gulf Cooperation
Council (GCC) countries. It maps how core trustworthiness principles, such as
transparency, fairness, accountability, data privacy, and human oversight, are
embedded in regional legislation and AI governance structures. Special emphasis
is placed on the evolving landscape in the GCC, where countries are rapidly
advancing national AI strategies and education-sector innovation. To support
this development, the paper introduces a Compliance-Centered AI Governance
Framework tailored to the GCC context. This includes a tiered typology and
institutional checklist designed to help regulators, educators, and developers
align AI adoption with both international norms and local values. By
synthesizing global best practices with region-specific challenges, the paper
contributes practical guidance for building legally sound, ethically grounded,
and culturally sensitive AI systems in education. These insights are intended
to inform future regulatory harmonization and promote responsible AI
integration across diverse educational environments.
