Securing Educational LLMs: A Generalised Taxonomy of Attacks on LLMs and DREAD Risk Assessment

Authors
Farzana Zahid,
Anjalika Sewwandi,
Lee Brandon,
Vimal Kumar,
Roopak Sinha
Publisher
arXiv
Abstract
Motivated by perceived efficiency and productivity gains, organisations across many sectors, including education, are adopting Large Language Models (LLMs) into their workflows. Educator-facing, learner-facing, and institution-facing LLMs, collectively termed Educational Large Language Models (eLLMs), complement and enhance the effectiveness of teaching, learning, and academic operations. However, their integration into educational settings raises significant cybersecurity concerns, and a comprehensive account of contemporary attacks on LLMs and their impact on the educational environment is still missing. This study presents a generalised taxonomy of fifty attacks on LLMs, categorised as targeting either the models themselves or their infrastructure. The severity of these attacks in the educational sector is evaluated using the DREAD risk assessment framework. Our risk assessment indicates that token smuggling, adversarial prompts, direct injection, and multi-step jailbreak are critical attacks on eLLMs. The proposed taxonomy, its application in the educational environment, and our risk assessment will help academic and industrial practitioners build resilient solutions that protect learners and institutions.
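For context, DREAD rates a threat on five factors, Damage, Reproducibility, Exploitability, Affected users, and Discoverability, and combines them into a single severity score. The sketch below illustrates that arithmetic for one of the critical attacks named in the abstract; the 1-10 scale, the averaging, and the individual ratings are illustrative assumptions, not values reported by the study.

```python
# Minimal sketch of DREAD scoring as commonly applied in threat modelling.
# The attack name comes from the abstract; the ratings below are
# hypothetical placeholders, not the paper's reported values.

DREAD_FACTORS = ("damage", "reproducibility", "exploitability",
                 "affected_users", "discoverability")

def dread_score(ratings: dict) -> float:
    """Average the five DREAD factor ratings (each assumed on a 1-10 scale)."""
    return sum(ratings[f] for f in DREAD_FACTORS) / len(DREAD_FACTORS)

# Hypothetical ratings for one attack from the taxonomy.
token_smuggling = {
    "damage": 8,            # D: harm caused if the attack succeeds
    "reproducibility": 9,   # R: how reliably the attack can be repeated
    "exploitability": 7,    # E: effort and skill needed to mount it
    "affected_users": 8,    # A: share of learners/institutions exposed
    "discoverability": 9,   # D: how easily attackers find the weakness
}

print(f"token smuggling DREAD score: {dread_score(token_smuggling):.1f}")
```

Under these assumed ratings the attack averages 8.2; an assessment would typically flag any attack above a chosen threshold (for example 7) as critical.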