Artificial Intelligence and Generative AI are creating both new opportunities and new risks for learners with Special Educational Needs and Disabilities. Here, Nadia Cooke shares a five-part framework to help SEND teachers get the most out of these new developments.
Artificial Intelligence (AI) and Generative AI (GenAI) are reshaping education, creating new opportunities and new risks for learners with Special Educational Needs and Disabilities (SEND). Used well, these tools can reduce barriers, personalise learning, and offer safe, low-pressure spaces to practise communication or social skills. Used poorly, they can reinforce stereotypes, limit agency, or expose learners to biased or misleading content.
This article presents a five-part framework for developing AI literacy in SEND education. It also highlights how, in the process of supporting learners, teachers deepen their own understanding of inclusion, strengthen safeguarding practices, and build confidence in navigating AI responsibly. Teaching AI literacy is part of teachers’ responsibility, especially when working with young and adult learners with SEND, and doing so helps us safeguard not only our students but also ourselves.
Why AI Literacy Matters in SEND Education
AI literacy is not simply the ability to use new tools. It involves:
• Understanding how AI works
• Recognising risk and bias
• Developing strategies to support diverse learners
• Teaching students to evaluate and question AI outputs
• Protecting learners from over-reliance, misinformation, or ideological influence
And importantly, AI literacy teaches us about ourselves - our assumptions, biases, areas for growth, and the ways we interpret information. When teachers learn how to guide AI use for learners with SEND, they simultaneously strengthen their own critical awareness and safeguarding practices.
Developing AI literacy for learners with SEND is also about what we learn in the process:
• How we respond to uncertainty
• Where our own biases lie
• How we judge credibility
• How we protect learners from harm
• How we co-construct independence rather than impose it
By guiding all learners, we sharpen our own critical thinking, ethical awareness, and sense of professional responsibility. AI literacy becomes a shared journey, not a top-down practice.
A Five-Part Framework
1. Strengthening Teacher Literacy
Teachers need confidence in their own AI literacy. One helpful analogy is to think of AI as an intern: capable of producing useful drafts or suggestions, but always requiring supervision, checking, and professional judgement. Developing this mindset supports safeguarding and prevents over-reliance on automated outputs.
Avoid
A learner submitting an AI-generated paragraph without editing because ‘the AI wrote it, so it must be right.’
Do
• Encourage learners to check AI outputs against class materials, trusted sources, and task criteria
• Encourage learners to question AI responses (For example: Does this fully answer the question? What is missing? What assumptions does it make?)
• Use simple checklists such as Generate → Check → Improve to structure critical engagement
• Practise prompts that generate varied outputs to demonstrate flexibility and interpretation
For example, learners might compare AI responses to the following prompts on the same topic:
• ‘Explain why school rules are important.’
• ‘What are the benefits and challenges of school rules from different perspectives?’
• ‘How might different students experience school rules differently?’
Comparing these outputs helps learners see how changes in wording influence emphasis, tone, and viewpoint, and reinforces the idea that prompts shape responses rather than reveal fixed truths.
Through these approaches, learners develop metacognition and critical digital literacy, while teachers strengthen their safeguarding instincts by recognising where AI may oversimplify, generalise, or unintentionally reinforce narrow viewpoints.
2. Avoiding Paternalism: Involve Learners, Don’t Decide on Their Behalf
How might you involve learners with SEND in decisions about AI use in your own classroom?
Paternalism occurs when decisions are made for learners rather than with them, often with the intention of helping or protecting. In AI-supported educational contexts, paternalism may arise when well-meaning adults make unilateral choices about how technologies are used, without consulting the learners they affect. This can occur when:
• Teachers use generative AI to ‘simplify’ materials without discussing learners’ preferences or needs
• AI tools are introduced or configured in ways that limit learner choice, such as through default settings, automated adaptations, or prescribed uses
• Adaptations unintentionally reinforce stereotypes about disability rather than supporting individual strengths and autonomy
Importantly, AI tools themselves do not override learner choice; rather, choice is constrained by how tools are selected, framed, and implemented by educators or institutions. When learners are not involved in these decisions, support risks becoming something done to learners rather than designed with them.
The principle ‘Nothing About Us Without Us’, rooted in the disability rights movement and enshrined in rights-based approaches such as the UN Convention on the Rights of Persons with Disabilities, is therefore essential. It asserts that disabled people must be actively involved in decisions that affect their lives, including educational and technological practices.
Applied to AI literacy, this principle means that learners with SEND, and where appropriate their families and communities, should help shape how AI tools are used in learning. Doing so protects autonomy, promotes agency, and ensures that AI supports independence rather than unintentionally reducing it.
Avoid
Creating AI-simplified worksheets for a dyslexic learner without asking what they want or whether they need them.
Do
• Involve learners in decisions about AI
• Be transparent about what AI generates
• Check for bias
• Encourage reflection: ‘When does AI support your independence? When does it reduce it?’
3. Resisting Ideological Manipulation: Build Critical Literacy for All Learners
What strategies do you use to help learners evaluate online information, and how might these extend to AI-generated content?
AI systems are not neutral. Algorithms are often designed to prioritise engagement over accuracy, meaning they tend to show users content that keeps their attention rather than content that is balanced or critically informative. This can lead to persuasive loops, where learners are repeatedly shown similar types of information that reinforce existing interests or viewpoints, making it harder to encounter alternative perspectives.
Related to this are echo chambers, where exposure to a narrow range of ideas creates the impression that these views are dominant or uncontested, simply because other perspectives are filtered out. Over time, this can shape beliefs and understanding without learners being aware that this filtering is taking place.
These dynamics affect all learners, not any single neurotype, but may be particularly significant in educational contexts where developing critical thinking and digital literacy is a key aim.
AI literacy must help learners recognise:
• One-sided or biased outputs
• Claims needing verification
• When they may be entering a persuasive loop
• How prompt phrasing shapes responses
Avoid
Asking AI loaded questions that assume one viewpoint is correct, then presenting the output as fact.
For example, a teacher asks an AI tool, ‘Why is inclusion lowering academic standards in mainstream classrooms?’ and then shares the generated response with learners. Because the question already assumes inclusion is harmful, the AI is likely to produce an answer that reinforces this framing rather than offering a balanced or evidence-informed perspective.
A more ethical approach would be to ask, ‘What are the different perspectives on inclusion in mainstream classrooms, and what does research suggest about its impact on learning?’ and to discuss the output critically with learners.
Do
• Teach learners to reframe prompts (e.g. ‘What are the pros and cons of…?’)
• Model how to compare AI responses with trusted sources
• Explain that AI reflects patterns in data rather than objective truth
• Use role-play activities to compare biased versus balanced prompting
For example, in a role-play activity, one learner (or group) takes the role of the ‘prompt designer’ and deliberately asks a leading or narrowly framed question, while another uses a neutral or open-ended prompt on the same topic. The class then compares the AI-generated responses, discussing how differences in wording influence tone, emphasis, and conclusions. This supports learners in recognising bias, questioning outputs, and understanding their own agency when interacting with AI tools.
This builds resilience and critical thinking, and protects learners from ideological persuasion - an essential safeguarding skill.
4. Social and Emotional Dimensions: Support Confidence Without Replacing Human Connection
How can you balance AI-supported rehearsal with authentic peer interaction?
AI can support learners with SEND socially and emotionally:
• Practising conversations
• Rehearsing presentations
• Managing shutdowns or communication anxiety
However, risks arise when learners begin to:
• View AI as a companion
• Withdraw from peer interaction
• Compare themselves negatively to AI-polished work
Teachers must frame AI as a tool, not a relationship or measure of worth.
Avoid
A learner forming an emotional attachment to a chatbot and avoiding real social practice.
Do
• Set clear boundaries between AI and human relationships. These boundaries are essential to safeguarding: learners should understand that AI tools are supports for learning and practice, not substitutes for friendship, emotional connection, or adult guidance. Making this explicit helps prevent over-reliance on AI, supports healthy social development, and reinforces the value of human interaction within learning environments.
• Scaffold transitions from AI use to pair work and group work. This allows learners to practise skills privately before applying them socially, and helps them recognise when and how to move beyond digital support.
• Label tasks explicitly as ‘AI-supported rehearsal’ to reinforce that AI is a temporary aid rather than an endpoint.
• Use reflection journals to help learners compare AI-supported practice with peer interaction
• Incorporate AAC (Augmentative and Alternative Communication) tools where appropriate. AAC refers to tools and strategies that support communication for individuals who experience difficulties with spoken or written language, including symbol-based systems, speech-generating devices, visual supports, and text-based communication tools. When thoughtfully integrated alongside AI, AAC can increase access, participation, and confidence, particularly during transitions from supported to independent or social communication.
Supporting emotional resilience underpins all these practices and is central to safeguarding for learners and for educators making decisions about AI use. Emotional resilience enables learners to manage uncertainty, frustration, and social challenges, while helping teachers reflect critically on when AI support is helpful and when human connection is essential. Clear boundaries, supported communication, and reflective practice together create a learning environment where AI enhances inclusion without undermining wellbeing or autonomy.
5. Supporting Literal Thinkers
Literal thinkers, including many autistic learners, often value clarity, consistency, and explicit structure. AI tools can feel reassuring because they provide step-by-step responses, clear instructions, and predictable formats, which may reduce anxiety and support understanding.
However, a preference for literal interpretation can affect how AI outputs are understood and used. Learners may find it more challenging to interpret:
• imperfect or partially incorrect AI responses
• over-generalised statements presented confidently
• contradictions between different AI outputs
• conclusions that lack visible reasoning or evidence
In AI contexts, literal thinkers may also assume that the wording of a prompt must be precise and that the response produced is therefore authoritative. This can influence both how prompts are written and how outputs are trusted. For example, learners may take an AI response at face value if it sounds confident or aligns closely with the wording of the question, even when the information is incomplete or misleading.
This does not mean that literal thinkers are inherently more vulnerable to ideological manipulation. Rather, ideological influence can occur when any learner, regardless of neurotype, is not supported to question sources, recognise framing, or explore alternative perspectives. For literal thinkers, explicit scaffolding is particularly important to help them recognise that AI outputs reflect patterns in data and prompt design, not objective truth or neutral authority.
AI Literacy as Inclusive, Ethical Practice
AI can transform learning for students with SEND, but only if used with care, collaboration, and critical awareness.
This five-part framework offers teachers practical tools to:
• Strengthen teacher literacy
• Avoid paternalism
• Resist ideological influence
• Support social and emotional wellbeing
• Support literal thinkers
By embedding these principles, we can ensure AI removes barriers rather than reinforcing them. AI literacy becomes a pathway to empowerment, for learners and for teachers - supporting confidence, autonomy and critical awareness across the classroom.
Developing AI literacy is part of safeguarding, part of inclusion, and part of shaping a future where all learners with SEND can thrive.