By Mariano Felice, Research Lead AI, British Council

12 March 2026 - 16:30

Image: a group of young school children looking at a computer. © Getty

Artificial intelligence is now a part of everyday learning. Mariano Felice looks at the pros and cons of AI and how it can be used as a tool to support teachers and learners in an automated world.

When artificial intelligence first entered classrooms, it was greeted with a mixture of excitement and unease. Headlines swung between bold promises of personalised learning for every student and warnings that machines would replace teachers or undermine education altogether. For many parents, teachers and learners, it felt as though something fundamental was changing – but without a clear sense of where it might lead.

A few years on, the conversation is becoming more grounded. AI is no longer a distant future or a passing trend; it is already woven into everyday learning, from language practice apps to tools that help teachers plan lessons or give feedback. The real challenge now is not whether to use AI, but how to do so responsibly, fairly and in ways that genuinely improve learning for everyone.

At its heart, this is a human story. It is about learners trying to build confidence and skills, teachers trying to support them in a fast-changing world, and leaders trying to make decisions that balance innovation with care. And it raises a central question: how do we make sure that, as technology becomes more powerful, education remains deeply human?

A moment of change

Education has always adapted to new tools. From textbooks to calculators to the internet, each innovation has prompted debate about what students should learn and how. AI feels different partly because of its speed and scale. Tools powered by large language models can generate text, hold conversations and provide instant feedback in ways that were unthinkable just a decade ago.

In language learning, especially, uptake has been rapid. Learners are using AI to practise conversations, explore new vocabulary or draft pieces of writing. Teachers are experimenting with AI to save time on routine tasks, create materials or support assessment. For many, these tools offer a welcome sense of possibility.

But this moment of change has also surfaced real concerns. If a student can ask AI to write an essay, how do we know what they can really do? If AI models are trained mainly on certain kinds of language, whose voices are being amplified, and whose are being ignored? And if access to good technology is uneven, who risks being left behind?

These questions do not have simple answers. But they do point to the need for a clear set of values to guide how AI is used in learning.

The promise of more personalised learning

One reason AI has attracted so much attention is its potential to make learning more personal. In traditional classrooms, it can be difficult to tailor activities to every learner’s needs. AI systems, by contrast, can adapt tasks and feedback based on individual progress, offering extra practice where it is needed or moving faster when a learner is ready.

For someone learning a new language, this can be particularly powerful. AI tools can act as patient conversation partners, allowing learners to practise speaking without fear of embarrassment. They can highlight patterns in mistakes, suggest alternative expressions or help learners notice how language works. For many, this kind of low-pressure practice can build confidence and motivation.

AI can also help learners to become more independent. By helping them set goals, monitor their progress and reflect on their learning strategies, technology can encourage habits that last beyond a single course or classroom. In this sense, AI has the potential to support lifelong learning rather than replace human teaching.

Crucially, these benefits are strongest when AI is used to complement, not substitute, human interaction. Teachers remain essential as guides, mentors and motivators – roles that depend on empathy, cultural understanding and professional judgement.

The risks we cannot ignore

Alongside these opportunities sit serious risks. One of the most discussed is bias. AI systems learn from existing data, and that data reflects social and cultural inequalities. In language learning, this can mean that certain accents, dialects or ways of expressing ideas are treated as more ‘correct’ than others.

For learners who use non-standard varieties of English, or who are still developing fluency, this can have real consequences. An AI tool might misinterpret their writing as low quality or even machine-generated, reinforcing unfair judgements. Left unchecked, this risks narrowing our understanding of language and discouraging learners whose voices matter.

There are also concerns about privacy and surveillance. Many AI tools rely on collecting large amounts of user data. For schools and families, questions about who controls that data, how it is stored and how it might be used in the future are increasingly urgent.

Perhaps most worrying is the potential for AI to widen existing inequalities. If high-quality tools and training are available only to well-resourced schools or learners, the gap between those who benefit from AI and those who do not could grow. In this sense, AI is not automatically inclusive; inclusion must be designed and defended.

Putting people at the centre

Responding to these challenges requires more than technical fixes. It calls for a human-centred approach that puts learners’ wellbeing and agency first. This means treating AI as a tool to support human goals, not as an authority that defines them.

In practice, a human-centred approach encourages critical engagement with AI. Teachers and learners are invited to question AI outputs, discuss their limitations and use them as starting points for deeper thinking rather than final answers. This kind of engagement helps build digital literacy and prepares learners to navigate a world where AI-generated content is increasingly common.

It also means being clear about responsibility. AI systems can produce convincing responses without explaining how they arrived at them. When something goes wrong – when information is inaccurate, biased or harmful – humans must remain accountable. Transparency, oversight and clear ethical guidelines are therefore essential.

Supporting teachers through uncertainty

Teachers play a central role in shaping how AI is experienced in the classroom. Many are already experimenting with these tools, often on their own initiative. At the same time, many feel underprepared for the ethical and pedagogical questions AI raises.

This gap matters. Without support, teachers may feel pressure to either ban AI outright or embrace it uncritically. Neither response serves learners well. What teachers need is time, training and trust: opportunities to explore how AI fits with their subject, their students and their values.

Professional development that focuses on critical evaluation – not just how to use a tool, but when and why to use it – can make a real difference. Clear, shared policies on acceptable use and academic integrity can also reduce anxiety and create a sense of collective responsibility rather than individual risk.

Encouragingly, many teachers see AI not as a threat to their profession, but as a chance to refocus on what makes teaching human: building relationships, supporting collaboration and helping learners make sense of complex ideas.

Asking better questions about technology

One helpful way to approach AI in education is to slow down and ask better questions. Before adopting a new tool, schools and institutions can ask:

• Why are we using AI? What problem are we trying to solve, and what outcomes matter most for learners?

• How will we use it responsibly? What safeguards are in place around ethics, inclusion and data?

• What tools make sense once those principles are clear?

Starting with purpose rather than products helps ensure that technology serves educational values rather than driving them.

Leading change with care

AI will continue to evolve, and education will continue to adapt. The choices made now – about access, training, ethics and accountability – will shape how this technology affects learners for years to come.

The challenge is not to choose between innovation and tradition, but to bring them into conversation with each other. By leading with curiosity, care and a commitment to inclusion, educators and leaders can help ensure that AI supports learning in ways that are fair, meaningful and deeply human.

In the end, the most important question is not what AI can do, but what kind of education we want to create. If we keep people at the centre, technology can become a powerful ally rather than a distraction – one that helps learners find their voices, not lose them.

Find out more about the Future of English.
