Artificial intelligence is already changing the way the world works. In many cases, it’s helping businesses make smarter decisions, streamline operations, and uncover insights at remarkable speed. But while AI is fast becoming essential infrastructure for some, it’s still an unknown and unreachable frontier for many others.
Across the UK, in places where technology could arguably make the biggest difference, AI remains absent. And it’s not because people don’t see the value or are afraid of change; it’s because the tools, the training, and the time simply aren’t available to them. For many small charities and local organisations, even the idea of using AI feels out of step with the reality they’re working in – stretched for resources, overwhelmed by demand, and still trying to get to grips with basic digital tools.
This disconnect is called ‘AI poverty’. It refers to the widening gap between those who can take advantage of AI and those who are being left behind – not by choice, but by circumstance. And while the phrase may sound academic, its impact is very real. The organisations affected are often those doing critical work with limited resources – whether that’s food banks, mental health services, or youth programmes.
These groups often operate with just a handful of people, relying on outdated systems while juggling everything from frontline support to fundraising. They aren’t thinking about predictive analytics or automation; they’re just trying to stay above water. Yet they’re the very organisations that could benefit most from the right technology, if it were accessible, relevant, and designed with them in mind.
It’s tempting to see this as an issue that sits outside the mainstream conversation about AI strategy or innovation. But the longer we ignore this gap, the more it will undermine the broader ecosystem. AI adoption can’t flourish if entire sectors of society are excluded. The technology may be intelligent, but its impact depends on how and where it is used.
What’s more, the stakes are growing. As AI becomes integrated into public services and third-sector delivery models, there’s a risk that essential services will start evolving without the participation of the communities that rely on them. If the rollout of AI-driven systems assumes a baseline of digital fluency or access, we risk designing processes that exclude the very people they’re meant to help.
We already see signs of this exclusion creeping in. For example, automated systems are being introduced in service delivery, but many overlook practical barriers like language differences or lack of access to digital devices. These gaps aren’t the result of bad intent; they’re the product of design processes that didn’t include everyone from the start.
Responsibility for addressing this doesn’t fall to one group; it cuts across sectors. Developers, policymakers, businesses, and community leaders all have a role to play. And while conversations around responsible AI often focus on bias, privacy, or algorithmic transparency – all of which are critical issues – we need to widen the lens. Responsibility also means ensuring people can engage with the tools in the first place.
For the private sector, this presents both a challenge and an opportunity. Companies already investing in AI have a chance to use their expertise and infrastructure to create shared value – not through charity, but through collaboration. By supporting skills-building outside their own walls, businesses can help grow a more informed, confident user base, while also reinforcing trust in AI more broadly.
The same goes for design. Too often, AI products are built for well-resourced teams with specialist knowledge. But if we want real-world adoption to increase, we need to meet people where they are. That means building tools that are intuitive, flexible, and approachable for non-technical users. It means designing with the realities of small organisations in mind, not just enterprise environments.
Most importantly, we need to shift the mindset that innovation happens only in labs, startups, or tech hubs. Some of the most powerful applications of AI could emerge from the grassroots – that is, if the people on the ground are given the chance to explore, test, and adapt these tools to the challenges they know best.
This isn’t about slowing down AI’s development; it’s about expanding its relevance. The promise of AI was never just speed or power; it was potential. The potential to solve meaningful problems in more places, for more people – and to democratise knowledge in the process.
But for that promise to be realised, we must start with inclusion. Not as a checkbox or an afterthought, but as a core principle of how we define success.
AI will continue to evolve. The question is whether its benefits will be widely shared or narrowly concentrated. If we want AI to strengthen society, not divide it, then we need to bring more voices into the conversation. And we need to do it now.