“Pilotitis” to Progress – How the NHS can scale AI


Meena Sebastian, Healthcare Lead at NEC Software Solutions, discusses the practical use of AI in the NHS and how we need to move from “pilotitis” to real progress.

AI has become the NHS’s favourite conversation starter, and for good reason. From auto-triaging referrals to reducing diagnostic backlogs, the potential is clear. Yet anyone close to delivery knows that progress is still patchy, inconsistent and often confined to small-scale pilots.

We’re seeing both sides of this reality first-hand. NHS organisations involved in delivering diabetic eye screening services are actively exploring the use of AI-enabled grading to help clinicians make the “disease/no disease” decision faster and with greater accuracy. It has the potential to improve patient experiences, reduce administrative strain, and enhance (rather than replace) clinical judgment. But like so many other promising projects, the initial rollouts are likely to be pilots.

It’s a common story: a landscape full of innovation, or potential innovation, but delivered locally and at small scale.

Momentum is building, but so is complexity

Recent government announcements, including the creation of the new AI Commission and the large-scale trials of AI across NHS screening services, are welcome signs that the ambition to move beyond “pilotitis” is real. Structured frameworks and national focus can bring consistency to an area that has so far relied heavily on local champions and siloed projects.

But there is a danger: in the race to deploy, we risk embedding existing inequities into the algorithms we’re building. AI models are only as unbiased as the data they’re trained on. And the NHS, for all its strengths, doesn’t yet have representative, clean datasets across every population demographic and condition.

If we want to scale safely, the Commission must go beyond setting guardrails and actively address the bias baked into healthcare data. That means ensuring diversity in training datasets, transparency in how algorithms are validated, and representation from across the health ecosystem, including minorities in leadership roles, the workforce and patient groups, in decision-making.

Otherwise, we’ll end up automating the inequalities we’ve spent decades trying to fix.

Ethical AI must be embedded, not bolted on

We also need to make sure that ethical AI isn’t a separate workstream. It’s the foundation. And user-centred design can make a real difference to this.

For example, the planned use of AI in the “disease/no-disease” grading system in diabetic eye screening services has been designed with users’ needs in mind – to support clinicians safely and effectively rather than replace their role. It can give them time back, reduce human error, and help bake in evidence-led decision making.

Yet ethical use also relies on system readiness. Many NHS organisations simply don’t have the digital basics in place – interoperable systems, accurate data flows, or enough digital skills within their workforce – to capitalise on AI’s potential. Without this groundwork, the gap between enthusiastic early adopters and the rest will only widen.

Breaking the cycle of “pilotitis”

To move forward, we need to stop treating pilots as proof of concept and start treating them as stepping stones to scale. That requires clearer objectives, measurable outcomes and transparent evaluation, so success can be replicated elsewhere rather than starting from scratch each time.

It also means tackling the blame culture that pervades parts of the NHS. Too often, when an innovation doesn’t deliver immediately, it is seen as a failure and digital teams are held responsible. Instead, we should be asking what the data tells us. If something doesn’t work, that’s an insight, not an opportunity to apportion blame.

Data-driven learning should replace finger-pointing. Empowered staff, not risk-averse teams, will drive the next phase of digital transformation.

User-centred design

Equally, AI must integrate seamlessly into existing NHS workflows. Clinicians don’t have time to learn another platform or log into another system. The solutions that succeed will be the ones that slot quietly into the process, improving it rather than adding friction.

This is where partnerships and user-centred design matter. Co-designing with the NHS means understanding the patient journey, working within existing pathways, and evolving solutions based on real-world use. It’s what makes the difference between a clever tool and a sustainable service.

A call for practical optimism

The truth is that the NHS has never lacked ambition; it has lacked bandwidth and capacity. Staff are tired, budgets are tight, and digital teams are under relentless pressure. Yet amid the noise there are “anchors in the sea” – organisations and leaders quietly proving, even in these somewhat choppy waters, that AI can make healthcare safer, faster and more equitable.

The diabetic eye screening programme is one example of many, but it shows what’s possible when AI is used to augment, not replace, clinical expertise.

The next chapter must be about turning pilots into practice – scaling what works, sharing data responsibly, and keeping ethics at the heart of every algorithm. The new AI Commission could be a powerful catalyst for that change if it listens to those on the ground and acts on what the data shows, what the workforce needs, and what vendor partners can do to help – not simply on what policy dictates.

We don’t need a revolution; we need consistent, evidence-based evolution. Start small, prove value, and scale logically. That’s how AI will truly work for the NHS and for the millions of patients it serves every day.



The post “Pilotitis” to Progress – How the NHS can scale AI first appeared on TechToday.