AI adoption is about leadership, not just tech
6 May 2026
In this blog, Dr Anne Kinderlerer, digital health clinical lead at the Royal College of Physicians, reflects on the findings of the College's 'View on digital and AI' report and asks whether the NHS has the foundations required to implement AI effectively and safely.
Artificial intelligence (AI) is rapidly moving from the theoretical to the operational in healthcare.
Government strategy now positions AI as a core part of the transformation of the NHS. Yet the central findings from the Royal College of Physicians’ (RCP) recent View on digital and AI report are clear: the NHS risks moving into the AI era without the foundations required to implement it effectively or safely.
While enthusiasm for AI is high, physicians' perceptions of institutional readiness are markedly low. A June 2025 snapshot survey of RCP members and fellows found that while seven in ten physicians support widespread implementation of AI in the NHS, almost the same proportion believe the health system lacks the digital infrastructure required to deploy it effectively.
This disconnect between physician demand and digital capability should matter deeply to health leaders. AI adoption is not just a technology challenge: it is also a leadership, governance and regulatory challenge. Much like digital systems before it, the mere existence of AI does not by itself deliver improved patient care or streamlined workflows.
A snapshot of workforce capability and adoption of AI
If AI is to deliver meaningful change, the workforce must be confident and capable in using it. Clinicians need to understand not only how to use these technologies but also their limitations and risks. At present, the evidence suggests we are far from that. The RCP survey found that 79% of physicians say they need training in AI tools, yet 66% report having no access to such support.
At the same time, many doctors are already using personal AI tools in practice. The same survey revealed that 69% of 305 physicians reported using personal AI tools to ask clinical questions. This potentially exposes clinicians, organisations and patients to increasing risks around data security, clinical safety and accountability. Without NHS-approved tools or NHS guidance, clinicians are turning to consumer AI platforms to fill a gap. But widely available AI tools are not designed or regulated for use in healthcare, and using them in this way may be unsafe. There must be clear guidance about acceptable and appropriate use of AI in the NHS.
Leadership is key to successful AI adoption
Leadership capacity is also key. Digital and AI transformation in healthcare requires engagement and ownership from clinical leaders who understand both clinical practice and digital systems. While the potential for technology and AI to transform clinical pathways is accelerating, the pipeline of clinicians with the skills to lead this change is not keeping pace. It is vital that the health and care sector invests in professional development to drive better understanding of digital systems, data and AI; without this investment, we will not realise the potential benefits. To deliver the digital clinical leaders of the future, the government must meaningfully engage and work in partnership with doctors and medical royal colleges on its reforms to medical curricula.
The fundamental challenge facing those clinical leaders is that many digital systems in the NHS are difficult to use, have limited interoperability and increase the burden on clinicians. As a consequence, datasets that might support the introduction of AI are incomplete and prone to bias. One of the most striking findings from the RCP survey is that 68% of UK physicians somewhat or strongly disagree that the NHS has the right digital infrastructure to support AI that will make a difference. It also found that 70% of physicians identified the inability of AI tools to integrate with existing systems, particularly the electronic patient record (EPR), as the main barrier to implementation, while 65% cited the poor interoperability of systems.
Health leaders must get the digital basics right before pursuing widespread AI adoption. Some 44% of physicians report that poorly functioning IT equipment negatively affects their wellbeing at work. Without first fixing these foundations, AI risks becoming an expensive and ineffective layer of new technology that fails to deliver a meaningful return.
Before starting with AI, we must fix our EPRs
The EPR is now the core system that supports conversations between clinicians and patients. The functionality of EPRs is poorly standardised, even where different trusts use the same supplier. We have heard of examples where, even within a single EPR, the timeline of test results runs from left to right in some parts of the system and right to left in others, so that it is not obvious where to look for the newest results. Workflows may differ widely between trusts, and few systems are able to display relevant information from other EPRs. This leads to administrative inefficiency, increased cognitive load for staff and greater potential for error. It is perhaps not surprising that the introduction of an EPR into acute hospitals is associated with a significant fall in productivity.
There is clear evidence that poor usability increases risks to patients. As outlined in the HSSIB report on patient safety issues with EPR systems, many systems are not kept up to date in line with national standards, EPR system training is often perceived by staff to be limited, and variation in governance processes means that potential risks to patient safety are not always identified. Responses to safety incidents in the EPR are prone to creating safety clutter: building alerts into the system can add risk if they appear at the wrong point in the workflow, or if they fire so often that staff are 'trained' to ignore them. There are very few examples of good digital dashboards that flag risk at unit, ward, hospital, neighbourhood or population level.
These weak foundations limit how quickly it is possible to develop AI tools that benefit staff and patients. Health leaders should not, in pursuit of quick fixes, allow AI to be used to automate broken processes. We need better standardisation of EPRs so that they work well across the NHS, and standards for how data is recorded, to create complete, accurate datasets. The NHS should set an EPR model content specification standard that EPR providers must meet to ensure that their products satisfy NHS requirements.
Good implementation is key to successful AI adoption
Perhaps the most urgent challenge facing individual clinicians and NHS leaders is the absence of consistent regulatory frameworks for AI use.
The RCP survey found that 36% of physicians see a lack of regulation as a barrier to AI adoption. The emphasis on a 'human in the loop' to ensure AI safety ignores the risk that this instead becomes a 'fall guy in the loop', where the physician acts only as a liability sink for the AI decision maker. We need to consider how humans interact with computers and design systems that reduce the chance of mistakes. Some 73% of physicians said that their biggest concern about AI in clinical practice was the risk of error, highlighting the need for both better design and better evaluation of real-world implementation.
Ambient voice technology (AVT) offers a real opportunity to reduce the administrative burden of documenting and communicating patient care. Yet these tools are known to hallucinate, occasionally introducing factually incorrect information into notes and letters. 'Occasionally' is where the potential issue lies: if the system is correct 95% of the time, how do we ensure that clinicians still check their notes and spot errors? Airport security has tackled a similar problem with programmes that insert fictional threat images into x-ray scans of real items, to ensure screening staff remain alert. Might introducing known errors in the same way maintain physician vigilance and help create an audit of effectiveness?
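To make the analogy concrete, here is a minimal, purely illustrative sketch of how such a 'seeded error' vigilance audit might work. Nothing here is a real NHS or AVT system: the function names, the error catalogue and the seeding rate are all invented for illustration, and in practice any seeded content would need clinical curation, logging and removal before a note is finalised.

```python
import random

# Hypothetical catalogue of deliberately false statements that could be
# slipped into a draft note before clinician review. In a real system these
# would be clinically curated, patient-specific and always logged.
SEEDED_ERRORS = [
    "Patient is allergic to penicillin.",
    "Blood pressure recorded as 210/130 mmHg.",
]

def maybe_seed_error(draft_note, rate=0.05, rng=random):
    """With probability `rate`, append a known error to the draft note.

    Returns (note, seeded_error) where seeded_error is None if nothing
    was inserted, so the audit step knows what to look for.
    """
    if rng.random() < rate:
        err = rng.choice(SEEDED_ERRORS)
        return draft_note + " " + err, err
    return draft_note, None

def audit_review(final_note, seeded_error):
    """Check whether the clinician caught the seeded error.

    Returns True if the error was removed before sign-off, False if it
    survived into the final note, and None if no error was seeded.
    """
    if seeded_error is None:
        return None
    return seeded_error not in final_note

# Example: seed an error (rate=1.0 forces seeding for demonstration),
# then audit a final note from which the clinician has removed it.
draft = "Patient reports mild chest pain on exertion."
note_with_error, err = maybe_seed_error(draft, rate=1.0)
caught = audit_review(draft, err)  # clinician's final note omits the error
```

Aggregating `audit_review` results over many notes would give an ongoing measure of how reliably errors are being caught, which is the 'audit of effectiveness' the analogy suggests.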
So, what next for NHS board leaders?
The recent NHS England AVT Supplier Registry is a welcome step towards supporting organisations in better digital procurement, but more is needed. We need at least a central repository of NHS-approved digital and AI algorithms and applications, alongside clear guidance from the NHS on the safe use of a wide range of AI tools. Boards need systems of assurance that their digital teams are procuring and implementing digital systems and new AI tools that are safe, effective and deliver the expected benefits.
As AI tools evolve at extraordinary speed, a firm emphasis on safety and regulatory compliance will help to alleviate the worries of those physicians most concerned about the risk of error. It will also provide the assurance NHS boards need to empower teams to adopt AI tools safely, delivering improved outcomes for patients and better working lives for clinicians.
The central insight emerging from the RCP report is that AI adoption is not primarily about algorithms or software. It is about co-designing workflows with clinicians and patients that address significant problems in existing processes.
AI can improve clinical pathways, enhance diagnostic accuracy and reduce administrative burden. But without strong digital foundations and clear regulation and governance, this will just add to the new and poorly understood risks created by the introduction of existing digital tools.
For boards, the responsibility is clear. They must develop a workforce capable of implementing AI safely. They must invest in digital infrastructure that supports interoperability and high-quality data. And they must be supported by national regulatory and governance frameworks that enable innovation while protecting patient safety.
