A recent debate hosted by the Digital Health Networks examined attitudes towards the use of AI in healthcare and concluded that AI can be integrated safely into clinical use where proper guidelines are in place.
Speaking for the motion “that AI is not yet ready for widespread clinical use”, Dr Nisha Sharma, director of breast screening and clinical lead for breast imaging at Leeds Teaching Hospitals NHS Trust, and Dr Marcus Baw, locum GP and “General Hacktitioner”, said that while they believed in the use of AI in clinical workspaces, they were concerned that parts of the health system were unprepared for it, and that some aspects of the technology were not yet sufficiently developed to be reliable.
“I’m active in research using AI, and when I think of artificial intelligence, I think you’ve got algorithms that are low-risk and high-risk,” Sharma said. “And this [motion] is really focused on the high-risk algorithms, which are making clinical decisions.”
Key challenges to using clinical AI
Sharma listed a number of challenges that she said are complicating the integration of AI into the health service. The lack of interoperability between different organisations is one obvious factor, she said.
“We’re all working with systems that don’t communicate or that are slow, so I feel that if we’re going to start using AI within our clinical setting, there needs to be a lot of investment to make sure we’re on the most up-to-date software and we’ve got software that can cope with the new algorithms,” she said.
A second challenge is a lack of expertise within NHS organisations, Sharma said, adding: “Yes, we have data analysts, we have people working in informatics, but do we have experts in understanding how these AI algorithms work?” Part and parcel of addressing this problem, she said, is applying quality assurance.
The final component of assessing readiness, Sharma said, is an understanding of how the algorithms themselves will be used, and whether they will remain fixed or adapt as more data is acquired.
Noting that the health service will also have to educate the general public about how AI is likely to be used in healthcare, Sharma concluded: “I recognise artificial intelligence is important, but we’re not ready for widespread use because we do not have the infrastructure, the monitoring and the expertise on site.”
Baw focused on the lack of overall guidelines for the use of AI in clinical settings, observing that with experts currently petitioning governments around the world for better regulation, “it’s very difficult to openly support embedding AI tools into the functioning of a system that’s so important to our well-being as the NHS”.
More time and research are necessary before the health service can understand the potential risks associated with clinical uses of AI, Baw said, noting that clinicians can be held responsible for their decisions by the General Medical Council (GMC) as well as by civil and criminal courts. “We’re the ones who will eventually carry the can for decisions made using AI,” he added.
Baw also said he was concerned about the future use of proprietary data sets in AI algorithms.
“One of the things that really concerns me about AI is that while the actual machine-learning algorithms can be open source, unless the training data is also open source, and the final model, once it has been trained, is also open source, then we risk entire swathes of medical practice becoming proprietary,” he added.
Proprietary AI could also expose future clinicians to legal risk if they criticise a treatment modality, he said.
An argument that AI is already entrenched
Opposing the motion, Dr Tracy O’Regan, professional officer for clinical imaging and research at the Society and College of Radiographers, noted that her profession had long set standards around new technology, including AI, requiring everyone working within it to understand what deep learning is and how it works.
Different parts of the health and social care system have different needs and functions, she said, and “not all need AI and tech.”
In more traditionally technological specialities such as radiography, she said, AI has been in place for a number of years. She added: “Many of us in radiology and clinical imaging and pathology, for example, are familiar with this because our data is digital.”
O’Regan also reflected on the tendency of discussions of AI to cast it as an “existential threat” when in fact, she said, “it is an existential opportunity for the NHS.”
“I’m arguing that AI is not yet ready for widespread clinical use, but it’s ready for clinical use in areas of clinical need,” she concluded.
Haris Shuaib, consultant physicist and head of clinical scientific computing at Guy’s and St Thomas’ NHS Foundation Trust and AI transformation lead for the London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare, echoed the view that AI is already embedded in parts of the NHS and forms part of routine care.
“There are already about 600 FDA-approved algorithms on the market and some of them have been available for over a decade,” he said. “There are approximately 300 CE-marked AI software products that are available on the market for clinical use. This is just doing normal clinical business with the help of software and data-driven technologies.”
One of the initial AI successes, he said, has been in stroke care, with 85% of stroke units in the country using AI as part of routine care.
Shuaib acknowledged that the use of AI algorithms can be controversial, such as when AI has disagreed with the opinions of doctors, as well as cases where AI itself has identified poor clinical performance.
He also dismissed the argument that AI is an “ivory tower activity”, concentrated only in academic teaching hospitals.
“In fact, a lot of our district generals have led the charge in adopting AI,” he said. Shuaib also sought to upend the notion that AI was insufficiently developed for the health system.
“AI definitely is ready, but whether the NHS is ready to adopt AI is a different challenge,” he concluded, before telling the debate audience that the NHS was “ahead of the pack compared to the rest of the world when it comes to delivering training to healthcare professionals for AI.”
Ultimately, the debate speeches convinced the online audience to reverse its opinion of the technology in the final vote. While attendees originally supported the motion 54% to 46%, those numbers were exactly reversed by the end of the debate, with 54% voting against the motion.
This post originally appeared on TechToday.