AI Equals Human Experts in Medical Diagnosis, Study Finds
Artificial intelligence is on par with human experts when it comes to making image-based medical diagnoses, a study has found.
The potential of artificial intelligence in healthcare has generated excitement, with advocates saying it will ease pressure on resources, free up time for doctor-patient interactions and even aid in the development of tailored treatments. Last month the government announced £250 million in funding for a new NHS artificial intelligence lab.
However, experts have warned that the latest findings are based on a small number of studies, as the field is littered with shoddy research.
A burgeoning application is the use of AI in the interpretation of medical images – a field that relies on deep learning, a sophisticated form of machine learning in which a series of labelled images are fed into algorithms that select features within them and learn to classify similar images. This approach has shown promise in diagnosing diseases ranging from cancers to eye conditions.
However, questions remain about the extent to which these deep learning systems measure up to human skills. Now, researchers say they’ve conducted the first comprehensive review of published studies on the matter and found that humans and machines are on equal footing.
Professor Alastair Denniston, from University Hospitals Birmingham NHS Foundation Trust and co-author of the study, said the results were encouraging but that the study was a reality check for some of the hype around AI.
Dr Xiaoxuan Liu, the lead author of the study, from the same NHS trust, agreed. “There are a lot of headlines about AI outperforming humans, but our message is that it can at best be equivalent,” she said.
Writing in the Lancet Digital Health, Denniston, Liu and their colleagues explained how they focused on research articles published since 2012 – a pivotal year for deep learning.
An initial search found more than 20,000 relevant studies. However, only 14 studies – all based on human diseases – reported good quality data, tested the deep learning system with images from a separate dataset from the one used to train it, and showed the same images to human experts.
The team pooled the most promising results from each of the 14 studies to reveal that deep learning systems correctly detected a disease state 87% of the time – compared with 86% for medical professionals – and correctly ruled out disease 93% of the time, compared with 91% for human experts.
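The two pooled figures are, in effect, sensitivity (how often disease is correctly detected) and specificity (how often its absence is correctly confirmed). A minimal sketch of how these are computed from a binary confusion matrix, using illustrative counts that are not taken from the study:

```python
# Sensitivity and specificity from a binary confusion matrix.
# The counts below are illustrative only, chosen to mirror the
# reported 87% detection and 93% rule-out rates.
tp = 87   # diseased images correctly flagged (true positives)
fn = 13   # diseased images missed (false negatives)
tn = 93   # healthy images correctly cleared (true negatives)
fp = 7    # healthy images wrongly flagged (false positives)

sensitivity = tp / (tp + fn)  # share of disease cases detected
specificity = tn / (tn + fp)  # share of healthy cases cleared

print(f"sensitivity = {sensitivity:.0%}")  # prints "sensitivity = 87%"
print(f"specificity = {specificity:.0%}")  # prints "specificity = 93%"
```

A high sensitivity matters most when missing a disease is costly; a high specificity matters when false alarms trigger unnecessary follow-up tests.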
However, healthcare professionals in these scenarios were not given the additional patient information they would have in the real world to guide their diagnosis.
Professor David Spiegelhalter, chair of the Winton Centre for Risk and Evidence Communication at the University of Cambridge, said the field was awash with poor research.
“This excellent review demonstrates that the massive hype about AI in medicine obscures the dismal quality of almost all evaluation studies,” he said. “Deep learning can be a powerful and impressive technique, but clinicians and commissioners should ask themselves the crucial question: what does it actually add to clinical practice?”
However, Denniston remained optimistic about the potential of AI in healthcare, saying such deep learning systems could serve as a diagnostic tool and help tackle the backlog of scans and images. Liu added that they could also prove useful in places that lack experts to interpret the images.
Liu said it would be important to use deep learning systems in clinical trials to assess whether patient outcomes improve compared to current practices.
Dr Raj Jena, an oncologist at Addenbrooke’s Hospital in Cambridge who was not involved in the study, said deep learning systems would be important in the future, but stressed they needed robust real-world testing. He also said it was important to understand why such systems sometimes score poorly.
“If you’re a deep learning algorithm, when you fail you can often fail in very unpredictable and spectacular ways,” he said.