Jan 10 2018

Deep Learning Revamps How Radiologists Diagnose Diseases

Artificial intelligence is helping healthcare organizations and researchers blaze a trail beyond traditional radiology methods.

As a subset of artificial intelligence, deep learning is entering the radiology scene with the potential to solve large data challenges and automate many smaller tasks. That is the goal of several research organizations hoping to move the technology out of its early days, when most healthcare organizations are still not equipped to use it, and into routine practice, where it can help diagnose diseases more quickly and accurately.

At the Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science (CCDS) in Boston, for example, researchers are using an NVIDIA integrated system to train deep neural networks to quantify biological tissue more accurately than a human.
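
The article doesn't detail the CCDS pipeline, but quantifying tissue typically means turning a network's segmentation output into a measurement. A minimal sketch of that final step, assuming a binary segmentation mask and voxel spacing that are invented here for illustration:

```python
import numpy as np

# Hypothetical output of a segmentation network: a binary mask over a
# 3D scan volume, where 1 marks voxels the model classified as the
# tissue of interest. Random data stands in for a real prediction.
rng = np.random.default_rng(0)
mask = (rng.random((64, 256, 256)) > 0.97).astype(np.uint8)

# Assumed voxel spacing in millimeters (slice thickness, row, column);
# real values would come from the scan's DICOM headers.
spacing_mm = (3.0, 0.7, 0.7)

voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
tissue_volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

print(f"Segmented tissue volume: {tissue_volume_ml:.1f} mL")
```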

“If implemented properly, deep learning and related technology could prove a boon to the profession rather than a threat, automating simpler, more routine tasks and allowing radiologists to focus their time and energy on more complex, ambiguous and high-priority aspects of the job,” HealthTech reports.

The center isn’t alone: more researchers than ever are training supercomputers to identify abnormal scans more accurately and easily, and to automate tasks such as prioritizing patients.


Researchers Take Deep Learning to the Next Level for Radiology

The Ohio State University Wexner Medical Center is one such organization. The research lab at OSU Wexner uses three supercomputers running deep learning frameworks to help prioritize imaging studies and to train computers to home in on abnormal images, Luciano Prevedello, division chief of medical imaging informatics at OSU Wexner, tells Health Data Management.

“One of the problems is that 40 percent of inpatient studies are (ordered with high priority), so how do you sort them and know which ones should really be done first?” Prevedello tells the site.

Researchers hope to train the supercomputers to recognize abnormal images by feeding them scans that humans have already studied. Already, the computer is 91 percent accurate in identifying abnormal scans and 81 percent accurate in detecting stroke cases. Eventually, Prevedello hopes the computers will be able to use what they’ve been “taught” to determine which cases are the highest priority for clinicians.
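
The piece doesn't name the framework or architecture OSU Wexner uses, but supervised training on previously read scans follows a standard pattern. A minimal PyTorch sketch of binary abnormal-versus-normal training, with random tensors standing in for labeled images (all shapes and hyperparameters here are illustrative assumptions, not the lab's actual setup):

```python
import torch
from torch import nn

# Toy stand-in for a labeled dataset: 32 single-channel "scans" of
# 128x128 pixels, each tagged 1 (abnormal) or 0 (normal) by a human
# reader. A real pipeline would load DICOM images and radiologist labels.
images = torch.randn(32, 1, 128, 128)
labels = torch.randint(0, 2, (32, 1)).float()

# Small convolutional classifier; production systems would use a much
# deeper network trained on far more data.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),  # one logit: how likely the scan is abnormal
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```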

“The idea here is to make our scanners more intelligent,” Prevedello tells Health Data Management.
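
A classifier like the one sketched above yields a per-study abnormality score, and the prioritization Prevedello describes then reduces to ordering the worklist by that score. A hedged sketch, with study IDs and scores invented for illustration:

```python
# Hypothetical worklist: each entry pairs a study with the model's
# predicted probability that the scan is abnormal.
worklist = [
    {"study_id": "CT-1042", "p_abnormal": 0.12},
    {"study_id": "CT-1043", "p_abnormal": 0.94},
    {"study_id": "CT-1044", "p_abnormal": 0.57},
]

# Surface the most-likely-abnormal studies for radiologists first.
for study in sorted(worklist, key=lambda s: s["p_abnormal"], reverse=True):
    print(study["study_id"], study["p_abnormal"])
```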

Beyond OSU Wexner, Stanford University Medical Center and the Mayo Clinic are also putting supercomputers to work with the aim of eventually automating many simple radiology tasks.

The Mayo Clinic’s Radiology Informatics Lab, which focuses on sifting out image-derived biomarkers of disease, is developing a deep learning tool that can mine information from medical images for researchers.

Meanwhile, Stanford University’s High Performance Computing Center is inputting data from its electronic health records, genomics data, biobank and imaging studies to improve image labeling and cohort selection.

“We have software that does kind of a Google search of radiology reports,” Curtis Langlotz, professor of radiology and biomedical informatics at the Stanford University Medical Center, tells Health Data Management. “It’s not exact, but it’s a good way to get a sense of how many cases have a phrase in reports. When we automate this notion of labeling, it’s not perfect but we look at it as kind of a pipeline to enable further research.”
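
Langlotz doesn't detail the tool, but the approach he describes, counting reports that contain a phrase and treating the matches as rough labels, can be sketched with simple text search (the reports and search phrase below are invented for illustration):

```python
import re

# Invented example reports; a real system would query a large archive
# of radiology reports.
reports = [
    "Findings: acute infarct in the left MCA territory.",
    "No acute intracranial abnormality identified.",
    "Chronic infarct, unchanged from prior exam.",
]

phrase = re.compile(r"\bacute infarct\b", re.IGNORECASE)

# Weak labels: 1 if the phrase appears, else 0. Not exact, as Langlotz
# notes, but enough to estimate case counts and seed further research.
labels = [int(bool(phrase.search(r))) for r in reports]

print(f"{sum(labels)} of {len(reports)} reports match")  # -> 1 of 3
```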
