As researchers and healthcare organizations continue to explore how artificial intelligence might support diagnostic efforts and save doctors time by automating some tasks, radiology has emerged as a field where the technology shows great promise.
AI and its subsets, machine learning and deep learning, are “being used in radiology in a number of ways, such as computer-aided detection for cancer, auto-segmentation of organs in 3D postprocessing, natural language processing to facilitate critical results reporting, consultation of best guidelines for recommendations, and quantification and kinetics in postprocessing,” according to Radiology Today.
This combination of AI-assisted data paired with human intelligence and insight is promising for the field.
“Adding information acquired from AI algorithms to our reporting and workflow can significantly improve patient care,” Dr. Bibb Allen, chief medical officer of the American College of Radiology’s Data Science Institute, tells the publication. “AI can find patterns in data that humans cannot see. This applies to image data, nonimage data such as predicting patient no-shows, or improving workflow.”
AI can also help train radiologists. For example, a team of researchers from the Massachusetts General Hospital and Brigham and Women’s Hospital Center for Clinical Data Science (CCDS) and the Rochester, Minn.-based Mayo Clinic is using generative adversarial networks (GANs) to produce training data for neural networks. Using NVIDIA’s AI platform and public data sets, they’ve developed a deep-learning model that generates accurate, reliable synthetic images of abnormal brain MRIs, which can then be used to train AI systems.
“A model can essentially compress the information you have in the data to start with,” CCDS Director Adam McCarthy tells HealthTech. “The more data you have and the better quality you have, the better your results and model will be.”
But researchers have also discovered a more nefarious use for the technology.
The Concerning Security Vulnerability in Imaging Systems
According to a new study by cybersecurity researchers at Ben-Gurion University of the Negev in Israel, cybercriminals can alter 3D medical scans to remove existing medical findings or add false ones using deep learning. It’s possible, they said, that an attacker could use malware to modify 3D medical imagery using deep learning in order to commit insurance fraud, falsify research evidence or even murder someone by hiding cancer that would otherwise be treated.
Sound unbelievable? In the researchers’ covert penetration test, conducted on an active hospital network, both radiologists and AI software proved highly susceptible to CT-GAN’s image-tampering attacks. The attack had an average success rate of 99.2 percent for cancer injection and 95.8 percent for cancer removal. The AI was fooled every time; radiologists fared only slightly better, and some of their correct calls may reflect ordinary diagnostic error, such as simply missing an injected nodule. Although knowledge of the attack can help mitigate some cases of cancer injection, the error rates and confidence scores suggest that the attack would go unreported in most cases, the researchers concluded.
How to Prevent and Identify Imaging Attacks
To guard against DICOM medical file tampering, administrators should secure data in motion by enabling TLS encryption between the hosts in their picture archiving and communication system (PACS) network, using properly issued certificates.
“This may seem trivial,” the researchers wrote in their report. “But after discovering this flaw … we turned to the PACS software provider for comment. The company, with over 2,000 installations worldwide, confirmed to us that their hospitals do not enable encryption in their PACS because ‘it is not common practice.’” And some PACS don’t support encryption at all.
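To illustrate what encrypting PACS traffic in motion involves, here is a minimal sketch of a TLS client context in Python, of the kind a DICOM toolkit could use when connecting to a PACS host. The function name and the `ca_cert_path` parameter are illustrative, not part of any PACS product’s API; certificate provisioning is assumed to happen out of band.

```python
import ssl


def make_pacs_tls_context(ca_cert_path=None):
    """Build a TLS context for traffic between PACS hosts.

    ca_cert_path (hypothetical parameter) points at the CA bundle
    that signed the peer's certificate; if None, the system's
    default trust store is used.
    """
    context = ssl.create_default_context(
        ssl.Purpose.SERVER_AUTH, cafile=ca_cert_path
    )
    # Refuse legacy protocol versions and unverified peers.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED
    return context
```

The key point is the last three settings: without certificate verification and a modern protocol floor, “encryption” between hosts offers little protection against the man-in-the-middle position the researchers exploited.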
Here’s how researchers recommend staying ahead of imaging system hacks:
- To secure data at rest, administrators should keep servers and anti-virus software on modality and radiologist workstations up to date, and limit the PACS server’s exposure to the internet.
- To detect tampering, enable the digital-signature field if your PACS software provider offers that feature. Administrators should then check that valid certificates are in use and that radiologists’ viewing applications actually verify the signature.
- To test image integrity, add a digital watermark, a hidden signal embedded in the image that tampering would corrupt, to flag any loss of integrity. Checking for photo response nonuniformity (PRNU), the scanner’s characteristic sensor-noise fingerprint, may be an easier method because it only needs to be implemented in the endpoint viewing application.
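The fragile-watermark idea can be shown with a minimal sketch: embed a digest of the image’s significant bits into its least-significant bits, so that altering the pixel data invalidates the embedded digest. The function names are hypothetical and the scheme is deliberately simplified (8-bit grayscale pixels, digest stored in the first 256 LSBs); real medical-image integrity mechanisms are more involved.

```python
import hashlib


def embed_fragile_watermark(pixels: bytes) -> bytes:
    """Embed a fragile watermark into 8-bit grayscale pixel data.

    Zeroes every pixel's least-significant bit, then writes the
    SHA-256 digest of the remaining upper bits into the first 256
    pixel LSBs. Requires at least 256 pixels.
    """
    # Digest covers only the upper 7 bits of each pixel.
    payload = bytes(p & 0xFE for p in pixels)
    digest = hashlib.sha256(payload).digest()
    marked = bytearray(payload)
    for i in range(256):  # spread the 256 digest bits across LSBs
        marked[i] |= (digest[i // 8] >> (7 - i % 8)) & 1
    return bytes(marked)


def verify_fragile_watermark(pixels: bytes) -> bool:
    """Return True if the embedded digest still matches the image."""
    payload = bytes(p & 0xFE for p in pixels)
    digest = hashlib.sha256(payload).digest()
    return all(
        (pixels[i] & 1) == ((digest[i // 8] >> (7 - i % 8)) & 1)
        for i in range(256)
    )
```

Any change to a pixel’s upper bits changes the recomputed digest, so the stored LSB pattern no longer matches and verification fails, which is exactly the “corrupted by tampering” property the researchers describe.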
As Offensive Security Tactics Evolve, So Must Defensive Tools
“Cybersecurity best practices” is almost an oxymoron, given how often and rapidly attackers shift their tactics in response to new security solutions. A generation of “polymorphic” malware, for example, designed to elude detection by constantly altering its own code, is now assaulting networks worldwide.
The problem is that healthcare is particularly vulnerable to attacks, in part because large machines, such as MRIs, often run on old operating systems that no longer receive updates and patches.
This became clear during the 2017 WannaCry ransomware attack, which targeted computers running outdated versions of the Microsoft Windows operating system, encrypting data and demanding ransom payments in bitcoin. Good defensive practices include updating software whenever possible, backing up data regularly and isolating older machines so they can’t infect other systems if they’re breached.
“What has changed is the diversity of evasive tactics that attackers employ and the frequency with which they use them,” Lenny Zeltser of the SANS Institute recently told HealthTech. “Our adversaries aren’t standing still.”
Neither, experts agree, can healthcare organizations’ security tactics.