It’s a no-brainer that technology can be instrumental in enhancing the patient experience, improving organizational efficiencies and, most importantly, saving lives. But ensuring that the data fueling all of those things is stored, shared and accessed securely and in compliance with privacy laws is an ongoing challenge for healthcare organizations.
A recent Ponemon report found that a whopping 75 percent of providers said their IT security teams are understaffed and that they struggle to attract qualified candidates. On top of that, 90 percent of healthcare organizations have suffered a breach, according to Ponemon.
At the same time, cyberthreats are increasing in frequency and sophistication, in part because stolen medical data is even more valuable on the black market than credit card numbers.
“The numbers are staggering,” says Anne Genge, CEO and co-founder of data and security compliance firm Alexio Corporation and a Certified Information Privacy Professional.
“Scheduling patients, managing staff, keeping up with the ever-evolving technologies and techniques — there are plenty of pressing things to worry about as a healthcare provider,” she says. “That means cybersecurity is sometimes low down on the daily totem pole of never-ending tasks. Hackers know this and they’re taking full advantage.”
Artificial intelligence, with its ability to automate processes, is emerging as one tool that can help overburdened IT departments. AI can analyze, anticipate, defend against, identify and respond to cyberattacks, viruses and other threats. It can even perform rote but important security tasks such as reminding users to reset their passwords.
Skip Rollins, CIO of Joplin, Missouri–based Freeman Health System, says AI automation shows a lot of promise. “Learning software is great, because you solve a problem one time and it remembers how to solve it and it moves on,” he says. Freeman uses AI to monitor network activity and automatically flag unusual behavior, such as a large file download on the computer of a user who doesn’t typically download large files.
“You can look across the network for anomalies and how traffic is moving, which gives you better eyes on things,” he says.
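The kind of per-user anomaly flagging described above can be sketched in a few lines. This is a minimal illustration, not Freeman's actual tooling: it assumes a history of a user's download sizes and flags a new download whose size is a statistical outlier. The function name, the threshold, and the sample data are all hypothetical.

```python
# Minimal sketch of per-user anomaly flagging: mark a download as unusual
# when it falls far outside that user's typical download sizes.
# Names, threshold, and data are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(history_mb, new_download_mb, z_threshold=3.0):
    """Return True if new_download_mb is a statistical outlier
    relative to the user's past download sizes (in megabytes)."""
    if len(history_mb) < 2:
        return False  # not enough history to judge
    mu = mean(history_mb)
    sigma = stdev(history_mb)
    if sigma == 0:
        return new_download_mb != mu
    return abs(new_download_mb - mu) / sigma > z_threshold

# A user who typically downloads small files suddenly pulls 2 GB:
typical = [4.2, 3.8, 5.1, 4.9, 4.0, 6.3]
print(is_anomalous(typical, 2048))  # large download flagged
print(is_anomalous(typical, 5.0))   # routine download passes
```

Production systems use far richer signals (time of day, destination, device posture) and learned models rather than a single z-score, but the principle matches what Semple describes below: monitor for anomalous behavior, then act on it automatically.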
Install AI as a Bodyguard for IT Security
The IT world is split on whether to apply the speed and scale of AI to organizational security. On one hand, cyberattacks are automated, so defenders should be fighting fire with fire, Genge says.
“In the realm of cybersecurity, IT capabilities can be significantly augmented through intelligent automation,” says Nick Semple, a partner at PA Consulting who specializes in healthcare. Intelligent automation is a blended use of rule-based automation and sophisticated machine learning techniques and algorithms that can help detect and deter bad actors. “Though there are many ways to use AI to help prevent cyberattacks, the high-level principle is the same: The system monitors for anomalous behavior (intelligence), and immediately blocks it (automation).”
Organizations “are beginning to look at modern device management for their Windows PCs and laptops that can provide secure Azure Active Directory service along with real-time anti-virus protection and security capability,” Semple says.
“That reduces the need for multiple security products on laptops; ensures settings, configurations and signature files are up to date at logon; and enables immediate or rapid deployment of patches automatically, rather than waiting for internal staff to review and deploy them. That, together with user access controls and multifactor authentication, is critical.”
Another benefit of AI-fueled automation, he notes, is that it frees up analysts to spend more time on “higher-value activities” such as serving and educating end users and using data analytics to work on projects that can improve efficiency, workflow, and clinical quality.
But some say personal health data is too valuable to risk on AI.
“It’s an interesting idea in concept, but putting it into practice is a little bit dicey,” says Ken Dort, a partner in the intellectual property group at Drinker Biddle and chair of the law firm’s data security and technology committees. “It’s comparable to autonomous cars. Yeah, the idea works really well, but do you want to be the guy in the car on the highway going 70 miles per hour with no one in the front seat?”
Social engineering, by definition, is designed to fool humans, Dort says. But that doesn’t mean AI is immune.
“What is to keep the AI piece from itself being compromised in some kind of quasi-social-engineering effect? Given the squishiness of cybersecurity and how it morphs and changes all the time, I’m not sure AI would be up to that challenge,” he says.
How to Use a Mixed-AI Approach to Security
At Freeman, Rollins is striking a balance by using a blend of tools, some powered by AI, some not.
“There’s generally a 50/50 split with folks who have done what we do,” he says. Freeman takes a best-of-breed, targeted strategy: using a tool built to do one thing and do it well. Others think it’s better to get more tools from a single vendor because they’re easier to maintain and sustain, he adds: the single-pane-of-glass approach.
Using multiple tools is more labor-intensive and often more expensive, he says. But his focus is on the efficacy of those solutions rather than convenience. “We’re attracted to vendors who do things differently,” he says. “Our whole strategy is built around finding tools that address a specific problem.”
One such tool he uses runs on desktops and creates a decoy of the machine’s real memory. When a hacker or other threat searches that decoy memory for programs to attack, the tool traps and isolates the threat.
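The deception idea behind that tool can be illustrated at a much higher level of abstraction. The actual product works at the memory level; this sketch, with entirely hypothetical names, shows only the core trap logic: expose decoy entries alongside real ones, and treat any touch of a decoy as a hostile probe to be isolated.

```python
# Illustrative sketch of deception-based trapping: legitimate software only
# asks for real resources, so anything that touches a decoy reveals itself.
# All names here are hypothetical; this is not the vendor's implementation.

class DecoyTrap:
    def __init__(self, real_programs, decoy_programs):
        self.real = set(real_programs)
        self.decoys = set(decoy_programs)
        self.quarantined = []  # callers caught probing decoys

    def lookup(self, caller, program):
        if program in self.decoys:
            self.quarantined.append(caller)  # trap and isolate the caller
            return None
        return program if program in self.real else None

trap = DecoyTrap(real_programs={"ehr.exe"}, decoy_programs={"fake_ehr.exe"})
trap.lookup("malware.exe", "fake_ehr.exe")  # probe hits a decoy
print(trap.quarantined)  # the probing process is now flagged for isolation
```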
“I believe that we should aggressively pursue new technology that’s out there. We tend to go to a safe place and live there sometimes,” Rollins says. “I’m very aggressive about evolving vendors’ tools to be more in line with what’s trending and figuring out how it helps you solve problems.”
The stakes are high. Organizations must protect personal health and other data, but must also protect reputations. “My number one goal is to keep my name out of the paper,” Rollins says. “If I can do that, then everything will be OK.”