
Aug 15 2025
Artificial Intelligence

6 AI Security Guidelines for Healthcare Organizations

Artificial intelligence tools can transform healthcare workflows, but security must always be top of mind. Follow these guidelines to ensure the secure and successful implementation of AI.

Amid clinician shortages and the rising cost of care, artificial intelligence tools are an attractive option for healthcare organizations to assist physicians, nurses, IT teams and support staff in their daily workflows.

However, with patient data at risk, AI tools must be implemented safely and securely to protect patient information and, in the worst cases, patient outcomes.

“AI security isn’t just building a better firewall or using better passwords. It’s about understanding the risks, opportunities and limitations that come along with the use of AI, and that affects every stakeholder,” says Clara Lin Hawking, cofounder and executive director at Kompass Education.

Here are some suggestions to help healthcare organizations use AI securely.


1. Deploy a Private Instance of an AI Tool

To secure AI in hospitals, Pete Johnson, CDW’s artificial intelligence field CTO, recommends using an in-house solution that lets clinicians and other staff experiment with an AI chat app without exposing data in the public sphere. Organizations can also work with a public model that has the right privacy protections in place.

“All of the Big Three hyperscalers — Amazon, Microsoft and Google — have in their data privacy agreements that they will not use any of your prompt content to retrain models,” Johnson says. “In that way, you’re protected even if you don’t have that AI program on-premises. If you use it in the cloud, the data privacy agreement guarantees that they won’t use your data to retrain models.”
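For IT teams exploring what a private instance looks like in practice, the sketch below illustrates one possible pattern in Python: sending a request to a model hosted inside the organization's own network. It assumes a privately hosted model that exposes an OpenAI-style chat endpoint; the URL, model name and token are placeholders for illustration only, not references to any specific CDW or vendor product.

    import requests  # third-party HTTP client; install with "pip install requests"

    # Hypothetical endpoint for a privately hosted, OpenAI-compatible chat model.
    # Replace the host, model name and token with your organization's own values.
    PRIVATE_ENDPOINT = "https://ai.internal.example-hospital.org/v1/chat/completions"

    def summarize_note(note_text: str) -> str:
        """Send a clinical note to the in-house model; the data stays on the private network."""
        response = requests.post(
            PRIVATE_ENDPOINT,
            headers={"Authorization": "Bearer <internal-service-token>"},
            json={
                "model": "internal-clinical-llm",  # placeholder model name
                "messages": [
                    {"role": "system", "content": "Summarize this note for a shift handoff."},
                    {"role": "user", "content": note_text},
                ],
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

Because the endpoint sits behind the organization's firewall, clinicians can experiment with chat-style workflows without prompt content ever leaving the environment.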

2. Establish an Action Plan in Case of an Attack

An action plan should detail what to do if a data breach occurs or if a mass phishing email circulates in a financial fraud attempt.

“It’s incredibly important for IT professionals to understand exactly what those new attack surfaces are and what they look like, and then start building a framework for addressing that,” Hawking says. “That includes everything — the hardware, software and actual IT architecture — but also policies and regulations in place to address these issues.”

EXPLORE: Address trust and privacy concerns to support full-scale AI adoption in healthcare.

3. Take Small Steps Toward AI Implementation

As healthcare organizations experiment with AI, they should start small. For example, they can use ambient listening and intelligent documentation to reduce the burden on physicians and clinicians.

“Don’t take your entire data estate and make it available to some AI bot. Instead, be very prescriptive about what problems you are trying to solve,” Johnson says.

4. Use Organization Accounts With AI Tools

Hawking warns against using personal email accounts with AI tools, which can create entry points for data sharing and allow information to be used to train models without consent.

5. Vet AI Tools No Matter Where They’re Used

Hawking also recommends that organizations create an oversight team to vet AI tools. The team could include stakeholders such as the IT department, clinicians and even patient advocates.

“It doesn’t mean lock down all AI, but understand exactly what’s being used and why it’s being used,” Hawking says.

UP NEXT: AI data governance strategies that will set you up for success.

6. Conduct a Complete Risk Assessment and Full Audit

A thorough risk assessment allows healthcare organizations to identify regulatory compliance risks and develop policies and procedures for the use of generative AI.

“It’s really important, as part of an AI audit, to get a proper overview of how all of those things take place,” Hawking says. “That is the starting point of good governance.”
