“In hospitals, you are talking about a huge amount of storage. They’re doing so much more with video today and using AI to understand the video stream,” says Haney. “Did the patient get up out of bed? Are they a fall risk? Should I rush somebody over there? These types of video solutions are throughout the hospital.”
To accommodate this fast-moving technology, healthcare organizations must plan for its use and growth. One of the most important considerations is the data users feed into these tools and the governance of that data.
The Risk of Sharing Sensitive Data with Public AI Platforms
As clinicians and administrators embrace generative AI platforms, IT professionals must find ways to ensure that sensitive data isn’t being shared publicly. Users need ways to explore large language models without disclosing any of their data.
“First, we do a data governance check. What kind of data are you going to be using? What are the controls around that data? Then we can design a solution that allows you to keep your data in-house and not expose any of it,” says Haney.
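The kind of check Haney describes can start simply: screening prompts for sensitive identifiers before they reach any model. Here is a minimal sketch in Python; the patterns and the `screen_prompt` helper are illustrative assumptions, not a production PHI detector or any specific CDW tooling:

```python
import re

# Illustrative patterns only; a real governance check would use the
# organization's own data classification rules and PHI detection tooling.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize discharge notes for MRN: 00482913, DOB 03/14/1962."
findings = screen_prompt(prompt)
if findings:
    raise ValueError(f"Prompt blocked; possible PHI detected: {findings}")
```

A check like this would sit in front of the chat interface, so flagged prompts are stopped before any data leaves the user's session.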
Data governance is key for organizations looking to prepare their infrastructure and users for AI and LLMs.
“We have a workshop called Mastering Operational AI Transformation, or MOAT,” Haney says. “You’re drawing a circle around the data that we don’t want to get out. We want it to be internally useful, but we don’t want it to get out.”
To ensure data security, partners such as CDW can help organizations set up or build cloud solutions that don’t rely on public LLMs. This gives them the benefits of generative AI without the risk of exposing patient data.
“We can set up your cloud in such a way that we’re able to use a prompt to make a copy of an LLM,” Haney explains. “We build private enclaves containing a chat resource to an LLM that people can use without a public LLM learning the data they’re putting in.”
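In practice, an enclave like the one Haney describes typically exposes a chat endpoint for a model hosted entirely inside the organization's network, often through an OpenAI-compatible serving layer such as vLLM. A minimal sketch of how a client might call one; the URL, model name, and function are assumptions for illustration, not CDW's implementation:

```python
import requests

# Hypothetical internal endpoint: the model runs inside the private
# enclave, so prompts and responses never leave the organization's network.
ENCLAVE_URL = "https://llm.internal.example-hospital.org/v1/chat/completions"

def ask_private_llm(prompt: str) -> str:
    """Send a chat request to the privately hosted LLM and return its reply."""
    response = requests.post(
        ENCLAVE_URL,
        json={
            "model": "private-clinical-llm",  # assumed internal deployment name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    # Standard OpenAI-compatible response shape: first choice, message content.
    return response.json()["choices"][0]["message"]["content"]

print(ask_private_llm("Which bed-exit alerts fired on Unit 4 overnight?"))
```

Because the endpoint resolves only inside the private enclave, users get the familiar chat experience while their prompts stay within the circle the governance workshop drew around the data.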