
Dec 19 2024
Artificial Intelligence

Providers Must Consider Security, Efficiency and Transparency When Using LLMs

Large language models combined with retrieval-augmented generation can create more accurate, transparent and trustworthy responses from generative AI tools in healthcare.

Artificial intelligence was recently named the most exciting emerging technology in the healthcare industry, and it’s easy to understand why. It has the potential to change healthcare dramatically, from improved administrative processes to drug discovery.

Some of the most profound impacts of AI in healthcare are already happening in clinicians’ offices. With medical knowledge now estimated to double every 73 days, it is very difficult for doctors to keep track of updated clinical practice guidelines and keep the latest recommendations top of mind. Generative AI and large language models can ease this burden. LLMs can ingest vast amounts of data from different sources and distill it into easily understandable insights that doctors can consider when treating patients at the point of care.

However, a recent McKinsey study revealed that many healthcare providers are worried about generative AI risks. For example, there are concerns about exposing patients’ data and creating possible HIPAA violations, and there are questions about the sources of LLM training data. The chance of exposure increases with public LLMs such as ChatGPT, which are not HIPAA-compliant and allow multiple customers to share the same resources.

In response, some healthcare organizations may consider training their own LLMs on patient data, but that’s expensive, time-consuming and requires specialized expertise. This route also puts them at risk of being locked into their models and restricted from trying new and more powerful LLMs. Finally, once a model is trained it can be difficult to discover the source of its recommendations, which could raise reliability questions.


Healthcare organizations need more than just private and secure AI enclaves. They need flexibility to leverage different LLMs, transparency to trace recommendations back to models’ sources, and complete control over those sources and data.

Supplementing LLMs with retrieval-augmented generation provides these benefits and more. This approach gives healthcare providers a cost-effective and secure way to retrieve accurate and specific information about a patient’s case in real time, in a way that’s easy to understand.

How Retrieval-Augmented Generation Works

RAG supplements a private LLM’s data with other knowledge sources chosen by the healthcare organization. These sources can include patient records, clinical guidelines or other types of data. RAG directs an LLM to pull information from these sources without exposing the data from the sources to the outside world.
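The retrieval step described above can be sketched in a few lines. This is a toy illustration, not a production pattern: a simple keyword-overlap scorer stands in for a real embedding model, and the document strings, function names and sample guideline text are all invented for the example.

```python
# Toy sketch of RAG's retrieve-then-augment flow. A real system would use
# vector embeddings and a vector database instead of keyword overlap.

def score(query, doc):
    """Crude relevance score: how many query words appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, knowledge_base, top_k=2):
    """Return the top_k most relevant documents from the organization's own sources."""
    ranked = sorted(knowledge_base, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

def build_prompt(query, context_docs):
    """Augment the clinician's question with retrieved context before it reaches the LLM.
    The curated data never leaves the organization; only this prompt is sent to the model."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Use only the context below to answer.\nContext:\n{context}\n\nQuestion: {query}"

# Illustrative in-house knowledge base (guideline text is a placeholder)
knowledge_base = [
    "Clinical guideline: benign paroxysmal positional vertigo is treated with the Epley maneuver.",
    "Billing policy: claims must include a valid ICD-10 code.",
    "Clinical guideline: sudden hearing loss warrants urgent steroid therapy.",
]

docs = retrieve("treatment for positional vertigo and dizziness", knowledge_base)
prompt = build_prompt("What is the treatment for positional vertigo?", docs)
```

The key design point is that the organization controls the knowledge base: swapping guidelines in or out changes the model’s answers without any retraining.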

This provides two benefits. First, the general knowledge contained within the LLM is supplemented with information specific to the needs of the healthcare provider or individual clinician; combining the LLM’s natural language processing with the organization’s curated information results in more complete and targeted patient care. Second, sensitive patient or provider information can remain on-premises or in a private cloud without being exposed to a public database, helping to mitigate risk.

Let’s say a patient experiences dizziness and cannot get an appointment with an ear, nose and throat specialist in a reasonable amount of time, so they visit their general practitioner looking for help. Normally, the GP might need to make an educated guess based on the patient’s symptoms. With a RAG-supplemented LLM, however, the GP can type a simple query into a laptop and receive personalized, up-to-date recommendations on treatment options based on the latest clinical practice guidelines issued by the American Academy of Otolaryngology-Head and Neck Surgery.

RAG does more than just help clinicians provide more informed care, though. It can trace answers back to information sources, so users can easily verify if the information it provides is accurate, and it can create audit trails if necessary. And it helps save costs by eliminating the need to retrain and fine-tune models; organizations can simply introduce new data into their existing LLMs as needed. They can even swap out LLMs as new models are introduced, avoiding LLM lock-in.
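The traceability benefit comes from returning the retrieved documents alongside the answer. The sketch below shows one way that might look; the helper name, document shape and metadata fields are all illustrative assumptions, not a specific product’s API.

```python
# Illustrative source-attribution wrapper: pair an LLM answer with the
# documents it was grounded in, so users can verify it and auditors can trace it.

def answer_with_sources(answer_text, retrieved_docs):
    """Package the answer together with identifying metadata for each source."""
    return {
        "answer": answer_text,
        "sources": [{"id": d["id"], "title": d["title"]} for d in retrieved_docs],
    }

# Hypothetical retrieved document with audit metadata
retrieved = [
    {"id": "aao-hns-bppv", "title": "Clinical Practice Guideline: BPPV", "text": "..."},
]

result = answer_with_sources("Consider the Epley maneuver for BPPV.", retrieved)
```

Because every response carries its source IDs, a clinician can check the underlying guideline directly, and the same records can feed an audit log.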

DISCOVER: Here are three areas where RAG implementation can be improved.

LLMs Can Help Fix Healthcare’s API Problem

Using LLMs and RAG for clinical decision support is an obvious use case for generative AI in healthcare, but it’s far from the only one. Most LLMs are transformers by architecture, built to translate one sequence of text into another. That makes them well suited to addressing data interoperability problems caused by poorly written or malformed application programming interfaces.

APIs have long been both a boon and a bane for the healthcare industry. Providers rely on them to process insurance claims, among other things, but many claims are rejected because the receiving API cannot parse the request.

Instead of submitting a request through an API and hoping it gets accepted, providers can submit data to an API backed by an LLM service. The LLM translates the data into the format expected by the receiver, minimizing the chances that the claim will be rejected.
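A translation shim of this kind might look like the sketch below. The LLM call itself is stubbed out with a deterministic field-mapping function, and every field name, code and payer interface here is a hypothetical stand-in, not a real claims schema.

```python
# Sketch of an LLM-backed shim in front of a claims API. llm_translate is a
# stub: a real system would prompt an LLM to map the provider's loosely
# structured input onto the strict schema the payer expects.

def llm_translate(free_form_claim):
    """Stand-in for the LLM translation step: normalize in-house field
    names (e.g. 'dx', 'cpt') to the receiver's expected schema."""
    return {
        "patient_id": free_form_claim.get("patient") or free_form_claim.get("patient_id"),
        "diagnosis_code": free_form_claim.get("dx") or free_form_claim.get("diagnosis_code"),
        "procedure_code": free_form_claim.get("cpt") or free_form_claim.get("procedure_code"),
    }

def submit_claim(raw_claim, api_submit):
    """Normalize the claim before handing it to the payer's API, and reject
    locally if required fields are still missing."""
    normalized = llm_translate(raw_claim)
    missing = [k for k, v in normalized.items() if not v]
    if missing:
        raise ValueError(f"Claim still missing required fields: {missing}")
    return api_submit(normalized)

# Example: a claim submitted with the provider's in-house field names
raw = {"patient": "P123", "dx": "H81.13", "cpt": "95992"}
accepted = submit_claim(raw, api_submit=lambda claim: claim["diagnosis_code"] == "H81.13")
```

Catching malformed claims before they reach the payer is what reduces rejections; the LLM’s job is only the translation in the middle.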

In this case, the LLM isn’t being asked to do anything overly complex, but the overall impact can be profound for both provider and patient. It could reduce the cost of pursuing claims denials, which one survey estimates at more than $10.5 billion annually, and increase patient satisfaction.

PREPARE: Demystify artificial intelligence adoption for your healthcare organization.

LLMs Can Alleviate Budget and Patient Concerns in 2025

As we enter 2025, generative AI in healthcare is finally moving past the peak of its hype cycle, and its practical use cases have become clearer. Now is a great time for organizations to take stock of how they intend to leverage LLMs for maximum effectiveness.

That assessment will likely impact IT budgets for next year. Building in-house LLMs is expensive and may not be the best solution as healthcare institutions strive to cut costs due to rising financial pressures. A combination of open source LLMs and RAG methodology is a more cost-effective option.

More importantly, the combination will allow clinicians to provide patients with more targeted and accurate care. Doctors can retrieve information and answer questions quickly, patients can go home with treatment plans, and the promise of generative AI at the point of care can be fulfilled.
