Jul 16 2024
Software

Considerations for the Responsible Use of Generative AI in Healthcare

Panelists discussed use cases, risks, guardrails and data transparency related to generative AI at the recent AWS Summit.

Generative artificial intelligence has the potential to be a helpful tool for healthcare organizations, especially when it comes to increasing efficiency in clinical and operational settings. However, many generative AI tools out there lack transparency regarding the data sets they’re trained on, which could lead to biases.

As the industry moves toward adoption and expanded generative AI use cases, organizations must be prepared to implement governance and processes created with all stakeholders at the table.

During June’s AWS Summit in Washington, D.C., AI and population health experts discussed the benefits of generative AI tools as well as the guardrails needed to ensure these models don’t harm patients or communities.

Ways Healthcare Can Benefit from Generative AI

There are several ways healthcare organizations can benefit from generative AI tools. Ambient listening can generate clinical notes based on conversations between physicians and their patients. Monica Coley, senior business development manager at Amazon Web Services, said that generative AI-powered chatbots can help handle the volume of calls coming into contact centers. These chatbots can help some patients with their concerns when appropriate, with no human interaction required.
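
As a rough sketch of the ambient-documentation pattern, the Python below assumes a visit has already been transcribed and asks a foundation model, via Amazon Bedrock's Converse API, to draft a note for clinician review. The model ID, prompt and transcript are placeholders rather than any panelist's specific implementation.

```python
import boto3

# Assumes AWS credentials and Amazon Bedrock model access are already configured.
# The model ID is a placeholder for whichever model the organization has approved.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

transcript = "Clinician: What brings you in today? Patient: I've had a cough for two weeks..."

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{
        "role": "user",
        "content": [{
            "text": "Draft a SOAP-format clinical note from this visit transcript. "
                    "Flag anything uncertain for clinician review.\n\n" + transcript
        }],
    }],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

draft_note = response["output"]["message"]["content"][0]["text"]
print(draft_note)  # A draft only; a clinician reviews and signs off before it enters the record.
```

The same pattern, with a different prompt and a conversational loop, underpins the contact-center chatbots Coley described, with escalation to a human whenever the model can't resolve the caller's concern.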

Additionally, some agencies and departments in the federal government are still using data on paper locked in filing cabinets, according to Dawn Heisey-Grove, senior federal public health account manager at AWS. She said that using AI services can speed up the process of digitizing those files while a human verifies accuracy.
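
A minimal sketch of that digitize-then-verify workflow, assuming Amazon Textract as the OCR service and a single scanned page as input; the file name and confidence threshold are illustrative, and multi-page batches would use Textract's asynchronous APIs instead.

```python
import boto3

# Assumes AWS credentials are configured and the scanned page fits Textract's
# synchronous API limits; larger documents would be processed asynchronously.
textract = boto3.client("textract", region_name="us-east-1")

with open("scanned_record.png", "rb") as f:
    result = textract.detect_document_text(Document={"Bytes": f.read()})

CONFIDENCE_THRESHOLD = 90.0  # illustrative cutoff; tune per document type

for block in result["Blocks"]:
    if block["BlockType"] == "LINE":
        flag = "" if block["Confidence"] >= CONFIDENCE_THRESHOLD else "  <-- needs human review"
        print(f'{block["Confidence"]:5.1f}  {block["Text"]}{flag}')
```

Only the lines that fall below the threshold go to a person, which keeps a human in the loop without re-keying every document.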

AI tools can also help healthcare organizations by reformatting data or research to comply with specific standards. Heisey-Grove pointed out that an organization may store data in one format but need to share research with another entity in a different format for grant approval. She said that AI paired with human validation can accelerate the process of reformatting that data.
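
One way to picture that pairing of AI reformatting with human validation: a conversion step (here a hypothetical reformat_with_llm helper) maps an internal record into the target structure, automated checks reject anything malformed, and whatever passes is queued for a person to confirm. The schema and field names below are invented for illustration.

```python
from jsonschema import ValidationError, validate

# Hypothetical target structure required by the receiving entity (e.g., a grant portal).
GRANT_SCHEMA = {
    "type": "object",
    "required": ["study_id", "principal_investigator", "enrollment_count"],
    "properties": {
        "study_id": {"type": "string"},
        "principal_investigator": {"type": "string"},
        "enrollment_count": {"type": "integer", "minimum": 0},
    },
}

def reformat_with_llm(internal_record: dict) -> dict:
    """Placeholder for the AI-assisted conversion step (an LLM prompt or mapping rules)."""
    return {
        "study_id": internal_record["id"],
        "principal_investigator": internal_record["pi_name"],
        "enrollment_count": int(internal_record["n_enrolled"]),
    }

record = {"id": "HTN-2024-07", "pi_name": "Dr. Rivera", "n_enrolled": "312"}
candidate = reformat_with_llm(record)

try:
    validate(instance=candidate, schema=GRANT_SCHEMA)
    print("Automated checks passed; route to a human reviewer for sign-off:", candidate)
except ValidationError as err:
    print("Rejected before human review:", err.message)
```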

Applying Guardrails to Generative AI Use Cases in Healthcare

Dr. Leith States, chief medical officer at the U.S. Department of Health and Human Services, described generative AI as a tool set that could potentially be applied to a variety of public health system challenges. However, he said he finds the lack of governance in the U.S. concerning. While there has been a push to move forward with utilization, there hasn't been an intentional approach to governing the technology, according to States.

In October 2023, the White House released its executive order on the use of AI. States said this document was highly prescriptive and lacked agility. However, HHS has adopted tenets from the order and created an AI task force and playbook.

“It’s starting to look like there’s coordination around a shared understanding of what it is we’re driving toward, and that’s been very refreshing,” he added.

Patricia MacTaggart, teaching instructor and program director at George Washington University, compared healthcare’s adoption of AI with that of electronic health records.

“We didn’t have a common understanding. We were in the same meeting talking about two different things,” she explained.

MacTaggart recommended creating a framework for discussions about AI to help the industry navigate conversations around implementation, whether the use is intended for patient engagement, clinical workflows or administrative efficiencies.

“All of those use cases need some guardrails, so we know the minimum bar but also the possibilities that we’re seeking and the realities of today,” she added. “This is innovation. There’s going to be some evolution. Some things are going to go right, and some aren’t. We need to understand those risks and apply each of those guardrails to each use case that we’re doing.”

The Importance of Data Quality and Transparency

“Data is the underlying foundation of successful generative AI,” said Coley, who explained that the more data included in the model, the more representative it is likely to be of the population at large. “It will provide you better models of prediction to train off of to make AI more effective and not a hindrance.”

MacTaggart pointed out that while healthcare organizations need good data for successful AI, they also need a solid infrastructure to support that data.

However, when it comes to generative AI tools, healthcare professionals may not have much insight into the data set a model was trained on.

Heisey-Grove said the industry needs to start asking questions and demanding more transparency. She also suggested that organizations build their own knowledge bases and begin providing their own data to large language models so that the models can give the right kinds of answers. While that won't solve larger cultural issues or biases, it does help those engaging with the AI know what data is involved and what data is missing. Heisey-Grove also recommended having a human verify the results.
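
A minimal sketch of that bring-your-own-knowledge-base idea, using simple TF-IDF retrieval to ground a prompt in an organization's own documents before it ever reaches a large language model. The documents and question are placeholders, and the final model call and human review are left as comments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in knowledge base: in practice, the organization's own policies, protocols
# or FAQs, so users know exactly what data the model is drawing on.
documents = [
    "Patients may request prescription refills through the portal up to 7 days early.",
    "Telehealth visits are available Monday through Friday, 8 a.m. to 6 p.m.",
    "Billing questions are handled by the revenue cycle team at extension 4422.",
]

question = "How do I refill a prescription?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
question_vector = vectorizer.transform([question])

# Retrieve the most relevant passage to include as context.
scores = cosine_similarity(question_vector, doc_vectors)[0]
best_passage = documents[scores.argmax()]

prompt = (
    "Answer using only the context below. If the context does not contain the answer, say so.\n"
    f"Context: {best_passage}\nQuestion: {question}"
)
print(prompt)
# The grounded prompt would then go to the organization's chosen LLM,
# and a human would review the response before it reaches a patient.
```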

“We can’t trust it on its own just yet,” she said.

Coley emphasized having the right people at the table to prevent bias and ensure representative data. If healthcare organizations are building models, she said, data scientists, health equity experts and a diverse group of people should be involved.

“We need people at the table who are always evaluating data for bias and bringing another lens,” she said.

States agreed, adding that it's important to develop processes that reflect reality rather than theory, which could be biased. Having the wrong people and the wrong data involved in building a model can lead to unreliable results.
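
One small, automatable piece of the ongoing evaluation Coley and States describe is checking whether training data even resembles the population a model is meant to serve. The sketch below compares group shares in a made-up training set against illustrative reference figures for the service area; a passing check does not rule out subtler bias, which is why the human lens still matters.

```python
import pandas as pd

# Made-up training data; in practice, the cohort used to build the model.
training_data = pd.DataFrame({
    "patient_id": range(8),
    "age_group": ["18-44", "18-44", "45-64", "45-64", "45-64", "65+", "65+", "18-44"],
})

# Illustrative reference shares for the community the model will serve.
reference_shares = {"18-44": 0.45, "45-64": 0.33, "65+": 0.22}

observed_shares = training_data["age_group"].value_counts(normalize=True)

for group, expected in reference_shares.items():
    observed = observed_shares.get(group, 0.0)
    gap = observed - expected
    flag = "  <-- review with the health equity team" if abs(gap) > 0.10 else ""
    print(f"{group}: observed {observed:.0%}, reference {expected:.0%}{flag}")
```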

As these tools get deployed across healthcare, MacTaggart said, it’s important to keep in mind that organizations should provide the right care at the right time with the right providers, modalities and appropriate use of algorithms.

Another reason healthcare organizations should be cautious about generative AI implementation is that not all healthcare professionals have the knowledge they need to engage with AI in a meaningful and responsible way. The industry needs to be realistic about how quickly it can implement these tools, MacTaggart said.

As public and population health professionals approach generative AI use, Heisey-Grove said, they should think of clinical data as one of many sources of data available within a community. It’s important to consider multiple types of data sources to create a more holistic picture of public and population health.

MacTaggart recommended that organizations start with administrative use cases that present less risk to patients. Having wins in that area can build trust as these organizations move toward use cases with more risk.
