Applying Guardrails to Generative AI Use Cases in Healthcare
Dr. Leith States, chief medical officer at the U.S. Department of Health and Human Services, described generative AI as a tool set that could potentially be applied to a variety of public health system challenges. However, he said he finds the lack of governance in the U.S. concerning. While there has been a movement to press forward with utilization, there hasn’t been an intentional approach to governing the technology, according to States.
In October 2023, the White House released its executive order on the use of AI. States said this document was highly prescriptive and lacked agility. However, HHS has adopted tenets from the order and created an AI task force and playbook.
“It’s starting to look like there’s coordination around a shared understanding of what it is we’re driving toward, and that’s been very refreshing,” he added.
Patricia MacTaggart, teaching instructor and program director at George Washington University, compared healthcare’s adoption of AI with that of electronic health records.
“We didn’t have a common understanding. We were in the same meeting talking about two different things,” she explained.
MacTaggart recommended creating a framework for discussions about AI to help the industry navigate conversations around implementation, whether the use is intended for patient engagement, clinical workflows or administrative efficiencies.
“All of those use cases need some guardrails, so we know the minimum bar but also the possibilities that we’re seeking and the realities of today,” she added. “This is innovation. There’s going to be some evolution. Some things are going to go right, and some aren’t. We need to understand those risks and apply each of those guardrails to each use case that we’re doing.”
The Importance of Data Quality and Transparency
“Data is the underlying foundation of successful generative AI,” said Coley, who explained that the more data included in the model, the more representative it is likely to be of the population at large. “It will provide you better models of prediction to train off of to make AI more effective and not a hindrance.”
MacTaggart pointed out that while healthcare organizations need good data for successful AI, they also need a solid infrastructure to support that data.
However, when it comes to generative AI tools, healthcare professionals may not have much insight into the data set a model was trained on.
Heisey-Grove said the industry needs to start asking questions and demanding more transparency. She also suggested that organizations build their own knowledge bases and begin feeding their own data to large language models so that the models can provide the right kinds of answers. While that won’t solve larger cultural issues or biases, it does help those engaging with the AI to know what data is involved and what data is missing. Heisey-Grove also recommended having a human verify the results.
“We can’t trust it on its own just yet,” she said.