The past year saw increased interest in the use of generative artificial intelligence in healthcare. Although generative AI has been hailed as a technology likely to spur the “next productivity frontier,” there have also been reports of AI-produced “hallucinations” and diagnosis errors.
Most healthcare organizations aren’t yet confident about how to implement generative AI safely and effectively. A 2023 Bain study found that only 6 percent of health systems have a strategy to implement generative AI.
On the patient side, more than 60 percent of Americans said they would be uncomfortable with their provider relying on AI to direct their care, according to a 2023 Pew Research Center survey.
Still, several large health systems are successfully piloting generative AI programs. One example is Microsoft and Epic’s partnership with UC San Diego Health, UW Health and Stanford Health Care, which uses generative AI to help answer patient messages.
How are industry leaders facing the challenges of generative AI applications in a healthcare setting? HealthTech spoke to Dr. Christopher Longhurst, chief medical officer and chief digital officer at UC San Diego Health; Cherodeep Goswami, chief information and digital officer at UW Health; Dr. Kevin Johnson, vice president for applied informatics at Penn Medicine and a member of the Health Care Artificial Intelligence Code of Conduct steering committee; and Eric Berger, partner in the healthcare and life sciences practice at Bain.
HEALTHTECH: How does your organization currently use generative AI?
LONGHURST: We’re using AI for patient communication. Researchers from UC San Diego published a study in JAMA Internal Medicine showing that licensed physicians and nurse practitioners, on average, rated AI answers to patient questions as higher in quality and empathy than human answers. I reviewed the answers myself, and I can tell you that it was obvious which were the chatbot answers and which were from doctors. The chatbot would write three paragraphs where the doctors would write three sentences.
JOHNSON: We’re looking at ways to reduce the time clinicians spend documenting visits by using technologies that can generate content from audio. Working with Epic, we’re also looking at ways to automatically generate responses to patient portal messages. That will all be available to most of Epic’s clients in the next year or two.
HEALTHTECH: What role do humans play in AI-generated communication?
GOSWAMI: We all know how much time it takes to respond to every email in our daily lives, and that was a big driver for us to adopt AI technology. Generative AI allows our clinicians to write a more comprehensive response that builds in a bit of empathy. The patient doesn’t just get a test result; they also get the conversation that goes with it, which the provider has refined from a draft generated by AI.
LONGHURST: Our doctors are drowning in inbox overload. In some cases, they’re getting a message every minute. Generative AI is a mechanism to help solve that problem. After AI generates a draft answer to a patient question in the electronic health record, the physician decides the next step. There are two buttons: one that says, “Start with draft,” and the other says, “Start blank reply.” We’ve always made sure there is a human in the loop.
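To make the two-button workflow Longhurst describes concrete, here is a minimal Python sketch of a human-in-the-loop reply flow. It is an illustration under assumptions, not Epic’s actual interface: the function names and the clinician_edit callback are hypothetical. The point is simply that a human authors the final message whether or not the AI draft is used.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PatientMessage:
    patient_id: str
    text: str

def generate_draft(message: PatientMessage) -> str:
    # Stand-in for the LLM call that drafts a reply inside the
    # EHR vendor's secure environment; returns a stub here.
    return "Thank you for reaching out about your results. ..."

def compose_reply(message: PatientMessage, use_draft: bool,
                  clinician_edit: Callable[[str], str]) -> str:
    # "Start with draft" vs. "Start blank reply": either way,
    # a clinician reviews, edits and sends the final text.
    starting_text = generate_draft(message) if use_draft else ""
    final_reply = clinician_edit(starting_text)
    if not final_reply.strip():
        raise ValueError("A clinician must write or approve the reply")
    return final_reply
```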
JOHNSON: What we can expect is that although patients may find these AI-generated messages more readable or reassuring, it’s also entirely possible there will be something about them that patients feel is condescending or culturally different from what the message writer might be trying to convey. If it turns out that the messages harm someone, we need to have a process to address that.
BERGER: Instead of talking about AI-generated content, you might want to use the term AI-informed content. If AI generates 95 percent of a document but a human polishes the last 5 percent, is that theoretically a human-created document or an AI-created one? More important, does it get the patient the answers they need?
HEALTHTECH: What are some key security and privacy considerations?
JOHNSON: We should not be putting patient-specific or protected health information into any of these generative models yet. Let’s say I’ve got a patient who’s coming in with a complex medical condition, and I've discovered that ChatGPT can accept all of this patient data. And then I type, “Let’s chat about this patient.” I will have now given that entire private record completely to GPT. We shouldn’t be doing that.
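A common engineering safeguard behind Johnson’s warning is a gate that refuses to forward a prompt to an external model if it appears to contain identifiers. The sketch below is hypothetical and deliberately crude: real de-identification requires a vetted tool and formal governance, not regular expressions, and the safest policy remains the one Johnson states, which is to keep protected health information out of these models entirely.

```python
import re

# Crude, illustrative patterns for obvious identifiers. These are
# assumptions for the sketch; they do not catch names, addresses or
# free-text clinical details, and are no substitute for real
# de-identification tooling and policy.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like number
    re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),  # medical record number
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),      # dates
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain PHI."""
    return not any(p.search(prompt) for p in PHI_PATTERNS)

def ask_external_llm(prompt: str) -> str:
    """Gate every outbound prompt; block anything that looks like PHI."""
    if not safe_to_send(prompt):
        raise ValueError("Prompt appears to contain PHI; request blocked.")
    # The actual call to a public model would go here.
    return "(model response)"
```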
GOSWAMI: Right now, Microsoft and Epic create a very safe and secure cloud environment that we, as a client, can leverage. But the true critical success factor is for the organization to assess its own culture of acceptance, responsibility and accountability. This is a very powerful tool if used appropriately. Used inappropriately, it can damage the reputation of the organization and of the providers and compromise the data of the patients they are working with.
LONGHURST: UC San Diego was recently awarded a $10 million grant to improve healthcare cybersecurity. We’ve always treated our patient data like gold, and those processes don’t change when it comes to AI. With our AI efforts, the data informing the algorithm never leaves our walls. When we work with vendors, we bring them into our secure environment.
BERGER: We’re seeing healthcare organizations extend their policy and compliance documents to cover this kind of technology. Some have created an AI council or chief AI policymaker position.
HEALTHTECH: What infrastructure needs should healthcare institutions consider before implementing generative AI solutions?
GOSWAMI: Even if organizations don’t have their EHRs up to date, getting to a current, compatible version of Epic generally doesn’t require a lot of effort, unless you are really outdated and haven’t taken any patches or upgrades in the past few years. I would be surprised if fewer than 90 percent of the clientele out there met the minimum standards of Epic or Microsoft.
BERGER: In general, people are building AI applications on their existing technology infrastructure. The cloud providers are all aligning themselves with different large language models: Microsoft with OpenAI, and Amazon’s AWS with Anthropic. Although you can still use LLMs with multiple clouds, the interplay between the cloud player and the LLM needs to be considered.
75 percent: The percentage of health system executives who believe that generative artificial intelligence could reshape the industry; only 6 percent have a strategy to implement it
Source: bain.com, “Beyond Hype: Getting the Most Out of Generative AI in Healthcare Today,” Aug. 7, 2023
HEALTHTECH: How can health systems successfully implement generative AI?
GOSWAMI: It’s a brand-new technology and we need to understand where it may fall short. For example, if a cancer is detected, the patient doesn’t need a six-paragraph, auto-generated email on all the options of financial assistance that are available. No. The provider needs to pick up the phone and have a conversation.
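Goswami’s example implies a simple triage rule: results in sensitive categories should route to a phone call, never to an auto-generated message. A minimal sketch, assuming a hypothetical set of sensitivity categories, might look like this:

```python
from enum import Enum, auto

class Channel(Enum):
    AI_DRAFT_FOR_REVIEW = auto()  # clinician edits an AI draft
    PHONE_CALL = auto()           # clinician calls the patient

# Hypothetical sensitivity tiers; a real system would derive these
# from result codes and clinical policy, not a hardcoded set.
SENSITIVE_CATEGORIES = {"new_malignancy", "genetic_finding", "hiv_status"}

def route_result(category: str) -> Channel:
    """Never auto-draft a message for a life-altering result."""
    if category in SENSITIVE_CATEGORIES:
        return Channel.PHONE_CALL
    return Channel.AI_DRAFT_FOR_REVIEW

assert route_result("new_malignancy") is Channel.PHONE_CALL
assert route_result("routine_lipid_panel") is Channel.AI_DRAFT_FOR_REVIEW
```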
JOHNSON: Most people in IT — and health IT — know about the triangle of people, process and technology. But most people don’t know that the IT infrastructure we need starts with people. People have to be willing and trained to use AI. And using AI is not completely free. Even though it saves people time, there is a cost involved with every single message generated by AI.
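Johnson’s per-message cost point can be made concrete with back-of-the-envelope arithmetic. The token prices and volumes below are assumptions for illustration; actual rates vary widely by model and contract.

```python
# Hypothetical per-token prices (USD); actual rates vary by model
# and contract and change frequently.
PRICE_PER_1K_INPUT = 0.01
PRICE_PER_1K_OUTPUT = 0.03

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one AI-drafted message."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Example: a 1,500-token prompt (message plus chart context) and a
# 300-token draft reply, at 50 drafts a day for 250 working days.
per_message = message_cost(1500, 300)   # about $0.024
per_year = per_message * 50 * 250       # about $300 per clinician
print(f"${per_message:.3f} per message, ~${per_year:,.0f} per clinician per year")
```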
LONGHURST: These are not technologies that should be turned on for all patients immediately, without testing. You need partners who are committed to careful evaluation and to ensuring there aren’t unintended consequences. Share results in publications and at vendor conferences so that those lessons learned become standard functionality.