HEALTHTECH: How does your organization currently use generative AI?
LONGHURST: We’re using AI for patient communication. Researchers from UC San Diego published a study in JAMA showing that licensed physicians and nurse practitioners, on average, rated AI answers to patient questions as higher in quality and more empathetic than human answers. I reviewed the answers myself, and I can tell you that it was super obvious which answers came from the chatbot and which came from doctors. The chatbot would write three paragraphs, and the doctors would write three sentences.
JOHNSON: We’re looking at ways to replace the time that clinicians spend documenting visits with technologies that can generate content from audio. Working with Epic, we’re also looking at ways that we can automatically generate responses to patient portal messages. That will all be available to most of Epic’s clients in the next year or two.
HEALTHTECH: What role do humans play in AI-generated communication?
GOSWAMI: We all know the time it takes to respond to every email in our daily lives, and that was a positive driver for us to adopt AI technology. Generative AI allows our clinicians to write a more comprehensive response that builds in a bit of empathy. The patient doesn’t just get a test result; they also get the conversation that goes with the result that the provider has updated based on a draft generated by AI.
LONGHURST: Our doctors are drowning in inbox overload. In some cases, they’re getting a message every minute. Generative AI is a mechanism to help solve that problem. After AI generates a draft answer to a patient question in the electronic health record, the physician decides the next step. There are two buttons: one that says, “Start with draft,” and the other says, “Start blank reply.” We’ve always made sure there is a human in the loop.
JOHNSON: What we can expect is that although patients may find these AI-generated messages more readable or reassuring, it’s also entirely possible that something about them will feel condescending, or culturally different from what the message writer was trying to convey. If it turns out that the messages harm someone, we need to have a process to address that.
BERGER: Instead of talking about AI-generated content, you might want to use the term AI-informed content. If AI generates 95 percent of a document but a human polishes the last 5 percent, is that theoretically a human-created document or an AI-created one? More important, does it get the patient the answers they need?