HEALTHTECH: What are some of the AI implementation challenges healthcare organizations face?
POON: What I’ve learned over the years is that when it comes to technology, and AI is no exception, many things that could work and should work aren’t going to pan out for a variety of reasons. Part of it may be the technology, the organization’s readiness or whether the end users are appropriately prepared.
What we have found at Duke is that it is important to embrace a “fail fast” mentality. I joke that with AI, you need to kiss a lot of frogs to find your prince or princess. I tell my team that we have to be better at deciding which frog to pick up. Then, when we pick up a frog to kiss, we need to be efficient because not every frog is going to be royalty material. However, when you do find a promising solution, you really need to prepare the organization to embrace the royalty that is coming its way.
Ambient technology has been a good example. We’ve spent the past few years exploring the space, identifying promising vendors and then, through various iterations, incorporating them into our workflow.
Over the past year, we’ve even done a head-to-head trial comparing two leading vendors of ambient technology. We’ve learned a lot along the way and have found amazing results. Not only did the head-to-head trial help us identify which vendor might be better suited for us, but it also helped create buzz in our organization, so that when we were ready to pick one and deploy it widely, we already had a workforce ready to embrace it. We rolled out that technology to our 5,000 providers here at Duke in early January, and within two months, we already had more than 1,200 providers actively using it daily.
I think back over my 25-year career in informatics, and I don’t recall any technology that’s been embraced by our clinicians this quickly. I will say that we also did a lot of prep work. In addition to running that head-to-head trial, we were mindful when rolling out the technology to leverage our existing communication structures, so we had superusers we could lean on. Many of them were early adopters of this technology, so they were able to answer questions for their colleagues.
We were responsive to folks who wanted to start using this technology. We didn’t ask them to wait a long time. And for folks who had questions, we gave them the support and educational materials that they needed.
This has been very successful so far. I can say that even at this early stage, we are seeing favorable results. Last year, when we conducted the head-to-head trial, we got some amazing comments from clinicians almost from day one. We were hearing comments that it was a game changer. It was giving them a couple hours back a day. They didn’t have to spend the night sitting at the kitchen table finishing notes. That was great feedback.
Since the mass rollout, we are seeing early results that show clinicians are finishing their work earlier and closing notes faster. It’s been a rewarding experience, and that’s an important anchor point for us and for the rest of the industry to pay attention to. Not everything is going to work as well as ambient technology, but when you do find something, it’s important to prepare the organization to ensure that you can leverage the success quickly and fully.
HEALTHTECH: What foundational technologies, infrastructure or policies do healthcare organizations need to have in place to support AI initiatives?
POON: It’s important to think about whether you have the right decision-making structures in place. There are plenty of solutions, or frogs, hopping around. So, you need to make sure that you have the right folks in place who can find solutions that can help your organization meet its needs, try them out, and then hold the organization accountable for ensuring the technology is having its intended impact. They also need to be able to let go of those solutions that aren’t quite panning out. That’s something that a lot of organizations can do quickly if they are able to pull together the leaders and focus their limited energies and resources on finding and testing out the right solutions. That’s the one thing I would advise my colleagues to do.
Other foundational elements include having a workforce ready to embrace that technology. I think about our early journey with AI. When it first came out, yes, there was a lot of excitement, but we also made it a point to democratize that technology.
We were early adopters of Microsoft’s Bing Copilot Search, which was free to our organization. We spent some time making sure that our colleagues of all stripes got an early start using the technology — with appropriate guardrails — so that folks could get comfortable with the tool. That was a small investment we made early on that is beginning to pay dividends.
We did something similar with Microsoft Office Copilot, for which we bought 300 licenses. It was not free. We quantified the value by collecting data in a pilot to make sure there was some strong signal that the investment would yield benefits, and then opened it up to other leaders who wanted to purchase the tool for staff in their own departments. That cycle of accountability is something we are very proud of having built at Duke.
HEALTHTECH: Speaking of guardrails, what security controls need to be in place before jumping into AI use cases?
POON: When you’re dealing with healthcare, patient privacy is of utmost concern. We’ve done a lot of work to ensure that every time we implement a new technology, especially if it involves protected patient data, we have a multidisciplinary group thinking about appropriate use and how to pick the right partners.
When it came to Microsoft’s Bing Copilot Search, a technical group met to consider whether the technology was secure enough for use with patient data. A different clinical group came together to consider whether clinicians should be using it to perform their clinical duties, which clinical groups would be allowed to use it and what guidelines needed to be in place.
We used our governance process to draft a set of guidelines for generative AI in clinical use cases. We made some common-sense recommendations informing clinicians that if they are using generative AI for clinical care, they should make sure it’s one of the vetted tools that our security experts have approved for their use. Then, when they use it, they need to assume full responsibility for the output, and any clinician who wants to use it needs to be at the appropriate clinical training level to review the output from the AI. So, in some ways, these are common-sense guidelines that have helped us advance the use of AI across thousands of clinicians quickly.