HEALTHTECH: Can you give examples of use cases in which AI helped providers to reduce avoidable patient harm?
FROWNFELTER: One example is a diabetic patient who’s being admitted repeatedly to the hospital. How does a health system, with all these processes in place, help this patient who keeps getting admitted? A hospital will have a good process for exploring a dozen possibilities at once. That’s reactive medicine, and it’s a sound process at times, but it’s not precise for that patient.
Artificial intelligence in this context helps identify the underlying drivers for that patient that might be less visible, and it brings them to the surface so they can be addressed, supporting the patient on their journey toward better health.
I’m referring to a specific case where we identified a diabetic patient as being at risk for depression when she was screened more carefully. At first, she wasn’t open to discussing it, but after the provider gently dug a little deeper, they uncovered not only depression that could be treated but also struggles with her activities of daily living and with taking her insulin. They were able to address those things and give her support. She lost 20 pounds, was treated for depression and stopped being admitted to the hospital. It wasn’t magic; it was because we helped identify what was unseen and under-recognized. That’s the role AI has to play today in the clinical space.
HEALTHTECH: How can healthcare organizations make sure the data they’re including is unbiased and helpful?
FROWNFELTER: We have to assume all data has some trend to it, right? You might even call a trend or a pattern a bias, so the way to wash out trends that aren’t appropriate or relevant is to have larger amounts of data. We call that representation. If I have a city with 2 million patients, and I have a model built on 100 patients, that’s not going to represent the city very well, is it? But if I have a model built on 500,000 patients randomly selected across the city, it will probably be a good representation of the rest of those 2 million patients.
Representation is a concept for overcoming the inherent errors of small sample sizes, or of samples that don’t translate to another population. Jvion uses representation as a foundational principle. We have more than 35 million patients in our data universe, and because of that sample size we cover 99 percent of the contiguous 48 states. You can’t represent the U.S. with 500,000 patients, but you can get a much better representation with 35 million.
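The sampling-error intuition behind representation can be sketched in a few lines of Python. This is a toy simulation; the city size, true prevalence and sample sizes are illustrative assumptions, not Jvion’s figures:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical city of 2 million patients; each has a binary risk factor
# whose true prevalence we try to estimate from a random sample.
POPULATION = 2_000_000
TRUE_PREVALENCE = 0.12
population = rng.random(POPULATION) < TRUE_PREVALENCE

for sample_size in (100, 500_000):
    sample = rng.choice(population, size=sample_size, replace=False)
    estimate = sample.mean()
    # The standard error shrinks with the square root of the sample size,
    # which is why the larger sample "represents" the city far better.
    std_err = np.sqrt(estimate * (1 - estimate) / sample_size)
    print(f"n={sample_size:>7}: estimated prevalence {estimate:.4f} "
          f"(+/- {1.96 * std_err:.4f} at 95% confidence)")
```

With these assumed numbers, the 100-patient estimate can miss the true prevalence by several percentage points, while the 500,000-patient estimate lands within a fraction of a point.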
The data sets being used create another bias risk: they have to be representative of the population as well. If I’m trying to understand a population and I don’t have any socioeconomic data, or have only a thin slice of it, I’m not representing the risk drivers for those patients, because we know socioeconomic factors drive between 70 and 80 percent of health outcomes.
If those factors aren’t considered, the real drivers will be under-represented, and the models will not only be flawed but will also carry inherent bias. That bias will skew the models away from those at greatest risk for health disparities, and that’s the biggest risk in terms of the bias it could introduce.
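That failure mode can be shown with a minimal synthetic-data sketch. The socioeconomic feature, the effect sizes and the event rates below are made-up assumptions chosen only for illustration: a model trained without a dominant socioeconomic driver systematically underestimates risk for the very patients that driver affects.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 50_000

# Synthetic patients: a clinical feature plus a socioeconomic factor
# (e.g., housing instability) that strongly drives the outcome.
clinical = rng.normal(size=n)
sdoh = rng.binomial(1, 0.25, size=n)           # 1 = high social risk
logit = -2.0 + 0.5 * clinical + 2.0 * sdoh     # the SDOH term dominates
outcome = rng.random(n) < 1 / (1 + np.exp(-logit))

# "Full" model sees both features; "thin" model lacks the SDOH data.
full = LogisticRegression().fit(np.column_stack([clinical, sdoh]), outcome)
thin = LogisticRegression().fit(clinical.reshape(-1, 1), outcome)

mask = sdoh == 1
print("true event rate, high-social-risk group:", outcome[mask].mean().round(3))
print("full model, mean predicted risk:",
      full.predict_proba(np.column_stack([clinical, sdoh]))[mask, 1].mean().round(3))
print("thin model, mean predicted risk:",
      thin.predict_proba(clinical.reshape(-1, 1))[mask, 1].mean().round(3))
```

Because the thin model never sees the socioeconomic factor, its predictions regress toward the population average, understating risk precisely for the group at greatest risk of disparities.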
HEALTHTECH: What are some of the challenges to AI adoption or implementation?
FROWNFELTER: The easier it is to see the proof that something works or that it brings value, the easier the business decision. If I can see in cold, hard numbers that I’m going to save 50 percent of a radiologist’s time with digital-imaging AI support, that’s easy. But the closer you get to clinical practice, the harder it is to prove. One of the challenges is skepticism among clinicians. I’ve heard them say that even if it worked at another organization, their patients or population are different. They want to see it demonstrated in a double-blind, randomized, controlled study. Skepticism is, because of the way we are trained, a form of scientific rigor, which is good. You don’t want physicians changing how they practice randomly or capriciously.
That skepticism builds a wall of stability, but also of inertia at times. To overcome it, you have to have early wins within the organization; a win in California doesn’t translate to a win in Iowa. You’ve got to show some success within the four walls of the organization, and then success begets more success.
Additionally, there are organizations that still struggle with sharing data, and that’s a requirement for AI to work: you have to be able to pull data, aggregate it and send it. So there are times when the technology isn’t itself a barrier, but the combination of technology and skill sets might be. Staff might be focused on other priorities, but data is a very important piece.
HEALTHTECH: Do you have any tips for AI use or implementation success?
FROWNFELTER: You must have leadership in place to drive culture change. One of the things I see is that clinicians have to think differently. If medicine were completely intuitive, we wouldn’t need any additional support; we wouldn’t need AI. But in fact, it isn’t intuitive. Sometimes we’re wrong in what we think about a patient, and as a clinician, I have to accept that something I’m getting doesn’t make sense. I may be getting a new insight that’s different from what I thought, and it might be right. I have to treat it like a lab test that helps me think differently about the patient. That’s a big deal. Without that willingness to think differently, clinicians will never act on those insights, and the patients won’t benefit.
Another key to success is on the technology side. The more we understand clinician workflows and insert the intelligence right into them, the easier it is for clinicians to use it. In the end, we want clinicians to be more efficient: not to have more work to do, but to have the right work to do. We do that through automation.
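One common pattern for putting an insight directly into the workflow is to write it back to the EHR as a standard FHIR resource, so it surfaces where clinicians already work rather than in a separate application. A minimal sketch, assuming a hypothetical FHIR R4 endpoint and patient ID; this illustrates the general approach, not Jvion’s product:

```python
import requests

# Push a model's risk insight into the clinical workflow as a FHIR R4
# RiskAssessment. The server URL and patient ID below are hypothetical;
# a real integration would also handle authentication and error cases.
FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical endpoint

risk_assessment = {
    "resourceType": "RiskAssessment",
    "status": "final",
    "subject": {"reference": "Patient/12345"},  # hypothetical patient
    "prediction": [{
        "outcome": {"text": "30-day readmission"},
        "probabilityDecimal": 0.34,
    }],
    "note": [{"text": "Consider depression screening and ADL support."}],
}

resp = requests.post(
    f"{FHIR_BASE}/RiskAssessment",
    json=risk_assessment,
    headers={"Content-Type": "application/fhir+json"},
)
resp.raise_for_status()
print("Created RiskAssessment:", resp.json().get("id"))
```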