Feb 10 2022
Data Analytics

Q&A: AI Helps Healthcare Organizations Reduce Avoidable Patient Harm

Jvion Chief Medical Officer Dr. John Frownfelter explains how artificial intelligence improves patient care and offers tips for implementation success.

As the healthcare industry shifts to more preventive, value-based care, AI provides guidance for early interventions that can prevent adverse outcomes, such as readmission or death.

A study published by the Mayo Clinic found that an AI-powered decision support tool helped reduce readmissions at a Wisconsin hospital by 25 percent. The tool uses AI to mine data on social determinants of health and clinical risk factors to predict which patients are at risk, then recommends evidence-based interventions.
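To make the general pattern concrete, here is a minimal hypothetical sketch of a tool of this shape. The features, weights, threshold and intervention table are all invented for illustration; this is not Jvion’s model, only the rough structure of a system that scores risk from clinical and social-determinant inputs and maps the drivers of that risk to suggested interventions.

```python
# Hypothetical sketch: score readmission risk from clinical and
# social-determinant features, then map high-risk drivers to interventions.
from dataclasses import dataclass

@dataclass
class Patient:
    hba1c: float          # clinical factor: blood-sugar control
    prior_admits: int     # clinical factor: admissions in the past year
    lives_alone: bool     # social determinant
    food_insecure: bool   # social determinant

def readmission_risk(p: Patient) -> float:
    """Toy linear risk score in [0, 1]; a real system learns these weights."""
    score = 0.1 * max(p.hba1c - 7.0, 0) + 0.15 * p.prior_admits
    score += 0.2 * p.lives_alone + 0.25 * p.food_insecure
    return min(score, 1.0)

INTERVENTIONS = {  # illustrative evidence-based actions, keyed by risk driver
    "food_insecure": "refer to a nutrition support program",
    "lives_alone": "schedule home-health or social-work follow-up",
}

def recommend(p: Patient, threshold: float = 0.4) -> list[str]:
    """Return suggested interventions for the drivers present, if high risk."""
    if readmission_risk(p) < threshold:
        return []
    return [action for driver, action in INTERVENTIONS.items() if getattr(p, driver)]

print(recommend(Patient(hba1c=9.2, prior_admits=3, lives_alone=True, food_insecure=True)))
```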

The AI in the study, developed by Jvion, has been demonstrated to reduce avoidable patient harm events, ranging from sepsis to pressure injuries, by 20 to 30 percent on average.

Dr. John Frownfelter, Jvion’s chief medical officer, spoke with HealthTech about how AI can help providers prevent avoidable patient harm and where AI is delivering value today compared with expectations for 2030.


HEALTHTECH: Can you give examples of use cases in which AI helped providers to reduce avoidable patient harm?

FROWNFELTER: One example is a diabetic who’s being admitted repeatedly to the hospital. How does a health system, with all these processes in place, help this patient? A hospital will have a good process in place to explore a dozen possibilities at once. That’s reactive medicine, and it’s a good process at times, but it’s not precise for that patient.

Artificial intelligence in this context will help to identify the underlying drivers for that patient that might be less visible and bring them to the surface so they can be addressed to help the patient on their journey toward better health.

I’m referring to a specific case where we identified a diabetic patient as being at risk for depression when screened more carefully. At first, she wasn’t open to discussing it, but after the provider gently dug a little deeper, they uncovered not only depression that could be treated but also struggles with her daily living activities and with taking her insulin. They were able to address those things and give her support. She lost 20 pounds, got treated for depression and stopped being admitted to the hospital. It wasn’t magic; it was because we helped to identify what was unseen and under-recognized. That’s the role AI has to play today in the clinical space.

HEALTHTECH: How can healthcare organizations make sure the data they’re including is unbiased and helpful?

FROWNFELTER: We have to assume all data has some trend to it, right? You might call a trend or a pattern a bias, in fact, so the solution to wash out any trends that aren’t appropriate or relevant is to have larger amounts of data. We call that representation. If I have a city with 2 million patients in it, and I have a model built on 100 patients, that’s not going to represent that city very well, is it? But if I have a model built on 500,000 patients randomly selected around the city, it will probably be a good representation of the rest of those 2 million patients.
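A quick simulation with made-up numbers illustrates the point: repeated 100-patient samples of a 2 million-patient “city” produce prevalence estimates that swing widely, while 500,000-patient random samples barely move from the true rate.

```python
# Illustrative only: how sample size affects representation.
import random

random.seed(42)
CITY_SIZE, TRUE_RATE = 2_000_000, 0.12   # invented prevalence of some risk factor
city = [random.random() < TRUE_RATE for _ in range(CITY_SIZE)]

for n in (100, 500_000):
    # Draw five independent random samples and estimate the rate from each.
    estimates = [sum(random.sample(city, n)) / n for _ in range(5)]
    print(f"n={n:>7}: estimates = {[round(e, 3) for e in estimates]}")
# n=100 estimates scatter well above and below 0.12; n=500,000 stays ~0.120
```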

Representation is a concept for overcoming the errors inherent in samples that are too small or that don’t translate to another population. Jvion uses representation as a foundational principle. We have more than 35 million patients in our data universe, and because of that sample size, we cover 99 percent of the contiguous 48 states. You can’t represent the U.S. with 500,000 patients, but you can get a much better representation with 35 million.


The data sets being used create another bias risk. The data sets have to be representative of the population as well. So, if I’m trying to understand a population and I don’t have any socioeconomic data or have only a thin slice of that data, I’m not representing the risk drivers for those patients because we know socioeconomic factors drive between 70 and 80 percent of health outcomes.

If those factors aren’t considered, the real drivers will be underrepresented, and the models will not only be flawed but will also carry inherent bias. That bias will be skewed away from those at greatest risk for health disparities, which makes missing socioeconomic data the biggest bias risk of all.
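Here is a toy simulation of that failure mode, with invented prevalence numbers: when the socioeconomic driver is absent from the data, a model can only predict something close to the population-average risk, understating risk precisely for the group facing the greatest disparities.

```python
# Illustrative only: dropping a socioeconomic driver flattens predictions
# toward the population average and hides the highest-risk group.
import random

random.seed(0)
patients = [{"sdoh_burden": random.random() < 0.3} for _ in range(100_000)]
for p in patients:
    base_rate = 0.25 if p["sdoh_burden"] else 0.05   # the SDOH factor dominates
    p["bad_outcome"] = random.random() < base_rate

def outcome_rate(rows):
    return sum(r["bad_outcome"] for r in rows) / len(rows)

blind_prediction = outcome_rate(patients)            # best a SDOH-blind model can do
high_need = [p for p in patients if p["sdoh_burden"]]
print(f"SDOH-blind predicted risk (everyone):    {blind_prediction:.3f}")
print(f"Actual risk, high-SDOH-burden group:     {outcome_rate(high_need):.3f}")
# ~0.11 vs ~0.25: the blind model skews away from those at greatest risk
```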

HEALTHTECH: What are some of the challenges to AI adoption or implementation?

FROWNFELTER: The easier it is to see the proof that something works or that it brings value, the easier the business decision. If I can see in cold, hard numbers that I’m going to save 50 percent of a radiologist’s time with digital-imaging AI support, that’s easy. But the closer you get to clinical practice, the harder it is to prove. One of the challenges is skepticism among clinicians. I’ve heard them say that even if it worked at another organization, their patients or population are different. They want to see it demonstrated in a double-blind, randomized, controlled study. Skepticism is, because of the way we are trained, a form of scientific rigor, which is good. You don’t want physicians changing how they practice randomly or capriciously.

That skepticism builds a wall of stability, but also inertia at times. To overcome it, you have to have early wins within the organization; a win in California doesn’t translate to a win in Iowa. You’ve got to show some success within the four walls of the organization, and then success begets more success.

Additionally, there are organizations that still struggle with sharing data, and that’s a requirement for AI to work. You have to be able to pull data, aggregate it and send it. So, there are times when technology isn’t itself a barrier, but a combination of technology and skill sets might be. Staff might be focused on other priorities, but data is a very important piece.

HEALTHTECH: Do you have any tips for AI use or implementation success?

FROWNFELTER: You must have leadership in place to drive culture change. One of the things I see is that clinicians have to think differently. If medicine were completely intuitive, we wouldn’t need any additional support. We wouldn’t need AI. But in fact, it isn’t intuitive. Sometimes we’re wrong in what we think about a patient, and as a clinician, I have to accept that something I’m getting doesn’t make sense. I may be getting a new insight that’s different from what I thought, and it might be right. I have to take it as I would a lab test that helps me think differently about the patient. That’s a big deal. Without that willingness to think differently about their patients, clinicians will never act on those insights, and the patients won’t benefit.

Another key to success is on the technology side. The more we understand and insert the intelligence right into the clinician workflows, the better and easier it is for them to use it. In the end, we want clinicians to be more efficient and not have more work to do but have the right work to do. We do that through automation.


I’ll give you an example: Social workers, right now in some of our customer base, spend up to 50 percent of their time interviewing patients and collecting data on their social situation, on those drivers of risk. If we can provide those insights, and they can turn them into an action-oriented call or provide support to that patient instead of digging for information, it’s more fruitful, it’s better for the patient, and it’s better for the social worker or case manager.

Efficiency is a big deal. At the end of the day, we don’t want clinicians to be bogged down with more work but to be lifted with the right work.

HEALTHTECH: Where else is AI delivering value in healthcare today?

FROWNFELTER: Health system providers and payers are getting more and more access to social determinant data, but there’s a fallacy in how it gets used. The lowest level of maturity is to take that data at face value and say, “Well, this person lives in a food desert, so they must be nutritionally at risk.” That may or may not be true.

The next level would be using indexes: on a housing-stability scale, someone who has been in a house a long time looks good, and someone who has been there less than six months looks bad. But that’s not necessarily true either. We know people can move for the right reasons, and the move can be a good thing.

Really understanding a risk driver means understanding multiple pressures that push on each other. If I’m an older person who just moved out of the house where I’ve lived for 30 years to go live closer to my grandkids and my children because I’m widowed now and they’re going to be helping me, my happiness is higher. I’m with my grandkids and I’m getting transportation to my doctor’s visits, which is far better than staying in that house alone, in an aging neighborhood, without friends and with potentially costly repair needs. An index doesn’t have any idea about that, but that level of insight is what we can provide.
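A small hypothetical sketch makes the contrast concrete. The factors and scoring below are invented; the point is only that a single index treats every recent move as risk, while a composite of interacting pressures can reach the opposite, and correct, conclusion.

```python
# Illustrative only: a single index vs. a composite of interacting pressures.
def naive_index(months_in_home: int) -> str:
    """One-dimensional housing-stability rule: recent move means risk."""
    return "high risk" if months_in_home < 6 else "low risk"

def composite_risk(months_in_home: int, lives_near_family: bool,
                   has_transportation: bool, lives_alone: bool) -> str:
    """Let other pressures offset or amplify the housing signal."""
    score = 1 if months_in_home < 6 else 0   # a recent move adds pressure...
    score -= lives_near_family                # ...but nearby family relieves it
    score -= has_transportation               # reliable rides to appointments help
    score += lives_alone                      # isolation adds pressure
    return "high risk" if score > 0 else "low risk"

# The widowed grandparent from the example: just moved, but closer to family,
# with transportation and company.
print(naive_index(months_in_home=2))                                  # high risk
print(composite_risk(2, lives_near_family=True,
                     has_transportation=True, lives_alone=False))     # low risk
```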


HEALTHTECH: What do you think it will take for AI to be used more widely by smaller organizations?

FROWNFELTER: It will take demonstrated success, such as literature establishing AI in clinical science. Then it will take leaders in the field, such as providers and payers, endorsing it and promoting it. And because what we do is provide value, there must be a continued push toward value-based care that rewards value. If I’m in a fee-for-service world, I’m reimbursed for activity. It doesn’t matter how good it is or what the outcome is; I’m paid for procedures or doctor’s visits. That’s a fee-for-service world.

The more we migrate to a fee-for-value world, where I’m reimbursed as a provider for delivering the right care and better outcomes, with quality attached, the more valuable our AI becomes, because it helps drive better quality outcomes and identify risk in unrecognized patients so that risk can be mitigated.

HEALTHTECH: What will the AI landscape in healthcare look like 10 years into the future?

FROWNFELTER: One scenario is that it grows incrementally, but that kind of progress would be very disappointing. The other scenario is transformation, where AI becomes pervasive, and that’s where we need to get. As more data is shared among providers and health systems gain a holistic view of their patients and populations, it’s only natural that they would apply AI to better understand those patients’ needs and risks.

As we transition to new delivery models like hospital at home, it will be even more important that AI is used to identify risk in those patients in an atypical setting. As atypical models of care progress, I think that will also pull us more into the AI space. I could easily see AI in 10 years being so accepted and pervasive, in a good way, that clinicians depend on it for insights into diagnosis and treatment.
