HEALTHTECH: What does data normalization mean to you, and how do you approach it in your work?
WANG: There are 26 ways of saying "A1C test," and they're all in different formats. Without clinical knowledge, it's very hard to capture all of them, which means you may capture an A1C test for only half of the patients. The rest may actually be very well treated, but their numbers do not contribute to the final results. Normalization is converting all those variations into one; we call it the common data model. In that sense, normalization is super important.
SCHWAMM: You have to understand how to handle diverse data formats. There's no pretraining on what you're likely to encounter. There are also inconsistencies that come from each of these different data sources. So, you have diversity of data formats, and then dealing with unstructured data such as text and images means that ensuring data privacy and security while maintaining data quality is incredibly important and challenging. You want to do that in a way that doesn't either cause the loss of important information or skew the results in a direction inconsistent with a careful, manual human review of the same data. I think people wrongly believe that an AI algorithm can reliably de-identify unstructured data. That's a very common misconception.
LIU: We know that it’s difficult to have static, normalized data for many different use cases. AI can facilitate that data normalization process with an AI-enabled data normalization framework. There are many standards to adopt, and if the standards are misaligned with the use case, then you also need to be agile in your process. It's critical to have that normalization framework with the AI-enabled capability. This will facilitate a much faster process for data use.
HEALTHTECH: To what extent does the responsibility for AI-driven clinical research workflows lie with clinicians versus IT leaders?
WANG: Clinicians definitely need to bring their knowledge and experience into the workflow. Their domain knowledge is very important, particularly in clinical evaluation, ethical application and the interpretation of AI outputs across the whole workflow and in the context of patient care. IT leaders handle how to set up the infrastructure, data security, interoperability between different modules of the system and compliance. They also need to ensure scalable deployment: that it works not only for a few physicians but for all physicians, and fits into the whole workflow seamlessly.
SCHWAMM: There is an important and unaddressed question about who owns the accountability for the responsible use of AI. I think we need a shared model of responsibility or liability that incorporates both traditional product liability concepts, from the vendor who developed the algorithm, to the IT leaders who determine how to deploy that algorithm, to the end users who are then expected to use it with good clinical practice principles in mind. I think everybody owns a piece of that shared responsibility. You don't hand a power tool to a toddler because you know they don't have the skill and experience to use it safely, even if it comes with all sorts of product warnings and instructions for use. We must make sure that our end users are properly trained and skilled in how to use these tools, but the vendors also have to take some responsibility for ensuring that their products get used in a manner that is aligned with their indications.
LIU: It should be a shared responsibility. Yes, clinicians need to define meaningful use cases, validate the results and ensure they are scientifically rigorous and reproducible. IT and informatics leaders are there to ensure data quality, reliability, compliance and model governance, because clinical data itself is used for clinical research. That data generally has a privacy or regulatory component associated with it, so these teams cannot function alone. Organizations cannot treat AI as simply an IT solution. Healthcare organizations struggle with adoption and impact, so co-ownership is necessary for trustworthy, efficient AI deployment in clinical research.
HEALTHTECH: What are some myths surrounding the business objectives for AI in healthcare?
WANG: With AI, people definitely think cost reduction will happen tomorrow, because a lot of things are automated. But the ROI side of the story is not clearly established for most tasks. A lot of companies work on applications, yet we're still not seeing clear ROI because it's still relatively early. On the other hand, when we focus on a small, very well-defined task, we clearly see the cost reduction.
SCHWAMM: I think the big myth is that AI in healthcare is focused on improving health outcomes. The reality is that most of the AI deployed right now is focused on cost containment, revenue growth or reducing provider burden. Very few of these algorithms directly touch patient care or clinical care. Most of them are back-office processes, coding support, making life a little easier on the providers. Those are the areas of lowest risk, so that's where most of the work has been focused.
