
Mar 02 2026
Artificial Intelligence

Q&A: Clearing Up Some Healthcare AI Misunderstandings

Three academic perspectives offer insights on the persistent misconceptions about artificial intelligence in healthcare.

There seems to be a wide range of solutions in healthcare touting artificial intelligence–powered features, from smart assistants to communication platforms. But whether these solutions are showing tangible business results remains up in the air.

Nearly four years since the public launch of ChatGPT, there are still misconceptions about AI and the data management processes required to make such solutions work in healthcare. HealthTech reached out to three academic AI experts to assess the gap between expectations and reality: Hongfang Liu, professor and vice president of learning health system at The University of Texas Health Science Center at Houston (UTHealth Houston); Dr. Lee Schwamm, senior vice president and chief digital health officer at Yale New Haven Health System and associate dean for digital strategy and transformation and professor of neurology and biomedical informatics and data sciences at Yale School of Medicine; and Xiaoyan Wang, research professor in health policy and management at Tulane University.

DISCOVER: These are the four AI tech trends to watch in 2026.

HEALTHTECH: What are some misconceptions about AI use in healthcare?

WANG: A lot of people think AI is so powerful that it can automate and perfectly normalize different formats from all the different sources, without much human oversight. No — totally not there yet. It remains the most challenging problem in healthcare on the data end. On the other hand, natural language processing itself, or large language models, can accelerate or reduce the time we spend on normalization, but it is not really automated yet.

SCHWAMM: The first one is that AI is smarter than doctors and will replace them. That's just not an accurate understanding of where AI will contribute value in healthcare. I think the second one is that hallucinations are a sign that AI models are broken or not working properly. The reality is that hallucinations are just another description for when AI prediction models don't get it quite right. They're just prediction models. Hallucinations are probably the wrong term; I would say inaccurate predictions. The third common misconception, I think, is that patients can use chatbots by themselves to get solid medical advice, and that is also a misunderstanding of what chatbots do. Chatbots will outperform physicians in certain circumstances, but they don't have the ability to absorb context that is not directly provided to them, so they require context to be curated and provided.

LIU: The biggest misconception is that AI will eventually replace clinicians — this all comes from talk that radiologists will be replaced, those types of things. I think it’s a misconception because AI actually depends on humans for data generation and interpretation, and AI alone cannot function as an agent independently, due to the legal liability. We're very far away from agent-based decision-making, for many reasons. In reality, in healthcare, AI can augment clinical expertise and reduce some of the repetitive tasks, but the judgment and contextual understanding for healthcare delivery still need a human agent, not an AI agent.

HEALTHTECH: What does data normalization mean to you, and how do you approach it in your work?

WANG: There are 26 ways of saying “A1C test,” and they're all in different formats. Without clinical knowledge, it's very hard to capture all of them, which means you may capture an A1C test for only half of the patients. Those patients are actually very well treated, but their numbers do not contribute to the final results. Normalization is converting all of those variations into one; we call it the common data model. In that sense, normalization is super important.

SCHWAMM: You have to understand how to handle diverse data formats. There's no pretraining on what you're likely to encounter. There are also inconsistencies that come from each of these different data sources. So, you have diversity of data formats, and then dealing with unstructured data such as text and images means that ensuring data privacy and security while maintaining data quality is incredibly important and challenging. You want to do that in a way that doesn't either generate the loss of important information or skew the results in a direction that is not consistent with a careful, human, manual review of the same data. I think people believe wrongly that an AI algorithm can reliably de-identify unstructured data. That's a very common misconception.

LIU: We know that it’s difficult to have static, normalized data for many different use cases. AI can facilitate that data normalization process with an AI-enabled data normalization framework. There are many standards to adopt, and if the standards are misaligned with the use case, then you also need to be agile in your process. It's critical to have that normalization framework with the AI-enabled capability. This will facilitate a much faster process for data use.

HEALTHTECH: To what extent does the responsibility for AI-driven clinical research workflows lie with clinicians versus IT leaders?

WANG: Clinicians definitely need to ingest their knowledge and experiences into the workflow. Their domain knowledge, particularly in clinical evaluation and also the ethical application, is also very important, as well as the interpretation of AI outputs in a whole workflow and in the context of patient care. IT leaders handle how to set up the infrastructure, data security, interoperability between different modules of the system and compliance. They also need to ensure scalable deployment, and that it works not only for a few physicians but for all physicians, and fits into the whole workflow seamlessly.

READ MORE: Take advantage of data and AI for better healthcare outcomes.

SCHWAMM: There is an important and unaddressed question about who owns the accountability for the responsible use of AI. I think we need a shared model of responsibility or liability that incorporates both traditional product liability concepts, from the vendor who developed the algorithm, to the IT leaders who determine how to deploy that algorithm, to the end users who are then expected to use it with good clinical practice principles in mind. I think everybody owns a piece of that shared responsibility. You don't hand a power tool to a toddler because you know they don't have the skill and experience to use it safely, even if it comes with all sorts of product warnings and instructions for use. We must make sure that our end users are properly trained and skilled in how to use these tools, but the vendors also have to take some responsibility for ensuring that their products get used in a manner that is aligned with their indications.

LIU: It should be a shared responsibility. Yes, clinicians need to define meaningful use cases, validate the results and ensure they are scientifically rigorous and reproducible. IT and informatics leaders are there to ensure data quality, reliability, compliance and model governance, because clinical data itself is used for clinical research. That data generally has a privacy or regulatory component associated with it, so neither group can function alone. Organizations cannot treat AI as simply an IT solution. The field struggles with adoption and impact, so co-ownership is necessary for trustworthy, efficient AI deployment in clinical research.

HEALTHTECH: What are some myths surrounding the business objectives for AI in healthcare?

WANG: With AI, people definitely think cost reduction will happen tomorrow, because a lot of things are automated. I think the ROI side of the story is not clearly set up for most of the tasks. A lot of companies work on applications, but we’re still not seeing the clear ROI because it's still relatively early. On the other hand, if we focus on a small, very defined task, we clearly see the cost reduction.

SCHWAMM: I think the big myth is that AI in healthcare is focused on improving health outcomes. The reality is that most of the AI that's deployed right now is focused on either cost containment, revenue growth or reducing provider burden. Very few of these algorithms are directly impacting patient care itself. The truth is that most of the applications deployed right now don't really touch patient care or clinical care directly. Most of them are back-office processes, coding support, making life a little easier on the providers. Those are the areas of lowest risk, so that's where most of the work has been focused.

LIU: I think the biggest myth in business objectives is about cost savings. We know AI can make things more efficient, but its benefit will come in improving care quality, reducing clinical burden, improving diagnosis and supporting better population health insights. AI is not going to save money for the healthcare system, but it is a way to manage a consequence of digital transformation: being handed a lot of data. We need the technology to support us with data-driven insights and automated workflows to address the digital overload we face.

HEALTHTECH: How can academic researchers, health systems and tech leaders work together to implement AI in healthcare?

WANG: I think the clinicians need to really tell IT leaders, “Here are the pain points I spend hours of my time on every day, and I think it's very repetitive. Can you automate that?” I think IT leaders need to really ingest knowledge coming from physicians, and they need to understand very clearly the scenario and the workflow. There is a very interesting report from MIT that basically says 95% of AI pilots are failing, because those things are not really working together. IT just works on technologies, and physicians just work on the domains. They don't really integrate all those things together.

SCHWAMM: Shared governance models within academic health systems that are transparent but meaningful are a vital ingredient. But I also think technology leaders — and here, I mean also outside of health systems; Silicon Valley and other tech startups — need to step forward and commit to more rigorous training methods for their algorithms and also join in the responsibility for postdeployment monitoring that right now falls exclusively on health systems. Given the cost, complexity and ever-changing nature of these algorithms, placing that burden solely on health systems as deployers of AI is nonsustainable.

LIU: Each group can contribute its strengths. Academic researchers can bring methods and evaluation expertise. Health systems can have a top-down strategy, bring the clinical context to the AI implementation and support workflow redesign. Technology leaders need to deliver engineering, scalability and the deployment capabilities. AI deployment to us is not a technology problem. Technology is actually the easiest in this whole process. To really deliver value, the partnership needs to start early in defining and formulating an AI-enabled workflow. That needs to be co-designed. Otherwise, you will face a lot of issues.

RELATED: How do smart hospitals push forward from pilot to practice?

HEALTHTECH: How do you keep healthcare staff motivated to adopt AI?

WANG: Many clinicians still don't trust AI, which is why they don’t want to use it. Engage them at the beginning of the project, understand their needs, ingest their domain knowledge into the system and have them evaluate it. Through evaluation, they can see, “This makes the same judgment I do in my daily work, so I can use it.” I think trust is the No. 1 thing in the whole AI space, because a lot of users do not have it. Engage them at the beginning, and showcase the outputs so they will really be motivated to work with you and to use the system after deployment.

SCHWAMM: Primarily, and just as with anything else, “what's in it for me” is always important to think about when considering adoption. I think we have to focus on experiential learning with this technology: Show them how it can help them in their day-to-day work, how it can radically transform how they do their work to create wholly new ways of doing things, and let the benefits of implementing AI accrue in part to the people who are using the AI, not just to the bottom line of the corporation.

LIU: They need to see that AI can directly generate data sets and help them do things they couldn't do before. With a traditional machine learning approach, you first need to generate labeled data sets. Because labeled data sets cost a lot of money and require manual effort, we usually opt for much smaller ones, but we use AI to generate a weak label, with a human doing verification. This helps them dramatically; when people see it, they gain confidence in the technology and trust around it. When you have the trust, then with the evidence supporting it, there will be no problem in adoption.
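The weak-labeling workflow Liu describes can be sketched as follows. Everything here is hypothetical: a trivial keyword rule stands in for the AI labeler, and a random sample stands in for the human verification step:

```python
import random

# Hypothetical AI labeler: flag clinical notes that mention smoking.
# In practice this would be a language model, not a keyword rule.
def weak_label(note: str) -> int:
    return 1 if "smok" in note.lower() else 0

notes = [
    "Patient reports smoking one pack per day.",
    "No history of tobacco use.",
    "Former smoker, quit in 2015.",
    "Denies alcohol use.",
]

# The AI labels every record cheaply...
labels = [weak_label(n) for n in notes]

# ...while a human verifies only a small random sample of the labels.
to_verify = random.sample(range(len(notes)), k=2)
for i in to_verify:
    print(f"verify note {i}: weak label = {labels[i]}")
```

The design choice is the one Liu points to: labeling everything by hand is expensive, so the model does the bulk labeling and human effort is spent only on verifying a sample, which is what builds clinicians' trust in the labels.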

HEALTHTECH: What type of skills do clinicians need to better implement AI?

SCHWAMM: They need to develop some prompt-engineering skills. They need to understand what an AI model does and then how you can fine-tune the performance of that model through prompting to get the output that you desire. They need basic AI literacy, but also the space to be creative, armed with synthetic data and digital sandbox or playgrounds where they can experiment. That, to me, is the recipe for adoption and transformation for the clinical application of AI.

LIU: I believe clinicians need some foundational data literacy and a basic understanding of AI model mechanisms, as well as the ability to interpret results. But I don't think they need to become data scientists, AI scientists or AI engineers to do AI deployment. They need enough knowledge in those topic areas, but they shouldn't spend their precious patient care expertise doing AI implementation.

EXPLORE: Overcome AI implementation hurdles in healthcare.

HEALTHTECH: In reality, how does AI translate from research to effective business processes in healthcare?

WANG: I think the first thing is to really understand what's needed, what are the pain points, then build the proof-of-concept research to really solve those pain points by automating the process. And that is the first thing — the development of point-of-care algorithms that come from real clinical needs. Then, you move to the second thing, which is the pilots in clinical settings. The third thing is to scale it.

SCHWAMM: This is a process with several steps. It begins with rigorous evaluation pre-deployment: selecting tools with proven value, followed by confirmation of that performance when tested inside your organization. That validation might happen through prospective or retrospective silent evaluation, and it requires simple, convenient workflow integration rather than complex and burdensome workflow changes. You can't ask people to change the way they do everything to get value from AI. The AI must adapt to existing clinical or business workflows and be harmonized with existing technology platforms rather than run as a separate process.

LIU: I actively research how to make sure AI innovations address real-world problems in healthcare using three words: hope, trust and science. The hope reflects addressing problems in the real world. We're not trying to create a problem to solve; we're addressing a problem that's already there. Physician burnout is a true problem, and AI scribing is a solution that will get wide adoption because that's the real-world problem physicians face. Trust can be easily established because you are solving that problem. You are not trying to replace clinicians but to augment them in solving one of the problems they face related to clinical documentation. The science is basically trying to understand and gather data on the outcomes associated with the deployment. This hope-trust-science framework can move your AI from a technology experiment to a reliable business capability.

Greg Mably/Theispot