

Jan 28 2026
Artificial Intelligence

Tech Trends: Healthcare IT Leaders Get Real on the State of AI in 2026

Many healthcare leaders have already gained experience from several AI pilots. Here’s where they stand on the technology today.

Artificial intelligence isn’t a new concept in healthcare, but the speed with which adoption has taken off is unusual for an industry that is typically slow-moving. Many healthcare organizations have gone from assessing generative AI with hype and skepticism in equal measure in 2023 to having several pilots under their belts by the end of 2025. However, despite the potential for time savings, cost savings and improved patient and clinician experiences, drawbacks remain and must be addressed as the industry moves forward with AI.

HealthTech surveyed seven healthcare IT leaders to get an understanding of where organizations are today on their AI journey. What’s paid off and what remains to be seen?

We spoke with Michael Archuleta, CIO at Mt. San Rafael Hospital in Trinidad, Colo.; Connie Barrera, corporate director and CISO at Jackson Health System in Miami; Dusanka Delovska-Trajkova, CIO at Ingleside in the Washington, D.C., metro area; Dr. Bimal Desai, vice president and chief health informatics officer at Children’s Hospital of Philadelphia; Brenton Hill, head of operations and general counsel at the Coalition for Health AI (CHAI); Dr. Eric Poon, chief health information officer at Duke Health in Durham, N.C.; and Jim Roeder, vice president of IT at Lakewood Health System in Staples, Minn., about their thoughts on AI today, successful use cases and how their approach to AI is changing as their experience grows.

DISCOVER: Here are the four AI tech trends to watch in 2026.

HEALTHTECH: Which AI use case has been most successful for your organization?

ARCHULETA: Our most successful AI use case has been AI-powered radiology detection, operationalized through our partnership with Radiology Partners. We’re using AI to help identify time-sensitive, life-threatening conditions such as intracranial hemorrhage, vessel occlusion, pulmonary embolism and cervical spine fractures earlier and more consistently. In rural healthcare, minutes matter, and this program is directly supporting faster escalation and safer patient outcomes. This isn’t theory; it’s real clinical impact happening right now.

BARRERA: Our most successful AI implementation has been intelligent automation of appointment rescheduling workflows for patients with complex, multi-order care plans. When patients need to reschedule, our AI system ensures all associated orders, referrals and care team notifications remain properly coordinated — eliminating the manual reconciliation that previously consumed significant physician and staff time while creating opportunities for clinical errors.

This has reduced the rescheduling-related workload, measured in FTEs, while freeing our care teams to focus on direct patient interaction rather than administrative coordination. The impact is that we have vastly reduced our no-show rates and made appointments available much more quickly, increasing patient satisfaction and improving care outcomes.

DELOVSKA-TRAJKOVA: The most successful AI use case at Ingleside is the implementation of an AI-supported concierge chatbot. The idea behind this development was to improve the first impression by providing clear and consistently correct answers to residents or guests. This helps newer concierges who may not have been with the organization long enough to remember all answers, or managers on duty who help concierges on weekends and may not be as seasoned as the concierges on these topics.


DESAI: The clinical use cases with the most traction have been the ones that remove “pebbles” for clinical staff — tools like ambulatory note summaries. We’re about to launch the inpatient version of these, to provide AI summaries of hospital course, and — similarly — we are scaling our rollout of ambient scribes. It’s been great to witness how pleasantly surprised our providers are when they try these tools out.

HILL: At CHAI, our most impactful work has been supporting health systems in how they evaluate, govern and implement AI solutions. We see the greatest success when organizations use shared best practices for transparency and ongoing monitoring before scaling AI across clinical and administrative settings. This approach helps providers remove costly intake bottlenecks and go to pilots and adoption faster with greater confidence. Overall, strong governance has become an enabling use case for nearly every other application of AI across the healthcare ecosystem.

POON: Over the past 12 months, we have deployed ambient scribes from Abridge throughout our ambulatory environment, resulting in phenomenal uptake and feedback from our providers. We currently have 2,500 active users generating more than 30,000 notes each week. This significant usage has led to measurable impacts on burnout reduction, provider satisfaction, on-time chart closures and clinician productivity. Based on our success in the ambulatory setting, we have expanded the technology to our emergency departments and inpatient environments.

ROEDER: I would say at this point in our AI journey the most successful use case has been the ambient listening AI solution from Microsoft/Nuance called DAX Copilot. It has allowed us to sunset our scribe program and helped our providers complete more timely documentation within our EHR.

READ MORE: Understand the common AI features for EHR platforms.

HEALTHTECH: What about AI use in healthcare excites you most?

ARCHULETA: AI excites me because it gives healthcare something we’ve been chasing for decades: speed with precision. When AI is deployed correctly, it becomes a force multiplier for clinical teams by helping detect critical findings faster, prioritize what matters most, and reduce the risk of human delay in high-volume environments. For rural hospitals especially, AI is a care equalizer, as it helps ensure a patient’s outcome isn’t determined by geography. I’ve always believed your ZIP code should never determine your healthcare outcomes, and AI is one of the most powerful tools we have to make that statement real. To me, that’s the mission: Use innovation to deliver faster answers, earlier intervention and better outcomes.

BARRERA: The convergence of AI capabilities across cybersecurity and clinical workflows excites me most with the potential to build systems that simultaneously protect patient data while enhancing care delivery. We’re seeing opportunities for predictive risk modeling that can identify vulnerable biomedical devices before they’re exploited, detect anomalous access patterns that indicate both security threats and workflow inefficiencies, and provide real-time decision support during crisis events such as ransomware attacks or natural disasters. What’s particularly promising is AI’s potential to reduce alert fatigue by intelligently triaging and correlating signals across security, clinical and operational systems, allowing healthcare teams to focus on what truly requires human judgment and expertise.

DELOVSKA-TRAJKOVA: What excites me the most is that AI has the potential to make sense of what matters and what can be ignored now that we are facing an influx of healthcare and wellness data from all kinds of sources, such as watches, rings, smart scales, smart mirrors and sleep trackers. It would be wonderful if AI could help with personalized detection. That would go a long way toward supporting healthier, more independent aging.

Michael Archuleta
“AI excites me because it gives healthcare something we’ve been chasing for decades: speed with precision.”

Michael Archuleta CIO, Mt. San Rafael Hospital

DESAI: We have many examples of time-consuming, “paraclinical” work that is required by providers and nurses. This includes things like complex scheduling, prior authorization/pre-certification, portal messaging, data review/synthesis, and beyond. It’s clear to me that if we can roll out meaningful AI/automation to aid with these tasks, it will reduce the burden of paraclinical work, allow providers to spend more direct time with patients, improve well-being and reduce burnout. The clearest signal of this has been ambient scribes that, in national studies, have significantly reduced burnout. I’ve worked as a professional informaticist for over 20 years, and this is the first time I’ve seen a single digital intervention have that kind of impact. As these tools get smarter and more integrated into clinical workflows, I’m optimistic we’ll see more successes like these. The operational benefits (revenue cycle as a key example) are also significant.

HILL: What excites me most is the increasing alignment between clinicians, health systems and developers around responsible, real-world AI adoption. We’re moving beyond pilots and hype toward the practical use of tools that improve quality, reduce burden on clinicians, and expand access to care — especially in under-resourced settings like community health centers. AI has a real potential to meaningfully support care teams, but only if it’s implemented with trust, transparency and clinical expertise. Seeing that consensus form across the CHAI community and broader healthcare ecosystem is incredibly exciting.

POON: We are excited about the power of AI to transform every aspect of clinical care and operations. We are currently exploring the use of AI-assisted computer vision to help us prevent falls and pressure injuries in the inpatient setting. Our nursing staff are excited to leverage this technology to reimagine the care model in the hospital environment. We are also actively piloting agentic AI technology that we have developed internally to reduce the burden of detailed chart review for patient referral, discharge summary preparation and clinical registry data abstraction. Early results have been very promising. In the administrative space, we have seen successes in the revenue cycle area, where AI has demonstrated significant benefits in streamlining the labor-intensive tasks of prior authorization, chart reviews for documentation improvement and coding.

ROEDER: I’m excited about the opportunities it brings to help innovate and push things forward. For underserved healthcare providers, it offers the chance to deliver high-quality care with reduced overhead and expenses, which would allow these organizations to keep their doors open for their communities.

EXPLORE: Revolutionize prior authorizations with AI.

HEALTHTECH: What about AI use in healthcare still concerns you?

ARCHULETA: What still concerns me is governance, not the technology itself, and how it’s implemented, monitored, secured and trusted. AI must never become a black box that people blindly follow. It needs transparency, validation and clinical oversight to prevent bias and ensure accuracy. I’m also deeply focused on cybersecurity, because AI increases complexity and expands the attack surface in an industry that is already a prime target. The right approach is simple: AI must be held to the same standard as medicine — safe, accountable and continuously monitored.

BARRERA: My primary concern is maintaining regulatory-compliant audit trails when AI makes decisions affecting patient care or data access. In this realm, we need to demonstrate not just what the AI decided, but why, and in ways that satisfy HIPAA, Criminal Justice Information Services and other relevant requirements. Each AI system integrated can present a new attack surface, and I’m concerned about adversarial attacks on healthcare. These attacks include prompt injection vulnerabilities in clinical chatbots, and the risk of AI systems being manipulated to make harmful decisions during the critical window when security frameworks for healthcare AI are still maturing.

DELOVSKA-TRAJKOVA: What concerns me is that there are many AI-powered platforms, apps and services potentially working against each other. This creates a risk that nobody has warned us about and that we may not be able to effectively recognize or monitor. Think of deepfake fears, which take on a much riskier dimension in healthcare.

Dr. Eric Poon
“There is still a lot of work to be done to translate the promise of generative and agentic AI into measurable benefits at the bedside and healthcare operations.”

Dr. Eric Poon Chief Health Information Officer, Duke Health

DESAI: I have three major concerns: deskilling, automation bias and the environmental impact. I have concerns about deskilling — that providers who use AI avidly may become dependent on these tools. But history also suggests that many of the concerns about deskilling are unfounded. Nobody laments that we’ve been deskilled in the use of slide rules or deskilled in the ability to calculate which vaccines are due for a child or deskilled in our memorization of a complex order set. These are all functions we’ve gladly abdicated to the computer. The related risk is that we’re even more vulnerable when the computer isn’t available (for example, during a cyberattack).

I also have concerns about automation bias, which I think of as related to but separate from deskilling. The risk is that people won’t know to question the output of AI — they stop double-checking. And at least today, with the non-zero risk of hallucination or “garbage in/garbage out” AI summaries, I worry that errors will go unchecked and be perpetuated throughout the system.

Finally, I have significant concerns about the environmental impact of AI, the power consumption of data centers and the strain on the grid. I think health systems should be aware of this impact and work to develop guidelines for the “appropriate” use of AI — for example, favoring local models where appropriate and sufficient, depending on the scenario. As tech companies develop strategies to address power consumption and environmental impact, health systems should favor vendor partners who are showing a true commitment to green solutions. To me, that’s the one area where health systems have the least control today, other than simply choosing not to use AI.

LEARN MORE: Apply these AI data governance strategies for success.

HILL: My biggest concern is organizations moving faster than sufficient validation, monitoring, governance or clinician engagement can keep pace. AI deployed without clear governance can introduce bias and safety risks that undermine trust. Addressing those gaps is essential if we want AI adoption to be secure and trusted.

POON: There is still a lot of work to be done to translate the promise of generative and agentic AI into measurable benefits at the bedside and healthcare operations. We know that not all solutions will be the right fit for us, so healthcare leaders need to learn about the promise and limitations of AI, so that we can focus our limited energies on the most promising solutions that address existing pain points. In addition, many of the latest AI tools still work like black boxes, so we still need to mature our ability to monitor their performance over time to ensure safety and effectiveness.

ROEDER: There are still concerns about the cost to implement, bias within large language models, governance within the organizations using AI, and overall trust from all parties involved.


HEALTHTECH: How has your approach to AI changed?

ARCHULETA: My approach has matured from “exploring AI” to engineering outcomes. In the early days, the conversation was about innovation and possibility. Now it’s about workflow integration, measurable value, safety and sustainability. We’ve moved toward disciplined execution, governance frameworks, success metrics, clinical escalation pathways and operational accountability. AI isn’t a side project anymore; it’s becoming part of how modern healthcare operates, and leaders have to treat it like an enterprise clinical strategy, not a gadget.

BARRERA: We’ve fundamentally shifted from running isolated AI pilots to establishing comprehensive AI governance frameworks that treat AI systems as critical infrastructure requiring the same rigor as other systems and platforms. This means every AI proposal undergoes formal vendor risk assessment, data governance review, security architecture evaluation and regulatory compliance validation before touching patient data or clinical workflows. Rather than “AI replacement” thinking, we design for human-AI collaboration with documented workflows that keep humans in the loop for critical decisions. We’re focused on metrics that verify whether AI actually improves outcomes rather than creating new inefficiencies. We are also working on our own large language model and we’re very excited about this initiative to reap the ultimate benefit from our own data.

DELOVSKA-TRAJKOVA: AI is confusing for many. People expect a magic wand, but what they get is a multipurpose toolkit that still requires structure and accountable leadership. In many cases, leadership is opening the toolbox before asking the right questions. Another point is that AI means little without real resources behind it, meaning money and people. Finally, AI can support solutions; humans must own them, which in my experience explains why AI is so hard. It’s not that we don’t have technology. It’s a culture change that takes time.

CONSIDER: Why does culture, not code, determine AI success?

DESAI: We understand now that our vendor-released AI tools require extensive clinical validation, both for clinical accuracy as well as for any “special considerations” relevant to pediatrics. As an example, for our ambulatory AI summaries, up to 200 providers spent nearly a year validating the tool before we felt comfortable turning it on for all users. Given the huge number of AI use cases our EHR vendor is developing, we need to think differently about how we resource the teams responsible for testing, validation, training and deployment of these tools. Having a dedicated AI evaluation core — composed of data and analytics, informatics, researchers and clinical stakeholders — as well as a strong approach to AI governance, including security, privacy, ethics, safety and legal, will be critical to our ability to leverage these tools within our health system while minimizing risk.

HILL: As an industry-led coalition of more than 3,000 members, our approach continues to evolve as we listen closely to health systems, clinicians, startups, developers and the broader healthcare ecosystem. Because health AI is advancing so rapidly, we constantly adapt based on real-world implementation experience, emerging technical capabilities, and ever-evolving needs across different care settings. That feedback from our members and community helps ensure our guidance remains practical, relevant and grounded in what’s actually happening in healthcare.

Dusanka Delovska-Trajkova
“AI is confusing for many. People expect a magic wand, but what they get is a multipurpose toolkit that still requires structure and accountable leadership.”

Dusanka Delovska-Trajkova CIO, Ingleside

POON: With AI entering the mainstream for so many aspects in healthcare, we have emphasized agility in our AI selection and evaluation approaches. We have developed approaches to conduct rapid evaluations of AI solutions in low-cost and low-risk ways so that we can more rapidly identify those that might bring value to our patients and clinicians. However, for solutions that we test, we still want to make sure our clinicians or patients find them helpful and that they improve care or outcomes in measurable ways.

We have also become very interested in partnering with select AI solution vendors to extend solutions of high value to adjacent use cases through co-development so that both our partners and our health system can remain at the forefront of using AI to address today’s healthcare challenges.

ROEDER: I think looking back at the last year we have been much more open to adopting AI technologies or piloting them to see what value they could bring us. There are new AI tools being developed and released within our EHR platform that we are testing and piloting on a quarterly basis. We also have made sure it remains one of our system goals to continually look at the adoption and implementation of AI technologies where the fit is right.
