
Nov 12 2025
Artificial Intelligence

CHIME25: How Healthcare IT Leaders Can Drive Human-Centered AI Adoption

Artificial intelligence can transform clinician workflows and improve patient outcomes, but its adoption requires intention and guardrails.

Value, transparency, security, scalability and reliability are some of the words that come to mind when healthcare IT leaders describe artificial intelligence success. While healthcare often falls behind other industries in technology adoption, it’s one of the top industries implementing AI tools. In fact, it’s one of the top three industries in the use of agentic AI, according to a recent McKinsey survey.

However, just adopting AI isn’t enough. It has to work. Clinicians want it to reduce documentation burden while improving patient outcomes. Leadership wants it to increase productivity, staff well-being and the bottom line.

At the 2025 CHIME Fall Forum in San Antonio, healthcare IT leaders shared current use cases, the keys to ensuring the success of AI projects, and how organizations can balance the pros and cons of AI while keeping patients and humans centered.


Proving the Value of AI in Healthcare

Many healthcare organizations are already seeing ROI from their AI tools. For example, the University of Kansas Health System wanted to mitigate documentation burnout, which became more prevalent during the pandemic. To do so, the organization implemented an ambient dictation tool. Chris Harper, CIO and senior associate vice chancellor of AI for the health system, said the first tool the team tried didn’t work as intended, and as a result, the team switched tools midproject.

After starting with a small group of 25 providers, the University of Kansas Health System saw savings of two hours per day per physician. The organization then scaled the solution and continued to measure outcomes to ensure the ambient dictation tool remained aligned with its expectations. With the time saved in their workflows, physicians could choose whether to see more patients, reducing burnout and lowering their cognitive load.

“It’s been a tremendous success for us, and we plan to roll out additional capabilities,” said Harper. “It’s important to focus on solving the right problem.”

Some healthcare organizations are using AI in the clinical space to improve patient outcomes. For example, San Joaquin General Hospital in California uses an AI algorithm that has helped improve care for stroke patients who arrive during the extended window of six to eight hours. Previously, patients arriving during that window would likely receive only supportive care because of the reduced likelihood of restoring function after that much time had passed. However, the algorithm can assess the percentage of salvageable tissue, which has led the hospital to update its extended-window protocol, giving more patients access to function-restoring treatment. Joseph Izzo, chief medical information officer for the hospital, said the tool has created favorable outcomes for patients presenting with stroke in that extended window.


On the payer side, Michigan-based Corewell Health is using generative AI to label and sort prior authorizations. Nichole Niesen, director of automation, explained that the AI isn’t making decisions, but is augmenting the team members who are then able to take care of members faster. The initiative has resulted in $500,000 in redirected labor savings.

Another way AI can provide value is by improving patient handoffs between departments. Izzo said that clinicians looking at the same note can have different opinions about how helpful it is, depending on their specialty. San Joaquin General Hospital uses one AI scribe for the inpatient, outpatient and emergency departments. The ED physicians thought the notes were perfect, while inpatient doctors found them overly verbose, leading to downstream patient issues.

Izzo pointed out that one of the problems was that the organization didn’t have representatives from each department at the table from the beginning; only leadership had considered how the tool fit into existing workflows. Once those representatives were brought in, physicians were able to reach an understanding on the tool and on a notes issue that had existed for decades. San Joaquin General Hospital shared its findings with the vendor, which was receptive to those concerns and upgraded the AI to allow for customization and a better handoff.


Considerations for AI Implementation in Healthcare

Niesen’s advice to healthcare organizations considering AI adoption is to create an environment where it’s okay to fail fast and try again. Corewell Health is currently piloting its fourth service agent solution. Rather than rolling out a product that doesn’t work, the organization fails fast and keeps going to find a solution that enables clinicians to solve their IT issues quickly and on their own using agentic AI.

The University of Kansas Health System has been piloting Microsoft Copilot within its IT and informatics teams and would like to roll it out to the broader enterprise. However, Harper said, the organization recognizes that there isn’t enough trust.

“Technology is meant to be a learning system, so it’s not going to be perfect,” he explained, adding that the board expects perfection from AI tools. He pointed out that even humans aren’t perfect and that the industry needs to shift its mindset. He recommended peer-to-peer training with superusers to build trust and show staff how AI can make their lives easier.

Generative AI is where most healthcare organizations are focusing their attention, but interest in agentic AI is on the rise. However, Izzo pointed out that there are disparities: Large health systems can afford agentic AI tools that are often out of reach for smaller organizations. He sounded the alarm on the need to make agentic AI more accessible and affordable for organizations of all sizes.

Trust is another obstacle to agentic AI. Niesen explained that organizations are finding success with generative AI because people interact directly with its output: It’s either accurate, or someone will catch the error and fix it. AI agents, by nature, act on their own, and Niesen recommended that organizations be careful about allowing agents to make decisions without human verification. She also agreed with Izzo on agentic AI’s affordability.

“We need to push back on the price tag,” she said.


Keeping Humans Centered in the AI Conversation

During a session on redefining the human-AI partnership in healthcare, Matt Troup, solution principal for clinician documentation at Abridge, said that when he worked as a clinician, one of his biggest challenges was figuring out how to best use his time so he could meet the needs of both his patients and his family. AI can help clinicians transform their workloads, giving both their patients and their families the time they deserve.

Pallavi Ranade-Kharkar, enterprise director of research informatics and genomics at Intermountain Health, shared how her organization is giving clinicians time back to focus on patients by using AI to handle Epic in-basket messaging. The tool drafts a response based on the patient’s question or comment, backed by context from the patient’s medical record. The clinician can review the message before it’s sent.

“This intervention has simplified and streamlined workflows, and real-time savings of 20 to 30 seconds per message have occurred,” she said. “This is reducing ‘pajama time’ and helping clinicians to focus on the things they do best.”

When implementing AI, Russell Yeager, senior vice president and CIO at Encompass Health, emphasized that augmented intelligence and people need to work together. His organization focuses on real intelligence rather than AI.

“We take AI and combine it with our clinical and business intelligence to provide real intelligence to people involved in decision-making,” he said. Yeager pointed out that while AI can be autonomous when it comes to IT operations, it needs guardrails when used in patient care.

While AI brings tremendous potential to the healthcare industry, it also comes with risk. Ranade-Kharkar said that inaccuracies in data, a lack of transparency and increased security and privacy vulnerabilities are some of the biggest risks.

“At the center of healthcare is a human, a patient. Human-to-human interaction is the core of a positive patient experience. Keeping that in mind, we have to come up with guardrails to ensure the responsible use of AI,” she said, adding that identifying key performance indicators and measuring them is a big part of what AI success means.

Tracey Touma, cybersecurity business liaison at Cleveland Clinic, explained that AI governance is another important factor in AI success.


“We have to start there. People want that shiny new toy and to move forward quickly. AI is moving fast and furious, but we have to make sure that we have governance in place to implement AI securely and safely,” she said. “We’ve all heard about hallucinations and false positives. The data is only as good as what’s in the system. Make sure the data is good, secure and protected.”

Risk assessment and documentation should play key roles in AI governance, according to Ranade-Kharkar. Having an AI governance committee through which organizations evaluate internal and external products is the first step of AI adoption. Algorithm drift can occur and introduce bias, so it’s crucial that organizations conduct continuous quality monitoring of how well their AI tools are performing. Organizations should also maintain an inventory of their tools, with transparency into how the vendor originally tested and validated the algorithms, she added.

Intermountain Health recently migrated its electronic health record to Epic, and the organization is leaning on the vendor to help implement and operationalize the system, especially given the long list of AI tools available.

“We haven’t turned them all on yet. We’re doing it in an intentional and thoughtful way,” she said. “We’ll turn them on one by one as we’re sure we can monitor them and keep that level of quality.”

Touma emphasized the need for intention in healthcare AI adoption. Having stakeholders from across departments and business areas is important to ensure the organization isn’t missing anything. And the organization shouldn’t just buy a shiny new tool; the tool should demonstrate cost savings or address concerns about patient experience, patient outcomes or the caregiving experience.

“At the end of the day we’re in business to take care of patients,” she added. “If AI can solve those three problems, then we need to know how, the impact, the cost and the business use case.”

Keep this page bookmarked for our coverage of the 2025 CHIME Fall Forum. Follow us on X at @HealthTechMag and join the conversation at #CHIME25.
