On stage for the HIMSS23 keynote discussion on Tuesday, April 18, 2023, in Chicago (from left): Lovelace AI CEO and Founder Andrew Moore; Peter Lee, Corporate Vice President of Research and Incubations at Microsoft; Kay Firth-Butterfield, Executive Director for the World Economic Forum’s Centre for Trustworthy Technology; author Reid Blackman, Founder and CEO of consultancy group Virtue; and Mayo Clinic CIO Christopher J. Ross. (Photo courtesy of HIMSS/Lotus Eyes Photography)

Apr 18 2023

HIMSS23: Healthcare Leaders Need to Take Control as AI Surges into a New Era

A HIMSS23 keynote discussion sheds light on the current state of artificial intelligence and machine learning in the industry, where to be cautious and what’s next.

Healthcare leaders are looking to make significant investments in artificial intelligence and machine learning solutions to better support their workforce and enhance care delivery.

That’s according to Philips’ Future Health Index 2023 for the U.S., which reports that 35 percent of healthcare respondents say they’re now investing in AI for integrating diagnostics, up from 17 percent last year.

With generative AI tools such as ChatGPT dominating recent headlines, the industry is keeping a close watch on how such solutions can be used now and in the next several years. But for these emerging technologies to best serve all communities, ethical and patient safety considerations need to be at the forefront.

This year’s HIMSS global conference and exhibition, held April 17-21 in Chicago, explored these concerns under the theme “Health That Connects and Tech That Cares.”

Tuesday’s opening keynote discussion, titled “Responsible AI: Prioritizing Patient Safety, Privacy, and Ethical Considerations,” featured Mayo Clinic CIO Christopher J. Ross, who served as moderator, and four panelists: Lovelace AI CEO and founder Andrew Moore; Kay Firth-Butterfield, executive director for the World Economic Forum’s Centre for Trustworthy Technology; Peter Lee, corporate vice president of research and incubations at Microsoft; and author Reid Blackman, founder and CEO of consultancy group Virtue.


After Centuries of Automated Imaginings, What’s Next?

Before the panelists began their discussion, Ross offered historical context: people have long imagined autonomous tools and mechanized ways of being. He reached back past the 1950s, before modern usage of the term AI, to stories from antiquity, recalling the myth of Icarus and his assisted flight.

“While we dreamed about technology, we’ve also been worried about the consequences,” Ross said.

Just as Daedalus advised his son to take “the middle way” on his flight, to avoid plunging into the sea or getting scorched by the sun, should those interested in AI/ML also take heed? On this “long arc of human accomplishment and imagination,” what can people achieve next?

Ross said that rather than “big AI” solutions tackling grand tasks, such as machines diagnosing diseases better than clinicians, “little AI” solutions are already listening, writing and helping with everyday tasks, changing how people live and work.

WATCH: CISA’s deputy director talks healthcare cybersecurity at HIMSS23.

At the start of the discussion, Moore said that AI solutions will mature at scale through the work of healthcare leaders pursuing actual use cases, not through the tech giants. He also stressed that AI development and deployment are distinct and must not be conflated.

“Don’t wait for a small number of experts in Silicon Valley,” he said.

Lee added that the “healthcare community needs to assertively own decisions” related to AI. He highlighted the potential of generative AI to help patients interpret clinical decisions, improve clinical note-taking and enhance medical education and research.

Tensions Surrounding AI Concerns and Ethical Considerations

From a legal and ethical perspective, Firth-Butterfield highlighted issues of persistent bias and access. “How do we think about fairness, accountability? Who do you sue when something goes wrong? Is there somebody to sue?” she asked.

She also questioned the kind of data shared with generative AI systems, and brought attention to recent news of Samsung employees unintentionally leaking confidential information to ChatGPT. “That’s the sort of thing that you are going to have to be thinking about very carefully as we begin to use these systems,” she said.

Last month, Firth-Butterfield signed an open letter calling for a six-month pause on the development of AI systems “more powerful than GPT-4.” She said she signed because it was important to think deeply about this next major step in AI development.

“What worries me is that we are hurtling into the future without actually taking a step back and designing it for ourselves,” Firth-Butterfield said.

DIVE DEEPER: Learn about Banner Health’s unified data model journey.

She stressed the importance of defining the problem and improving public understanding of AI. “What is it that we want from these tools for our future, and to make that really equitable?” she asked. “How do we design a future that enables everybody to access these tools? That’s why I signed the letter.”

Blackman raised questions about the black-box nature of AI models and characterized tools such as GPT-4 as “a word predictor, not a deliberator.”

“What’s the appropriate benchmark for safe deployment?” Blackman asked. “If you’re making a cancer diagnosis, I need to understand exactly the reasons why you’re giving me this diagnosis.”

Lee pushed back against Blackman’s perspective, suggesting that the black-box issue might not exist at some point in future development and that the “word predictor” description oversimplifies complex processes.

Ultimately, Blackman said, people should push for enterprisewide governance of AI, not to stop innovation but to establish a way to systematically assess risks and opportunities on a use-case basis. Otherwise, he said, things will fall through the cracks and possibly cause great harm.

“You need certain kinds of oversight. It can’t just be the data scientists,” he added. “It needs to be a cross-functional team. There are legal risks, ethical risks, risks to human rights, and if you don’t have the right experts involved in thinking about a particular use case in the context in which you want to deploy the AI, you’re going to miss things.”

EXPLORE: How are health IT leaders achieving digital transformation success?

Lee acknowledged that conversations about AI “touch a nerve in people.”

“There is something that is beyond technical or scientific or ethical or legal about this,” he said. “It’s a very emotional thing.”

Because of this, Lee said, it’s important for people to gain a hands-on understanding of AI, to learn about it firsthand and then work with the rest of the healthcare community to decide whether such solutions are appropriate.

Moore added that healthcare organizations should have their own teams that understand AI rather than rely solely on vendor knowledge and products.

Keep this page bookmarked for our ongoing coverage of HIMSS23. Follow us on Twitter at @HealthTechMag and join the conversation at #HIMSS23.
