
How Does the AI Bill of Rights Impact Healthcare?

The White House’s Blueprint for an AI Bill of Rights provides guidance on ethical and health equity concerns in healthcare as the industry aims to reduce repetitive tasks and boost clinical efficiency with AI.

In October 2022, the White House released its Blueprint for an AI Bill of Rights, which outlined concerns about the use of artificial intelligence in various industries, including healthcare. It provides guidelines for how to address algorithmic discrimination and data privacy concerns. The five principles of the framework are safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

As the healthcare industry increasingly adopts AI, the blueprint could provide key guidance on how to use the technology. The Office of the National Coordinator for Health Information Technology has also suggested a “nutrition label” that describes how AI is used in the electronic health records it evaluates. Like the Blueprint for an AI Bill of Rights, the label would discuss the use of algorithms and recommend that users provide patients with information about how AI works.

The healthcare industry is exploring how to incorporate recent advances in generative AI, including ChatGPT, to reduce repetitive tasks, boost clinical efficiency and improve clinical decision-making. At the same time, the industry wants to avoid pitfalls such as ethical lapses and HIPAA violations, and biases in AI algorithms or their training data sets could deepen existing care disparities.

“I think many of the things in the AI Bill of Rights are quite reasonable, and it brings the U.S. closer to a standard that’s set in other countries, like in Europe with the GDPR [General Data Protection Regulation], which contains many stipulations similar to those in the Blueprint for an AI Bill of Rights,” says James Zou, assistant professor of biomedical data science and faculty director of AI for health at Stanford University as well as an investigator for the Chan Zuckerberg Biohub Network and one of HealthTech’s IT influencers. “I think, overall, it’s a useful step.”


The AI Bill of Rights Aims for Reliable Algorithms in Healthcare

The AI Bill of Rights blueprint calls for monitoring algorithms to address ethical and legal issues of AI in healthcare. The healthcare industry is examining how to audit and test complex algorithms.

“These are quite complicated models, so I think having some framework in place for rigorously evaluating and monitoring the performance of the algorithms can be broadly beneficial,” Zou says.

Software developers sometimes must go back and retrain models on additional data to improve their performance, Zou explains.

He recommends that physicians evaluate models to ensure they work well for various demographic groups.

“A clinician’s expertise is useful in helping to generate labels and annotations for the data, which is then used to train or to evaluate the algorithms,” Zou says.

After testing the models across different subsets of patients, data scientists can then mitigate any disparities they find and improve the algorithms, Zou says.
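
As a rough illustration only, a subgroup evaluation might look something like the Python sketch below; the model object, the patient table and its column names are hypothetical assumptions, not a description of any specific product or of Zou's own workflow.

```python
# Hypothetical sketch: checking a binary diagnostic model's performance
# across demographic subgroups. The model, the DataFrame and the column
# names ("ethnicity", "diagnosis") are illustrative assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score, recall_score

def evaluate_by_subgroup(model, patients: pd.DataFrame, group_col: str = "ethnicity"):
    """Report AUC and sensitivity separately for each demographic subgroup."""
    rows = []
    for group, subset in patients.groupby(group_col):
        features = subset.drop(columns=[group_col, "diagnosis"])
        risk_scores = model.predict_proba(features)[:, 1]  # predicted probability of disease
        labels = subset["diagnosis"]                        # clinician-provided labels
        rows.append({
            "group": group,
            "n": len(subset),
            "auc": roc_auc_score(labels, risk_scores),
            "sensitivity": recall_score(labels, risk_scores >= 0.5),
        })
    return pd.DataFrame(rows)

# Large gaps between groups in these metrics are one signal that the training
# data or the model needs attention before the tool is used in the clinic.
```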

“The threshold can be set to determine the trade-off between, for instance, how many false positives or false negatives the model may be reporting,” Zou explains. “It depends on the medical application. Sometimes false positives or false negatives can have different costs.”
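
To make that trade-off concrete, here is a minimal, self-contained Python sketch with made-up scores and labels; it simply shows how raising or lowering the decision threshold shifts errors between false positives and false negatives.

```python
# Hypothetical sketch: moving the decision threshold trades false negatives
# (missed diagnoses) against false positives (false alarms). Data are made up.
import numpy as np

labels = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])             # 1 = disease present
scores = np.array([0.9, 0.7, 0.4, 0.3, 0.2, 0.6, 0.1, 0.8, 0.5, 0.35])

for threshold in (0.3, 0.5, 0.7):
    predicted = scores >= threshold
    false_positives = int(np.sum(predicted & (labels == 0)))   # healthy patients flagged
    false_negatives = int(np.sum(~predicted & (labels == 1)))  # diagnoses missed
    print(f"threshold={threshold}: FP={false_positives}, FN={false_negatives}")
```

A lower threshold catches more true cases at the cost of more false alarms; as Zou notes, which side of that balance matters more depends on the medical application.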


Chris Mermigas, head of legal at RSA Security, agrees that having a physician check the work of the AI tools is critical.

“Medicine is not an exact science,” Mermigas says. “That’s why you should always have a doctor or some healthcare professional that reviews conclusions made by AI.”

Mermigas compares a review process in healthcare to the way a human reviewer double-checks the findings of red-light cameras to decide whether a ticket should be issued.

“There’s actually a computer program that logs all the cars that have gone through red lights, but then there is somebody that reviews all those results to decide if a ticket goes out or not,” Mermigas says.

The framework warns that automated systems should be designed to protect people from “unintended yet foreseeable uses or impacts of automated systems.”

An algorithm could lead to an incorrect diagnosis or the wrong treatment, Zou explains.

“I think the main concern is that the algorithms make a lot of mistakes, and that can hurt the healthcare outcomes of the patient,” he adds. “If the patient has a disease and the algorithm misses the diagnosis, that can certainly be a bad outcome for the patient, similar to how a human doctor may make a mistake.”

Maintaining Health Equity Using AI Tools

AI can introduce biases that make health equity more difficult to maintain.

“You should not face discrimination by algorithms and systems should be used and designed in an equitable way,” the AI blueprint states.

However, Mermigas notes that the biases may lie more in the people programming the algorithms than in the algorithms themselves.

“An algorithm isn’t inherently discriminatory,” Mermigas says. “It’s the person who programs it who might practice either active or passive discrimination or have discriminatory tendencies. They program that into the algorithm, either knowingly or unknowingly.”

The blueprint calls for designers and software developers to protect communities from algorithmic discrimination by incorporating accessibility for people with disabilities, carrying out disparity testing and making the results of that testing public.

Applying the AI Bill of Rights to HIPAA or HITECH

Mermigas predicts that applying the AI Bill of Rights to healthcare would require updates to existing laws such as HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act.

Just as cloud vendors have business associate agreements under HIPAA and HITECH, companies that offer AI may also have to sign BAA contracts, Mermigas suggests.

“What HIPAA and HITECH do well is they put the legal requirement on the collector of the data,” Mermigas explains. In this case, the healthcare system is the collector that stores and processes the data.

“They’ll probably end up putting the requirement on subcontractors and on the collectors, similar to how they do with data processing agreements under GDPR,” Mermigas says.

BAA contracts could be amended to address how the AI system interacts with the data rather than focusing on human interaction.

“Instead of a human interaction, you can twist it to be a computer interaction, you interacting with the computer,” Mermigas says.


The Benefits of AI in Healthcare

Zou notes that it’s important to weigh both the benefits as well as the risks of AI in healthcare. AI can help physicians, clinicians and administrators with diagnosing patients, scheduling appointments and handling insurance claims. It can also extract relevant patient information from an EHR.

“I think all relatively repetitive tasks can potentially be improved with the assistance of some AI algorithms,” Zou says. “There are a lot of repetitive things that could take a human physician minutes, half an hour or longer to do by hand that the algorithms can do instantaneously. If the algorithm performs well, that could save a lot of time.”

Zou describes the potential for algorithms to deliver better health outcomes for patients. For example, AI algorithms can save radiologists time by analyzing X-rays and tracing out parts of an image, such as the portions of a heart chamber. Zou’s studies have shown that AI can improve models for assessing radiology images.

“If AI in healthcare can be deployed at a wide scale, there are really tangible benefits that can improve outcomes for many patients,” Zou says.

AI can be another data point doctors refer to when making decisions, according to Mermigas.

“For a doctor, that’s one other data point that they have to make their decision,” Mermigas says. “But there is no replacement for doctors and for healthcare professionals. No AI will ever replace a doctor.”

