The AI Bill of Rights Aims for Reliable Algorithms in Healthcare
The AI Bill of Rights blueprint calls for monitoring algorithms to address ethical and legal issues of AI in healthcare. The healthcare industry is examining how to audit and test complex algorithms.
“These are quite complicated models, so I think having some framework in place for rigorously evaluating and monitoring the performance of the algorithms can be broadly beneficial,” Zou says.
Software developers sometimes must go back and retrain models on additional data to improve their performance, Zou explains.
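As a loose illustration of what retraining on new data can look like, the sketch below uses scikit-learn's incremental learning API; the model choice and placeholder data are assumptions for demonstration, not any specific healthcare system's pipeline.

```python
# Hypothetical sketch: updating a diagnostic classifier as new labeled
# data arrives. The model and placeholder data are purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training on the original labeled dataset.
X_initial = rng.random((1000, 20))      # 1,000 patients, 20 features
y_initial = rng.integers(0, 2, 1000)    # 0 = no disease, 1 = disease
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Later: the model underperforms, so it is updated with additional data
# rather than being rebuilt from scratch.
X_new = rng.random((200, 20))
y_new = rng.integers(0, 2, 200)
model.partial_fit(X_new, y_new)
```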
He recommends that physicians evaluate models to ensure they work well for various demographic groups.
“A clinician’s expertise is useful in helping to generate labels and annotations for the data, which is then used to train or to evaluate the algorithms,” Zou says.
After testing the models across different subsets of patients, data scientists can then mitigate any errors and improve the algorithms, Zou says.
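A minimal sketch of that kind of subgroup testing, assuming a binary classifier's predictions, clinician-provided labels and a demographic column (all names and data here are placeholders):

```python
# Hypothetical sketch: evaluating a model separately for each demographic
# group. Labels, predictions and group assignments are placeholders.
import numpy as np
from sklearn.metrics import recall_score, precision_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # clinician-provided labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # model predictions
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

for group in np.unique(groups):
    mask = groups == group
    sensitivity = recall_score(y_true[mask], y_pred[mask])
    precision = precision_score(y_true[mask], y_pred[mask], zero_division=0)
    print(f"Group {group}: sensitivity={sensitivity:.2f}, precision={precision:.2f}")
```

A large performance gap between groups would flag the model for retraining or a closer look at its training data.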
“The threshold can be set to determine the trade-off between, for instance, how many false positives or false negatives the model may be reporting,” Zou explains. “It depends on the medical application. Sometimes false positives or false negatives can have different costs.”
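One way to make that trade-off concrete is to score candidate thresholds against assumed costs; in the sketch below, a missed diagnosis is treated as ten times costlier than a false alarm, a ratio chosen purely for illustration.

```python
# Hypothetical sketch: picking a decision threshold when false negatives
# (missed diagnoses) are assumed costlier than false positives.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])               # placeholder labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.6, 0.5, 0.9, 0.3, 0.15])

COST_FN = 10.0  # assumed cost of a missed diagnosis
COST_FP = 1.0   # assumed cost of a false alarm

best_threshold, best_cost = 0.5, float("inf")
for threshold in np.linspace(0.0, 1.0, 101):
    y_pred = (scores >= threshold).astype(int)
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    cost = COST_FN * fn + COST_FP * fp
    if cost < best_cost:
        best_threshold, best_cost = threshold, cost

print(f"Chosen threshold: {best_threshold:.2f} (total cost: {best_cost})")
```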
Chris Mermigas, head of legal at RSA Security, agrees that having a physician check the work of the AI tools is critical.
“Medicine is not an exact science,” Mermigas says. “That’s why you should always have a doctor or some healthcare professional that reviews conclusions made by AI.”
Mermigas compares a healthcare review process to the way red-light camera findings are double-checked by a person before a ticket is issued.
“There’s actually a computer program that logs all the cars that have gone through red lights, but then there is somebody that reviews all those results to decide if a ticket goes out or not,” Mermigas says.
The framework warns that automated systems should be designed to protect people from “unintended yet foreseeable uses or impacts of automated systems.”
An algorithm could lead to an incorrect diagnosis or the wrong treatment, Zou explains.
“I think the main concern is that the algorithms make a lot of mistakes, and that can hurt the healthcare outcomes of the patient,” he adds. “If the patient has a disease and the algorithm misses the diagnosis, that can certainly be a bad outcome for the patient, similar to how a human doctor may make a mistake.”
Maintaining Health Equity Using AI Tools
AI is known to introduce biases that can make health equity more difficult to maintain.
“You should not face discrimination by algorithms and systems should be used and designed in an equitable way,” the AI blueprint states.
However, Mermigas notes that the biases may lie more in the people programming the algorithms than in the algorithms themselves.
“An algorithm isn’t inherently discriminatory,” Mermigas says. “It’s the person who programs it who might practice either active or passive discrimination or have discriminatory tendencies. They program that into the algorithm, either knowingly or unknowingly.”
The digital bill of rights calls for designers and software developers to protect communities from algorithmic discrimination by incorporating accessibility for people with disabilities, carrying out disparity testing and making the results of this testing public.
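Disparity testing itself can start simply: compare an error rate across groups and report the gap. The sketch below computes the largest difference in true positive rates between groups (placeholder data; the metric choice is an assumption, not the blueprint's prescription).

```python
# Hypothetical sketch of a basic disparity test: compare true positive
# rates (sensitivity) across demographic groups and report the largest gap.
import numpy as np

y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1])   # placeholder labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 1])   # placeholder predictions
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

tpr_by_group = {}
for group in np.unique(groups):
    mask = (groups == group) & (y_true == 1)         # positives in this group
    tpr_by_group[group] = float(y_pred[mask].mean())

gap = max(tpr_by_group.values()) - min(tpr_by_group.values())
print(tpr_by_group, f"TPR gap: {gap:.2f}")
```

Publishing numbers like these is the kind of transparency the blueprint calls for.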