Part of the FDA’s action plan includes supporting the development of machine learning best practices for evaluating and improving ML algorithms, covering topics such as data management, interpretability and documentation, as well as advancing pilots for monitoring real-world performance.
The FDA also noted that the action plan would continue to evolve to stay current with developments in the field of AI/ML-based software as a medical device (SaMD).
As the agency pointed out in an April 2019 discussion paper, the potential power of AI/ML-based SaMD lies in its ability to continuously learn: the algorithm adapts or changes after the SaMD has been distributed for use and has learned from real-world experience.
In turn, the autonomous and adaptive nature of these tools requires a new, total product lifecycle regulatory approach that supports a rapid cycle of product improvement, allowing SaMD to continually improve.
To address this, premarket submissions to the FDA for AI/ML-based SaMD would include a “predetermined change control plan,” which would describe the types of anticipated modifications that the AI/ML would generate.
By comparison, traditional software solves problems through explicit programming. The development team knows how to solve the problem, or consults an expert with domain knowledge, and writes the algorithm accordingly, says Pat Baird, senior regulatory specialist and head of global software standards at Philips.
“However, for many types of AI applications, the development team doesn’t know how to solve the problem. Instead, they make a problem-solving engine that learns from data that is provided to it,” Baird says.
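Baird’s distinction can be made concrete with a minimal sketch. The example below is purely illustrative and not drawn from any Philips or FDA system; the vital-sign feature names, thresholds and synthetic data are assumptions invented for demonstration. A hand-written rule encodes the developer’s domain knowledge directly, while a scikit-learn model infers its decision boundary from labeled examples.

```python
# Illustrative contrast: explicitly programmed logic vs. a model learned from data.
# The feature names, threshold values and synthetic data are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_flag(heart_rate: float, spo2: float) -> bool:
    """Traditional software: the team encodes domain knowledge as an explicit rule."""
    return heart_rate > 120 or spo2 < 90

# Machine learning: the team builds an engine that learns the rule from data.
rng = np.random.default_rng(0)
X = rng.normal(loc=[80.0, 97.0], scale=[20.0, 3.0], size=(500, 2))  # [heart_rate, spo2]
y = np.array([rule_based_flag(hr, s) for hr, s in X])               # labels supplied by "experts"

model = LogisticRegression().fit(X, y)   # the decision boundary is inferred, not written
print(model.predict([[130.0, 88.0]]))    # flags the case without an explicit rule
```

How faithfully the learned model approximates the intended behavior depends entirely on the data it sees, which is why the FDA’s emphasis on training data and real-world performance monitoring matters.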
The opaque nature of such learned behavior raises concerns among stakeholders, including users and patients. Building trust, by being able to explain the data used to train the system and the quality processes in place, will therefore be key to the adoption of AI in healthcare.
‘Responsible and Explainable AI Is Essential’
Medical AI already has a bias problem: it is not always easy for researchers to obtain large, sufficiently varied data sets, and the gaps in that data can bake biases into algorithms from the start.
“I think the first step in reducing bias is to raise awareness about different kinds of bias that can occur, remind people to challenge the assumptions that they have, share techniques on how to detect and manage bias, share examples and so on,” Baird says. “To improve machine learning, we need to be better at sharing our collective learning.”
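One of the detection techniques Baird alludes to can be sketched in a few lines: rather than reporting a single aggregate score, evaluate the model separately for each demographic subgroup and compare. The code below is an illustration on synthetic data; the group labels, sample sizes and simulated error rates are assumptions, not findings from any real system.

```python
# Minimal sketch of one bias-detection technique: compare a model's accuracy
# across subgroups instead of trusting a single aggregate score.
# The groups, sizes and simulated error rates are illustrative assumptions.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
groups = np.array(["A"] * 400 + ["B"] * 50)   # group B is underrepresented
y_true = rng.integers(0, 2, size=groups.size)
y_pred = y_true.copy()

# Simulate a model that errs more often on the underrepresented group.
flip = (groups == "B") & (rng.random(groups.size) < 0.4)
y_pred[flip] ^= 1

overall = accuracy_score(y_true, y_pred)
print(f"overall accuracy: {overall:.2f}")      # looks healthy in aggregate
for g in np.unique(groups):
    mask = groups == g
    print(f"group {g}: n={mask.sum()}, "
          f"accuracy={accuracy_score(y_true[mask], y_pred[mask]):.2f}")
```

A disparity like this is invisible in the aggregate number, which is exactly the kind of blind spot that raising awareness and sharing detection techniques is meant to surface.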