Jul 08 2022

What to Know: Key Concerns for Adoption of Medical Devices Using AI

What should providers consider when adopting medical devices that rely on artificial intelligence and machine learning?

Artificial intelligence and machine learning are revolutionizing every sector of the market, but the impact is nowhere more striking than in healthcare. Rapidly adopted medical devices that rely on AI/ML can learn from Big Data: information generated by individual patients as well as vast amounts of global information.

New insights unlocked by AI/ML are expected to help predict, diagnose and manage patient health. Two examples include more accurate diagnostics assisted by computer vision in radiology, and improved procedures with robotics-assisted surgery.

Evidence for growth is overwhelming: The worldwide market for AI in healthcare was $6 billion in 2021 and is projected to grow to $64 billion by 2027, representing approximately 10 percent of the overall market size. The U.S. Food and Drug Administration approved only five such devices in 2015, but that number jumped to 100 by 2020. To date, nearly 350 AI-based devices have been approved in the U.S., many of them for use in radiology.

Medical devices that rely on AI/ML today play the role of clinical decision support systems. To move beyond that realm, numerous issues need to be addressed, such as the need to continuously monitor, identify and manage risks associated with such devices, and the consideration of security, privacy, regulatory and ethical implications.


Guidance for Safe and Effective Medical Devices

In 2021, the FDA, together with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency, released 10 guiding principles to encourage the development of “Good Machine Learning Practice.” The intent is to promote “safe, effective and high-quality medical devices” that use AI and ML. 

This is welcome guidance for healthcare systems that now rely on AI/ML medical devices, as well as for digital health startups and medical technology enterprises that want to develop their own devices. For them, and for others interested in entering this field, a deeper look into the FDA-provided guiding principles could help jump-start or further enrich the journey.

The 10 FDA guidelines cover a variety of issues, but many of them revolve around the model itself, the requirements related to cybersecurity and risk reduction, and the need to involve multiple individuals and disciplines in the development and maintenance of medical devices that rely on AI and ML.


Start with the Model for Medical Devices Supported by AI

AI/ML medical devices have, as their foundation, a model — a program or algorithm trained through exposure to copious amounts of data. The model makes predictions based on the data. If correct, the model learns and is strengthened; if incorrect, it adjusts to increase its accuracy. This learning process takes time, requiring millions or billions of permutations and adjustments. The more the model is trained, the better it gets at making correct diagnoses and reducing the number of incorrect ones.
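The predict-compare-adjust loop described above can be illustrated with a toy sketch. Everything here is hypothetical: a single-parameter model fit to synthetic readings, not the algorithm of any real medical device.

```python
import random

# Synthetic "sensor readings": the output is roughly 2x the input,
# plus a little noise. In a real device, this would be clinical data.
random.seed(0)
data = [(0.5 * i, 2.0 * (0.5 * i) + random.gauss(0, 0.1)) for i in range(20)]

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.01  # how strongly each error adjusts the model

for epoch in range(200):
    for x, target in data:
        prediction = weight * x              # the model makes a prediction
        error = prediction - target          # compare against the known answer
        weight -= learning_rate * error * x  # adjust to reduce future error

print(round(weight, 1))  # converges near the true value of 2.0
```

Each pass strengthens the model where it is right and corrects it where it is wrong; real devices run this loop over millions of examples rather than twenty.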

FDA guidance helps ensure that the algorithms selected for the model are those most suitable for the characteristics of the data, and that parameters are tweaked to produce the intended results.

In addition, the FDA recommends that the data collected for the model be representative of the intended patient population, that the model references clinically relevant data and that samples are large enough — and of sufficient quality — to allow experts to gain insight into the data. All of this calls for active participation of various stakeholders and experts who can ensure that the model is sufficiently robust and useful.
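A representativeness check of the kind the FDA recommends can be sketched simply: compare the share of each demographic group in the training sample against the intended patient population. The group names, percentages and 5-point tolerance below are illustrative assumptions, not figures from the guidance.

```python
# Intended patient population, by share of each (hypothetical) age group.
population = {"under_40": 0.30, "40_to_65": 0.45, "over_65": 0.25}

# How many training samples were actually collected per group.
sample_counts = {"under_40": 290, "40_to_65": 460, "over_65": 250}

total = sum(sample_counts.values())
max_gap = 0.05  # tolerated shortfall per group (an arbitrary threshold)

# Flag any group whose sample share falls too far below its population share.
underrepresented = [
    group
    for group, expected in population.items()
    if sample_counts[group] / total < expected - max_gap
]

print(underrepresented)  # [] here: no group falls short by more than 5 points
```

In practice this kind of check would run over many attributes, with thresholds set by the clinical experts on the team rather than a fixed constant.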

Multidisciplinary Expertise: Because medical devices have a wide variety of users and targets, it is important for the development team to call on a variety of stakeholders with relevant expertise. These experts can help develop a full understanding of how the device will be integrated into the clinical workflow and highlight potential issues at an early stage of the design process, where changes are less costly. Equally important is a full understanding of any related patient risks, to ensure that the intelligent medical devices being built are safe and effective. Without the expertise of a multidisciplinary team, developers are likely to miss or misunderstand some of the desired benefits and potential risks.

Cybersecurity: The FDA guidance stresses the importance of implementing robust cybersecurity practices. It advises attention to “fundamentals,” including basic software engineering practices, robust data management practices and strong attention to cybersecurity throughout the design and development process. Testing should demonstrate device performance under clinically relevant conditions, which calls for statistically sound test plans in addition to ensuring that data quality is built in and tested. The model design should also support the active mitigation of known risks. Data authenticity and integrity must be ensured, not only during the design process, but also when devices are deployed. Real-world monitoring can improve safety and performance and reduce bias and risk.
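One of the "fundamentals" above, data integrity, can be sketched with a standard checksum: a digest recorded when data leaves the device is recomputed on receipt before the data is trusted. The payload and field names are hypothetical examples.

```python
import hashlib

def sha256_digest(payload: bytes) -> str:
    """Return the SHA-256 hex digest of a data payload."""
    return hashlib.sha256(payload).hexdigest()

# At export time, the device records a digest alongside the readings.
readings = b"patient-042,2022-07-08,hr=71,spo2=98"
recorded_digest = sha256_digest(readings)

# On receipt, recompute the digest and compare before trusting the data.
received = b"patient-042,2022-07-08,hr=71,spo2=98"
if sha256_digest(received) == recorded_digest:
    status = "intact"
else:
    status = "tampered"

print(status)  # the payloads match, so the data is accepted as intact
```

A checksum alone proves integrity, not authenticity; deployed devices would pair it with signatures or authenticated channels so the digest itself cannot be forged.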


The Implementation of AI in Healthcare Is a Worthwhile Journey

Implementing AI/ML in the healthcare sector is not easy. It requires a significant investment and calls for collaboration among a wide range of stakeholders. All members of the multidisciplinary team must gain an understanding of the model, the outputs and their potential implications. In addition, as the model learns, it will necessarily change, which may require further training of both healthcare workers and the patients who use the devices. But the result — improved patient monitoring and clinical outcomes — will be worth the effort.

