Here are four ways the healthcare industry and regulators can support growing interest in AI solutions:
1. Government Needs an Innovation-First, Risk-Based Approach
Venture capital funding for the top 50 firms in healthcare-related AI reached $8.5 billion in 2020, and big tech firms, startups, pharmaceutical and medical device companies, and health insurers are all enthusiastic about embedding AI in the healthcare ecosystem. AI is already producing vital benefits: personalizing patient-provider interactions, automating administrative processes, and improving predictive capabilities for disease detection and prevention programs, gains that in turn drive higher returns.
As proposed by the Biden administration, a combination of federal funding, appropriate risk-based approaches to leveraging AI and privacy-protective technology, and support for further research can help industry unlock AI’s potential while minimizing data privacy risks and harms.
2. Consumer Trust in AI Needs to Be Strengthened
The Pew Research Center published a report this year analyzing Americans’ views on their providers’ reliance on AI. The survey found that fewer than half of U.S. adults believe AI in health and medicine would improve patient outcomes, and 60 percent say they would feel uncomfortable if their healthcare provider relied on AI to diagnose disease or recommend treatments. However, a larger share of Americans believes AI would reduce the number of provider-driven errors, and more than half feel AI could reduce bias and inequitable treatment.
Building consumer trust in AI will require internal and external commitment by companies. For example, healthcare organizations should:
- Provide clear, plain-English disclosures on the use of AI (and its underlying training data), as well as how AI can remove bias and improve accuracy in decision-making
- Build in options to opt out of AI and provide data access rights so consumers feel empowered about decisions made both with them and about them
- Engage independent entities to check their practices and publicly acknowledge that they are being held accountable
3. Regulators Continue to Pay Attention to Lawbreakers
In line with consumer complaints and heightened patient anxiety about new technology, regulatory bodies are paying close attention to the use of AI, particularly the harm it could cause and how existing laws can address that harm. Earlier this year, four federal agencies signed a joint statement pledging to scrutinize discriminatory uses of and bias in AI more closely. The U.S. Department of Health and Human Services is sounding similar warnings, signaling heightened scrutiny of pixel tracking technology, biometric data and other facets of AI-dependent healthcare websites and apps. The Federal Trade Commission’s recent enforcement actions against healthcare organizations likewise show close attention to this space.
4. Accountability Solidifies Consumer Trust in AI
While experimenting with AI, healthcare organizations should adopt approaches that protect consumers and patients in ways that still align with the views of regulators. One step in that direction is industrywide self-regulation, which keeps a watchful eye on how next-generation tools interact with newly passed laws and helps shape superior AI-powered healthcare experiences, whether online or in person.
Companies should position themselves for success by leveraging the independent review process to demonstrate robust health privacy and AI practices to the marketplace. BBB National Programs serves as a unifying voice, allowing businesses and healthcare organizations to signal to consumers and regulators that they have taken steps to use AI responsibly.