1. Solve Problems Instead of Deploying New Tools for Their Own Sake
Organizations may feel pressured to try something simply because of the hype. Resist that urge. Instead, be clear about the problem that needs to be solved and how AI would fit in.
There may be some easy deployments to start with: capabilities or features already built into solutions your organization uses, such as a productivity software suite or an electronic health record system.
Another problem area is repetitive administrative tasks that would benefit from automation. One reason ambient listening tools have drawn consistent interest is that organizations want to reduce clinician burden and mitigate burnout. How can health systems reduce “pajama time” for clinicians so that they can repair patient relationships?
2. Amid Regulatory Uncertainty, Have a Solid AI Governance Structure
As algorithms improve and regulatory responses remain in flux, healthcare organizations need both agility and stability in their own AI governance structures. And with requirements that can vary state by state, a multidisciplinary approach is crucial to keeping up with changes.
Create work groups with the right representation of stakeholders to ask the right questions about potential use cases, the end-user experience, risk recognition and mitigation, ethical concerns, algorithmic bias, compliance, and data quality.
Infrastructure must also be factored in. How ready is your organization to adopt more AI solutions? Do your teams have the right skill sets? Have you secured your environment? Which workloads should stay on-premises, and which should move to the cloud? Organizations will need to build out landing zones and may have different strategies for how they use their compute and storage.
3. Keep Data Security and Privacy at the Forefront
Data governance goes hand in hand with AI governance: most AI-powered solutions require high-quality data, which is table stakes at this point, and that data in turn requires strategies to protect it.
Vendors also need to provide more transparency into their solutions so that organizations can adequately assess whether a solution will meet regulatory requirements. That transparency is key because real danger exists if an AI solution gets a prediction wrong or is fed poor data. A one-size-fits-all approach to AI in healthcare simply isn't possible, and there will likely still be a need for human discernment, with a human in the loop, to ensure outcomes are not causing harm.
This article is part of HealthTech’s MonITor blog series.