Most organizations are familiar with the concept of “shadow IT,” a term that describes any technological solution used on an enterprise network without prior approval or oversight from the IT department.
It’s a reality of the modern workplace: Employees who feel that it’s too burdensome to involve IT in signing up for a new cloud-based service, for instance, may use a personal account to do their work, not thinking about compliance or security concerns.
Now, with the proliferation of solutions that use generative artificial intelligence, signing up for a service has never been easier. But if organizations don’t have an AI governance structure in place — with established rules on which tools to use — they’ll run into a similar issue with shadow AI.
Think of shadow AI, then, as the next iteration of shadow IT. Here are a few tips on how organizations can mitigate the risks it poses, especially in an industry with strict requirements for data security and privacy.
1. Adopt Well-Defined AI Governance
This should be the No. 1 priority. The right governance framework helps IT teams make definitive decisions about the appropriate use of AI solutions. Define the processes and enforce them. Include a multidisciplinary team of stakeholders so that decisions reflect a high-level, organizational outlook rather than a siloed, individual approach. This team can also keep the process for trying new solutions from becoming so bureaucratic that it impedes innovation. For example, a group of clinicians who want to use OpenEvidence on the enterprise network should not be immediately shut down; instead, work with them to set a policy for its use. A balanced approach is key.
Shadow AI is a symptom of immature AI governance. As you mature your AI governance, you also reduce shadow AI because you can bring solutions into the fold and facilitate discussions with stakeholders. This improves the decision-making process and security around AI.
2. Employ Technical Guardrails to Monitor AI Use
IT teams should have tools for monitoring whether staff members are accessing unauthorized applications and should be prepared to limit their use. Consider offering a sandbox environment so employees can test AI solutions in a controlled setting.
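To make the monitoring idea concrete, here is a minimal sketch of one possible guardrail: scanning proxy or DNS logs for access to known AI services that aren't on an approved list. The log format, domain names and lists below are hypothetical placeholders; a real deployment would draw on the organization's actual proxy logs and a maintained catalog of AI services.

```python
# Hypothetical allowlist of sanctioned AI services and a broader
# catalog of known AI service domains (both are example values).
APPROVED_AI_DOMAINS = {"approved-ai.example.com"}
KNOWN_AI_DOMAINS = {
    "approved-ai.example.com",
    "chat.example-llm.com",
    "api.example-genai.io",
}

def find_shadow_ai(log_lines):
    """Return (user, domain) pairs for unapproved AI service access.

    Assumes a simple space-delimited log format:
    "<timestamp> <user> <domain>".
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _, user, domain = parts[:3]
        # Flag traffic to a known AI service that isn't approved.
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            hits.append((user, domain))
    return hits

logs = [
    "2025-01-15T09:00 alice approved-ai.example.com",
    "2025-01-15T09:05 bob chat.example-llm.com",
]
print(find_shadow_ai(logs))  # → [('bob', 'chat.example-llm.com')]
```

In practice this kind of check is usually handled by a secure web gateway or CASB product rather than a custom script, but the logic is the same: compare observed traffic against an approved-use policy and surface the exceptions for follow-up rather than silently blocking them.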
3. Clearly Communicate Goals and Measure ROI
Collaboration is another critical aspect of mitigating shadow AI. You don’t want AI adoption to be solely an IT-led program; you need buy-in from the staff members who’ll be using the solution. It helps to have a well-defined use case and a clear statement about the solution, how it’s going to be used, by whom and why it’s good for the organization. Having that communication will reduce the need for shadow AI as team members understand the benefits of moving in a more coordinated, centralized way.
This article is part of HealthTech’s MonITor blog series.