As employees experiment with gen AI tools on their own, CIOs are facing a familiar challenge: shadow AI. Although such experimentation is often well-intentioned innovation, it can create serious risks around data privacy, compliance, and security.
According to 1Password’s 2025 annual report, The Access-Trust Gap, shadow AI widens an organization’s risk surface: 43% of employees use AI apps to do work on personal devices, and 25% use unapproved AI apps at work.
Despite these risks, experts say shadow AI isn’t something to do away with completely. Rather, it’s something to understand, guide, and manage. Here are six strategies that can help CIOs encourage responsible experimentation while keeping sensitive data safe.
1. Establish clear guardrails with room to experiment
Managing shadow AI begins with getting clear on what’s allowed and what isn’t. Danny Fisher, chief technology officer at West Shore Home, recommends that CIOs classify AI tools into three simple categories: approved, restricted, and forbidden.
“Approved tools are vetted and supported,” he says. “Restricted tools can be used in a controlled space with clear limits, like only using dummy data. Forbidden tools, which are typically public or unencrypted AI systems, should be blocked at the network or API level.”
Matching each type of AI use with a safe testing space, such as an internal OpenAI workspace or a secure API proxy, lets teams experiment freely without risking company data, he adds.
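As a rough illustration of how such a tiering might be encoded, the sketch below maps tools to Fisher’s three categories and fails closed on anything unknown. The tool names and decision labels are hypothetical, not any company’s real policy.

```python
# Minimal sketch of a three-tier AI tool policy along the lines Fisher
# describes. Tool names and categories are illustrative examples.
TOOL_POLICY = {
    "internal-openai-workspace": "approved",    # vetted and supported
    "vendor-ai-sandbox": "restricted",          # allowed only with dummy data
    "public-chatbot.example.com": "forbidden",  # public/unencrypted; block at the network level
}

def check_tool(tool: str, data_is_dummy: bool = False) -> str:
    """Return an enforcement decision for a requested AI tool."""
    tier = TOOL_POLICY.get(tool, "forbidden")  # unknown tools fail closed
    if tier == "approved":
        return "allow"
    if tier == "restricted" and data_is_dummy:
        return "allow-sandbox-only"
    return "block"

print(check_tool("vendor-ai-sandbox", data_is_dummy=True))  # allow-sandbox-only
print(check_tool("some-unknown-tool"))                      # block
```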
Jason Taylor, principal enterprise architect at LeanIX, an SAP company, says clear rules are essential in today’s fast-moving AI world.
“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.”
Taylor adds that companies should also create a clear list that explains which types of data are or aren’t safe to use, and in what situations. A modern data loss prevention tool can help by automatically finding and labeling data, and enforcing least-privilege or zero-trust rules on who can access what.
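Pattern-based discovery is typically the first step in the kind of DLP pipeline Taylor describes. The fragment below is a deliberately simplified, regex-only sketch of that step; production DLP tools layer ML classifiers and exact-match dictionaries on top of it.

```python
import re

# Simplified discovery rules; real DLP products use far richer detectors.
PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def label_text(text: str) -> set[str]:
    """Return the set of sensitivity labels found in a piece of text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

doc = "Contact jane@example.com, SSN 123-45-6789."
print(label_text(doc))  # e.g., {'EMAIL', 'US_SSN'}
```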
Patty Patria, CIO at Babson College, notes it’s also important for CIOs to establish specific guardrails for no-code/low-code AI tools and vibe-coding platforms.
“These tools empower employees to quickly prototype ideas and experiment with AI-driven solutions, but they also introduce unique risks when connecting to proprietary or sensitive data,” she says.
To deal with this, Patria says companies should set up security layers that let people experiment safely on their own but require extra review and approval whenever someone wants to connect an AI tool to sensitive systems.
“For example, we’ve recently developed clear internal guidance for employees outlining when to involve the security team for application review and when these tools can be used autonomously, ensuring both innovation and data protection are prioritized,” she says. “We also maintain a list of AI tools we support, and which we don’t recommend if they’re too risky.”
2. Maintain continuous visibility and inventory tracking
CIOs can’t manage what they can’t see. Experts say maintaining an accurate, up-to-date inventory of AI tools is one of the most important defenses against shadow AI.
“The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring.
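One way to validate a self-service registry against network reality is a periodic reconciliation job. The sketch below assumes two inputs implied by Fisher’s process, a registry export and a set of AI-related domains observed in network logs; all field names and domains are invented.

```python
# Hypothetical inputs: a self-service registry export and AI-related
# domains extracted from network/API monitoring.
registry = [
    {"tool": "ChatGPT", "domain": "chat.openai.com", "owner": "marketing"},
    {"tool": "Claude", "domain": "claude.ai", "owner": "engineering"},
]
observed_domains = {"chat.openai.com", "claude.ai", "new-ai-app.example.com"}

declared = {entry["domain"] for entry in registry}
undeclared = observed_domains - declared   # seen on the network, never registered
stale = declared - observed_domains        # registered but no longer seen in traffic

for domain in sorted(undeclared):
    print(f"Follow up: {domain} is in use but not in the registry")
```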
Ari Harrison, VP of IT at branding manufacturer Bamko, says his team takes a layered approach to maintaining visibility.
“We maintain a living registry of connected applications by pulling from Google Workspace’s connected-apps view and piping those events into our SIEM [security information and event management system],” he says. “Microsoft 365 offers similar telemetry, and cloud access security broker tools can supplement visibility where needed.”
That layered approach gives Bamko a clear map of which AI tools are touching corporate data, who authorized them, and what permissions they have.
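A pipeline along those lines can be approximated with Google’s Admin SDK Reports API, which exposes OAuth grants as “token” application events. In the sketch below, delegated admin credentials are assumed to be configured already, and the SIEM ingestion URL is a made-up stand-in for whatever collector an organization actually runs.

```python
import requests
from googleapiclient.discovery import build

SIEM_URL = "https://siem.example.com/ingest"  # hypothetical SIEM HTTP collector

def forward_token_events(creds) -> None:
    """Pull OAuth token activity from Google Workspace and ship it to a SIEM."""
    reports = build("admin", "reports_v1", credentials=creds)
    response = reports.activities().list(
        userKey="all", applicationName="token", maxResults=100
    ).execute()
    for activity in response.get("items", []):
        # Each activity records which user granted which app what access.
        requests.post(SIEM_URL, json=activity, timeout=10)
```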
Mani Gill, SVP of product at cloud-based iPaaS Boomi, argues that manual audits are no longer enough.
“Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. This gives organizations instant, real-time visibility into what each agent is doing, how much data it’s using, and whether it’s following the rules.
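That central platform is essentially an AI gateway: every agent call passes through one chokepoint that can meter and log it. The toy wrapper below shows the shape of that chokepoint; the model-calling function is a stand-in, and a real deployment would emit records to the governance platform rather than to stdout.

```python
import json
import time

def gateway_call(agent_id: str, prompt: str, call_model) -> str:
    """Route an agent's model call through one audited chokepoint."""
    started = time.time()
    response = call_model(prompt)  # stand-in for the real provider call
    record = {
        "agent": agent_id,
        "bytes_in": len(prompt.encode()),
        "bytes_out": len(response.encode()),
        "latency_s": round(time.time() - started, 3),
    }
    print(json.dumps(record))  # in practice: emit to the governance platform
    return response

# Usage: gateway_call("invoice-bot", "Summarize Q3...", call_model=provider_fn)
```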
Tanium chief security advisor Tim Morris agrees that continuous discovery across every device and application is key. “AI tools can pop up overnight,” he says. “If a new AI app or browser plugin appears in your environment, you should know about it immediately.”
3. Strengthen data protection and access controls
When it comes to securing data from shadow AI exposure, experts point to the same foundation: data loss prevention (DLP), encryption, and least privilege.
“Use DLP rules to block uploads of personal information, contracts, or source code to unapproved domains,” Fisher says. He also recommends masking sensitive data before it leaves the organization, and turning on logging and audit trails to track every prompt and response in approved AI tools.
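The masking step can be as simple as redacting pattern matches before a prompt leaves the network, while retaining the original internally for audit. A minimal sketch, with simplified patterns and an in-memory list standing in for a real audit store:

```python
import re

AUDIT_LOG = []  # stand-in for a tamper-evident audit store

MASK_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def mask_prompt(user: str, prompt: str) -> str:
    """Mask sensitive values before the prompt leaves the organization."""
    masked = prompt
    for pattern, placeholder in MASK_PATTERNS:
        masked = pattern.sub(placeholder, masked)
    AUDIT_LOG.append({"user": user, "original": prompt, "sent": masked})
    return masked

print(mask_prompt("dfisher", "Email jane@example.com re: SSN 123-45-6789"))
# Email <EMAIL> re: SSN <SSN>
```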
Harrison echoes that approach, noting that Bamko focuses on the security controls that matter most in practice: outbound DLP and content inspection to prevent sensitive data from leaving, OAuth governance to keep third-party permissions scoped to least privilege, and access limits that restrict uploads of confidential data to approved AI connectors within its productivity suite.
In addition, the company treats broad permissions, such as read and write access to documents or email, as high-risk and requires explicit approval, while narrow, read-only permissions can move faster, Harrison adds.
“The goal is to allow safe day-to-day creativity while reducing the chance of a single click granting an AI tool more power than intended,” he says.
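That permission tiering can be expressed as a simple routing rule: broad or write-capable scopes go to manual review, while narrow read-only scopes are fast-tracked. The scope strings below are real Google OAuth scopes, but the heuristic itself is purely illustrative.

```python
def review_route(requested_scopes: list[str]) -> str:
    """Route an OAuth grant request based on the breadth of its scopes."""
    def is_broad(scope: str) -> bool:
        # Illustrative heuristic: anything not explicitly read-only is
        # treated as write-capable and therefore high-risk.
        return "readonly" not in scope
    if any(is_broad(s) for s in requested_scopes):
        return "manual-approval"  # e.g., full read/write access to Drive or Gmail
    return "fast-track"           # narrow, read-only permissions

print(review_route(["https://www.googleapis.com/auth/drive.readonly"]))  # fast-track
print(review_route(["https://www.googleapis.com/auth/drive"]))           # manual-approval
```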
Taylor adds that security must be consistent across environments. “Encrypt all sensitive data at rest, in use, and in motion, employ least-privilege and zero-trust policies for data access permissions, and ensure DLP systems can scan for, tag, and protect sensitive data.”
He notes that companies should ensure these controls work the same on desktop, mobile, and web, and keep checking and updating them as new situations come up.
4. Clearly define and communicate risk tolerance
Defining risk tolerance is as much about communication as it is about control. Fisher advises CIOs to tie risk tolerance to data classification instead of opinion. His team uses a simple color-coded system: green for low-risk activities, such as marketing content; yellow for internal documents that must use approved tools; and red for customer or financial data that can’t be used with AI systems.
“Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories (what’s permitted, what needs approval, and what’s prohibited) and communicating that framework through leadership briefings, onboarding, and internal portals.
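Put together, Fisher’s color tiers and Morris’s categories suggest a small decision table. The mapping below is a hypothetical sketch of how the two frameworks might line up, failing closed on anything unclassified.

```python
# Hypothetical mapping of data classification to AI-use decisions,
# combining Fisher's color tiers with Morris's category framework.
RISK_DECISIONS = {
    "green": "permitted",        # low-risk, e.g., public marketing content
    "yellow": "needs-approval",  # internal documents, approved tools only
    "red": "prohibited",         # customer or financial data: no AI systems
}

def ai_use_decision(data_class: str) -> str:
    # Unknown or unclassified data fails closed.
    return RISK_DECISIONS.get(data_class, "prohibited")

print(ai_use_decision("yellow"))  # needs-approval
```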
Patria says Babson’s AI Governance Committee plays a key role in this process. “When potential risks emerge, we bring them to the committee for discussion and collaboratively develop mitigation strategies,” she says. “In some cases, we’ve decided to block tools for staff but permit them for classroom use. That balance helps manage risk without stifling innovation.”
5. Foster transparency and a culture of trust
Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.
“Transparency means employees always know what’s allowed, what’s being monitored, and why,” Fisher says. “Publish your governance approach on the company intranet and include real examples of both good and risky AI use. It’s not about catching people. You’re building confidence that utilizing AI is safe and fair.”
Taylor recommends publishing a list of officially sanctioned AI offerings and keeping it updated. “Be clear about the roadmap for delivering capabilities that aren’t yet available,” he says, adding that companies should offer a process for requesting exceptions or new tools. That openness shows governance exists to support innovation, not hinder it.
Patria says in addition to technical controls and clear policies, establishing dedicated governance groups, like the AI Governance Committee, can greatly enhance an organization’s ability to manage shadow AI risks.
“When potential risks emerge, such as concerns about tools like DeepSeek and Fireflies.AI, we collaboratively develop mitigation strategies,” she says.
This governance group not only looks at and handles risks, but explains its decisions and the reasons behind them, helping create transparency and shared responsibility, Patria adds.
Morris agrees. “Transparency means there are no surprises. Employees should know which AI tools are approved, how decisions are made, and where to go with questions or new ideas,” he says.
6. Build continuous, role-based AI training
Training is one of the most effective ways to prevent accidental misuse of AI tools. The key is to keep it succinct, relevant, and recurring.
“Keep training short, visual, and role-specific,” says Fisher. “Avoid long slide decks and use stories, quick demos, and clear examples instead.”
Patria says Babson integrates AI risk awareness into annual information security training, and sends periodic newsletters about new tools and emerging risks.
“Routine training sessions are offered to ensure employees understand approved AI tools and emerging risks, while departmental AI champions are encouraged to facilitate dialogue and share practical experiences, highlighting both the benefits and potential pitfalls of AI adoption,” she adds.
Taylor recommends embedding training in-browser, so employees learn best practices directly in the tools they’re using. “Cutting and pasting into a web browser or dragging and dropping a presentation seems innocuous until your sensitive data has left your ecosystem,” he says.
Gill notes that training should connect responsible use with performance outcomes.
“Employees need to understand that compliance and productivity work together,” he says. “Approved tools deliver faster results, better data accuracy, and fewer security incidents compared with shadow AI. Role-based, ongoing training can demonstrate how guardrails and governance protect both data and efficiency, ensuring that AI accelerates workflows rather than creating risk.”
Responsible AI use is good business
Ultimately, managing shadow AI isn’t just about reducing risk; it’s about supporting responsible innovation. CIOs who focus on trust, communication, and transparency can turn a potential problem into a competitive advantage.
“People generally don’t try and buck the system when the system is giving them what they’re looking for, especially when there’s more friction for the user in taking the shadow AI approach,” says Taylor.
Morris concurs. “The goal isn’t to scare people but to make them think before they act,” he says. “If they know the approved path is easy and safe, they’ll take it.”
That’s the future CIOs should work toward: an environment where people can innovate safely, feel trusted to experiment, and keep data protected, because responsible AI use isn’t just compliance; it’s good business.