6 strategies for CIOs to effectively manage shadow AI

As employees experiment with gen AI tools on their own, CIOs are facing a familiar challenge with shadow AI. Although it’s often well-intentioned innovation, it can create serious risks around data privacy, compliance, and security.

According to 1Password’s 2025 annual report, The Access-Trust Gap, shadow AI widens an organization’s exposure: 43% of employees use AI apps for work on personal devices, and 25% use unapproved AI apps at work.

Despite these risks, experts say shadow AI isn’t something to do away with completely. Rather, it’s something to understand, guide, and manage. Here are six strategies that can help CIOs encourage responsible experimentation while keeping sensitive data safe.

1. Establish clear guardrails with room to experiment

Managing shadow AI begins with getting clear on what’s allowed and what isn’t. Danny Fisher, chief technology officer at West Shore Home, recommends that CIOs classify AI tools into three simple categories: approved, restricted, and forbidden.

“Approved tools are vetted and supported,” he says. “Restricted tools can be used in a controlled space with clear limits, like only using dummy data. Forbidden tools, which are typically public or unencrypted AI systems, should be blocked at the network or API level.”

Matching each type of AI use with a safe testing space, such as an internal OpenAI workspace or a secure API proxy, lets teams experiment freely without risking company data, he adds.
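The three-tier scheme Fisher describes can be sketched as a simple policy lookup. This is a minimal illustration, not a real registry; the tool names and return strings are hypothetical.

```python
# A minimal sketch of a three-tier AI tool policy (approved / restricted / forbidden).
# Tool names and rules are hypothetical examples.

AI_TOOL_POLICY = {
    "internal-openai-workspace": "approved",   # vetted and supported
    "vendor-llm-sandbox": "restricted",        # controlled space, dummy data only
    "public-chatbot.example": "forbidden",     # public/unencrypted; block at network level
}

def check_tool(tool: str, uses_real_data: bool) -> str:
    """Return an action for a requested AI tool under the three-tier policy."""
    tier = AI_TOOL_POLICY.get(tool, "forbidden")  # unknown tools default to forbidden
    if tier == "approved":
        return "allow"
    if tier == "restricted":
        return "allow" if not uses_real_data else "deny: dummy data only"
    return "deny: blocked at network/API level"
```

In practice the "deny" outcomes would map to network or API-level blocks rather than a string, but the defaulting of unknown tools to the strictest tier is the key design choice.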

Jason Taylor, principal enterprise architect at LeanIX, an SAP company, says clear rules are essential in today’s fast-moving AI world.

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.”

Taylor adds that companies should also create a clear list that explains which types of data are or aren’t safe to use, and in what situations. A modern data loss prevention tool can help by automatically finding and labeling data, and enforcing least-privilege or zero-trust rules on who can access what.

Patty Patria, CIO at Babson College, notes it’s also important for CIOs to establish specific guardrails for no-code/low-code AI tools and vibe-coding platforms.

“These tools empower employees to quickly prototype ideas and experiment with AI-driven solutions, but they also introduce unique risks when connecting to proprietary or sensitive data,” she says.

To deal with this, Patria says companies should set up security layers that let people experiment safely on their own but require extra review and approval whenever someone wants to connect an AI tool to sensitive systems.

“For example, we’ve recently developed clear internal guidance for employees outlining when to involve the security team for application review and when these tools can be used autonomously, ensuring both innovation and data protection are prioritized,” she says. “We also maintain a list of AI tools we support, and which we don’t recommend if they’re too risky.”

2. Maintain continuous visibility and inventory tracking

CIOs can’t manage what they can’t see. Experts say maintaining an accurate, up-to-date inventory of AI tools is one of the most important defenses against shadow AI.

“The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring.

Ari Harrison, VP of IT at branding manufacturer Bamko, says his team takes a layered approach to maintaining visibility.

“We maintain a living registry of connected applications by pulling from Google Workspace’s connected-apps view and piping those events into our SIEM [security information and event management system],” he says. “Microsoft 365 offers similar telemetry, and cloud access security broker tools can supplement visibility where needed.”

That layered approach gives Bamko a clear map of which AI tools are touching corporate data, who authorized them, and what permissions they have.

Mani Gill, SVP of product at cloud-based iPaaS Boomi, argues that manual audits are no longer enough.

“Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. This gives organizations instant, real-time visibility into what each agent is doing, how much data it’s using, and whether it’s following the rules.

Tanium chief security advisor Tim Morris agrees that continuous discovery across every device and application is key. “AI tools can pop up overnight,” he says. “If a new AI app or browser plugin appears in your environment, you should know about it immediately.”
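The discovery loop described above, comparing newly observed connected apps against a living registry and flagging anything new, can be sketched generically. The event fields below are hypothetical; a real pipeline would pull them from a workspace audit-log export or CASB feed and alert via SIEM.

```python
# A minimal sketch of continuous AI-app discovery: compare OAuth-connected apps
# observed in audit events against a living registry and flag anything new for
# review. Event shape and app names are hypothetical examples.

known_registry = {"approved-ai-assistant", "meeting-notes-bot"}

def flag_new_apps(events: list[dict], registry: set[str]) -> list[str]:
    """Return app names seen in audit events that are not yet in the registry."""
    new = []
    for event in events:
        app = event.get("app_name")
        if app and app not in registry and app not in new:
            new.append(app)  # surface immediately, e.g., alert via SIEM
    return new

events = [
    {"app_name": "approved-ai-assistant", "user": "alice"},
    {"app_name": "shiny-new-llm-plugin", "user": "bob"},
]
```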

3. Strengthen data protection and access controls

When it comes to securing data from shadow AI exposure, experts point to the same foundation: data loss prevention (DLP), encryption, and least privilege.

“Use DLP rules to block uploads of personal information, contracts, or source code to unapproved domains,” Fisher says. He also recommends masking sensitive data before it leaves the organization, and turning on logging and audit trails to track every prompt and response in approved AI tools.

Harrison echoes that approach, noting that Bamko focuses on the security controls that matter most in practice: outbound DLP and content inspection to prevent sensitive data from leaving; OAuth governance to keep third-party permissions to least privilege; and access limits that restrict uploads of confidential data to only approved AI connectors within its productivity suite.

In addition, the company treats broad permissions, such as read and write access to documents or email, as high-risk and requires explicit approval, while narrow, read-only permissions can move faster, Harrison adds.

“The goal is to allow safe day-to-day creativity while reducing the chance of a single click granting an AI tool more power than intended,” he says.

Taylor adds that security must be consistent across environments. “Encrypt all sensitive data at rest, in use, and in motion, employ least-privilege and zero-trust policies for data access permissions, and ensure DLP systems can scan for, tag, and protect sensitive data.”

He notes that companies should ensure these controls work the same on desktop, mobile, and web, and keep checking and updating them as new situations come up.
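The outbound DLP rules Fisher and Taylor describe amount to scanning content for sensitive patterns before it leaves for an unapproved destination. A toy sketch follows; the patterns and domains are illustrative only, and real DLP tools use far richer detection, tagging, and masking.

```python
import re

# A minimal sketch of an outbound DLP-style check: block uploads containing
# sensitive patterns (here, an SSN-like number and an email address) unless the
# destination domain is on an approved list. Patterns and domains are
# hypothetical examples.

APPROVED_DOMAINS = {"approved-ai.example.com"}

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def allow_upload(text: str, domain: str) -> bool:
    """Allow the upload only if the domain is approved or no sensitive data is found."""
    if domain in APPROVED_DOMAINS:
        return True  # approved connector; logging and audit trails would still apply
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)
```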

4. Clearly define and communicate risk tolerance

Defining risk tolerance is as much about communication as it is about control. Fisher advises CIOs to tie risk tolerance to data classification instead of opinion. His team uses a simple color-coded system: green for low-risk activities, such as marketing content; yellow for internal documents that must use approved tools; and red for customer or financial data that can’t be used with AI systems.

“Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories (what’s permitted, what needs approval, and what’s prohibited) and communicating that framework through leadership briefings, onboarding, and internal portals.

Patria says Babson’s AI Governance Committee plays a key role in this process. “When potential risks emerge, we bring them to the committee for discussion and collaboratively develop mitigation strategies,” she says. “In some cases, we’ve decided to block tools for staff but permit them for classroom use. That balance helps manage risk without stifling innovation.”
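Fisher’s color-coded scheme ties risk tolerance directly to data classification, which keeps the policy easy to look up. A minimal sketch, with the mapping as a hypothetical example:

```python
# A minimal sketch of tying risk tolerance to data classification, following
# the green/yellow/red scheme described above. The mapping is hypothetical.

RISK_POLICY = {
    "green": "any tool",               # low-risk, e.g., marketing content
    "yellow": "approved tools only",   # internal documents
    "red": "no AI use",                # customer or financial data
}

def ai_policy_for(data_class: str) -> str:
    """Look up what AI usage is allowed for a given data classification."""
    return RISK_POLICY.get(data_class, "no AI use")  # unknown classes default to red
```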

5. Foster transparency and a culture of trust

Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.

“Transparency means employees always know what’s allowed, what’s being monitored, and why,” Fisher says. “Publish your governance approach on the company intranet and include real examples of both good and risky AI use. It’s not about catching people. You’re building confidence that utilizing AI is safe and fair.”

Taylor recommends publishing a list of officially sanctioned AI offerings and keeping it updated. “Be clear about the roadmap for delivering capabilities that aren’t yet available,” he says, adding that companies should provide a process for requesting exceptions or new tools. That openness shows governance exists to support innovation, not hinder it.

Patria says in addition to technical controls and clear policies, establishing dedicated governance groups, like the AI Governance Committee, can greatly enhance an organization’s ability to manage shadow AI risks.

“When potential risks emerge, such as concerns about tools like DeepSeek and Fireflies.AI, we collaboratively develop mitigation strategies,” she says.

This governance group not only looks at and handles risks, but explains its decisions and the reasons behind them, helping create transparency and shared responsibility, Patria adds.

Morris agrees. “Transparency means there are no surprises. Employees should know which AI tools are approved, how decisions are made, and where to go with questions or new ideas,” he says.

6. Build continuous, role-based AI training

Training is one of the most effective ways to prevent accidental misuse of AI tools. The key is to keep it succinct, relevant, and recurring.

“Keep training short, visual, and role-specific,” says Fisher. “Avoid long slide decks and use stories, quick demos, and clear examples instead.”

Patria says Babson integrates AI risk awareness into annual information security training, and sends periodic newsletters about new tools and emerging risks.

“Routine training sessions are offered to ensure employees understand approved AI tools and emerging risks, while departmental AI champions are encouraged to facilitate dialogue and share practical experiences, highlighting both the benefits and potential pitfalls of AI adoption,” she adds.

Taylor recommends embedding training in-browser, so employees learn best practices directly in the tools they’re using. “Cutting and pasting into a web browser or dragging and dropping a presentation seems innocuous until your sensitive data has left your ecosystem,” he says.

Gill notes that training should connect responsible use with performance outcomes.

“Employees need to understand that compliance and productivity work together,” he says. “Approved tools deliver faster results, better data accuracy, and fewer security incidents compared with shadow AI. Role-based, ongoing training can demonstrate how guardrails and governance protect both data and efficiency, ensuring that AI accelerates workflows rather than creating risk.”

Responsible AI use is good business

Ultimately, managing shadow AI isn’t just about reducing risk; it’s about supporting responsible innovation. CIOs who focus on trust, communication, and transparency can turn a potential problem into a competitive advantage.

“People generally don’t try and buck the system when the system is giving them what they’re looking for, especially when there’s more friction for the user in taking the shadow AI approach,” says Taylor.

Morris concurs. “The goal isn’t to scare people but to make them think before they act,” he says. “If they know the approved path is easy and safe, they’ll take it.”

That’s the future CIOs should work toward: a place where people can innovate safely, feel trusted to experiment, and keep data protected, because responsible AI use isn’t just compliance; it’s good business.



Category: News | November 28, 2025
