Shadow AI is on the rise. Here’s how to turn it into a strategic advantage

The risks of sharing legal information, financial data and sensitive code with shadow AI (that is, unauthorized generative AI tools) cannot be overstated.

A single data leak can lead to compliance violations, loss of invaluable IP and a decrease in public trust. Nevertheless, according to a recent study, working professionals in the U.S. and Canada aren’t overly concerned about their usage of shadow AI.

In fact, the vast majority (91%) of surveyed employees said they believe that shadow AI poses no risk, very little risk or some risk that’s outweighed by the reward. Perhaps even more disturbing, over a third of employees admitted to sharing sensitive information with these unauthorized AI tools.

Of the employees sharing data with shadow AI, 32% shared non-public product information; another 33% shared confidential client information; and 37% shared internal documents related to strategy or financial data. Were this sensitive data to leave the organization, the damage could be devastating and long-lasting.

Despite the risks, shadow AI is increasingly prevalent

According to the study, which surveyed 350 IT decision-makers (ITDMs) and 350 working professionals across enterprises in the U.S. and Canada, shadow AI is definitely on the rise. A whopping 93% of employees admitted to inputting data into generative AI tools without corporate approval. What's more, 60% of employees said they are using unapproved AI tools more than they were a year ago.

Across the board, ITDMs and working professionals are seeing an increase in shadow AI. In North America, 70% of ITDMs reported seeing unauthorized AI use in their organizations, and 82% of US-based employees said they knew coworkers who used AI tools without authorization.

The reasons for turning to unsanctioned AI tools are varied. Summarizing meeting notes and calls (56%) is a popular use case, as are brainstorming ideas (55%), analyzing data and reports (47%), drafting or editing emails and documents (47%) and generating client-facing content (34%).

Not only does this study highlight a rise in shadow AI usage and the related security concerns, but it also points out a general lack of adequate governance.

Governance concerns and leadership blind spots

Unlike working employees (91% of whom see little to no risk in using shadow AI), nearly all ITDMs (97%) acknowledge that the use of shadow AI poses significant risks to their enterprises. Most ITDMs (63%) say potential data leakage is the primary risk of shadow AI; however, risks related to hallucinations, discrimination and lack of explainability are prevalent as well.

Although ITDMs have approved some AI solutions for employee use — genAI text tools (73%), AI writing tools (60%) and code assistants (59%) — the ITDMs are playing both catch-up and whack-a-mole when it comes to shadow AI governance.

Most ITDMs (85%) report that employees are adopting AI faster than their IT teams can assess the tools, and more than half (53%) believe that their employees' use of personal devices for work-related AI tasks is creating blind spots in their organization's security posture. Given this precarious situation, enterprises should have clear, enforceable AI governance policies in place. Yet many apparently do not: only 54% of ITDMs say their policies on unauthorized AI use in the organization are effective.

Transforming the IT department from a gatekeeper into an enabler

Although this study emphasizes the prevalence of shadow AI and its corresponding security risks, there is an underlying opportunity here. Implemented correctly, generative AI tools can provide a strategic edge. By building transparent, collaborative and secure AI ecosystems, IT teams can help their employees work faster and more efficiently while also securing sensitive data and minimizing risks related to data leaks and compliance violations.

The first step is to assess how employees are using generative AI tools. Once AI usage patterns are established, create an official list of sanctioned tools. During the vendor due diligence process, consider using API access to cloud-based AI tools that offer robust security, data control and compliance measures.
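As a purely illustrative sketch of what a sanctioned-tool list could look like in practice, the Python snippet below refuses any tool that is not on the allowlist and routes approved requests through a company-controlled gateway. The tool names, the gateway URL and the AI_GATEWAY_TOKEN environment variable are hypothetical placeholders, not anything described in the study.

import os
import requests  # assumes the third-party requests package is installed

# Hypothetical allowlist of sanctioned generative AI tools, maintained by IT.
SANCTIONED_TOOLS = {
    "summarizer": "https://ai-gateway.example.internal/v1/summarize",
    "code-assistant": "https://ai-gateway.example.internal/v1/code",
}

def call_sanctioned_tool(tool: str, prompt: str) -> str:
    """Route a request through the company-controlled gateway, refusing unknown tools."""
    if tool not in SANCTIONED_TOOLS:
        raise PermissionError(f"'{tool}' is not a sanctioned AI tool")
    response = requests.post(
        SANCTIONED_TOOLS[tool],
        json={"prompt": prompt},
        headers={"Authorization": f"Bearer {os.environ['AI_GATEWAY_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]

Routing every request through one gateway also gives the organization a single place to enforce logging, data-loss controls and vendor compliance terms.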

Another approach, which might be prohibitively expensive for smaller organizations, is to build a proprietary AI stack in-house. Some organizations may opt to build customized, in-house systems on top of open-weight models such as Meta's Llama or DeepSeek, or on offerings from providers like Anthropic and OpenAI, and then further enhance these models via retrieval-augmented generation (RAG). By self-hosting the models, an organization can ensure that all sensitive corporate data remains inside the network.
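To make the RAG pattern concrete, here is a minimal, self-contained sketch: it retrieves the internal documents most relevant to a query and folds them into the prompt sent to a self-hosted model. The generate parameter stands in for whatever locally hosted model the organization runs, and the keyword-overlap scoring is deliberately naive; a production stack would use embeddings and a vector database.

from typing import Callable

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank internal documents by naive keyword overlap with the query and keep the top k."""
    terms = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer_with_rag(query: str, documents: list[str], generate: Callable[[str], str]) -> str:
    """Augment the prompt with retrieved internal context before calling the self-hosted model."""
    context = "\n---\n".join(retrieve(query, documents))
    prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)  # the model runs in-house, so the data never leaves the network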

After assessing employees' AI usage, conducting vendor due diligence and getting a model up and running, guardrails must be put in place. This entails auditing model outputs, creating role-based access controls and flagging any unauthorized access in real time.
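Below is a minimal sketch of such guardrails, assuming a simple role map and Python's standard logging module as the audit trail; the roles, capabilities and alerting behavior are illustrative assumptions rather than a prescribed design.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

# Hypothetical role-based access map for approved AI capabilities.
ROLE_PERMISSIONS = {
    "analyst": {"summarize", "analyze"},
    "engineer": {"summarize", "code"},
}

def authorize_and_log(user: str, role: str, capability: str) -> bool:
    """Apply role-based access control and record every request for later output audits."""
    allowed = capability in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("%s user=%s role=%s capability=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user, role, capability, allowed)
    if not allowed:
        # Flag the attempt in real time; in practice this might page the security team.
        audit_log.warning("Unauthorized AI access attempt: user=%s capability=%s", user, capability)
    return allowed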

Rectify any disconnect between IT personnel and senior leadership

To establish organization-wide AI alignment, everyone should be on the same page. Unfortunately, this is rarely the case. According to the study, 90% of employees trust shadow AI tools to protect their data, and 50% believe there's little to no risk in using these unapproved tools.

To be sure, AI training programs are needed to educate employees about the risks inherent in using unsanctioned AI tools. Also consider creating AI sandboxes where employees can test new AI tools, and reward personnel who follow generative AI best practices.

Given that only 31% of ITDMs believe that senior leaders from other departments fully understand the risks posed by shadow AI, it is clear that senior leadership needs education as well. This current disconnect between ITDMs and other executives creates an untenable governance vacuum. Everyone needs to get on the same page.

The main takeaway is that shadow AI poses a bevy of threats, not the least of which is the potential for breaches that expose sensitive data. As the ManageEngine study showed, 32% of employees admitted to entering confidential client data into AI tools without confirming company approval, and another 37% admitted to entering private, internal company data into such tools.

The danger is palpable, but so is the opportunity. If IT leaders can shift from playing defense to building secure AI ecosystems that employees feel empowered to use, shadow AI can become a strategic advantage.

This article is published as part of the Foundry Expert Contributor Network.