10 ways to prevent shadow AI disaster

Like all technology-related things, shadow IT has evolved.

No longer just a SaaS app handling some worker’s niche need or a few personal BlackBerries snuck in by sales to access work files on the go, shadow IT today is more likely to involve AI, as employees test out all sorts of AI tools without the knowledge or blessing of IT.

The volume of shadow AI is staggering, according to research from Cyberhaven, a maker of data protection software. Its spring 2024 AI Adoption and Risk Report found that 74% of ChatGPT usage at work occurs through noncorporate accounts, as does 94% of Google Gemini usage and 96% of Bard usage. As a result, unauthorized AI is eating your corporate data, as employees feed legal documents, HR data, source code, and other sensitive corporate information into AI tools that IT hasn’t approved for use.

Shadow AI is practically inevitable, says Arun Chandrasekaran, a distinguished vice president analyst at research firm Gartner. Some workers are curious about AI tools, seeing them as a way to offload busy work and boost productivity. Others want to master their use, seeing that as a way to avoid being displaced by the technology. Still others became comfortable with AI for personal tasks and now want the technology on the job.

What could go wrong?

Those reasons seem to make sense, Chandrasekaran acknowledges, but they don’t justify the risks that shadow AI creates for the organization.

“Most organizations want to avoid shadow AI because the risks are enormous,” he says.

For example, Chandrasekaran says, there is a good chance that sensitive data could be exposed, and that proprietary data could help an AI model (particularly if it’s open source) get smarter, thereby aiding competitors who may use the same model.

At the same time, many workers lack the skills required to use AI effectively, further upping the risk level. They may not be skilled enough to feed the AI model the right data to generate quality outputs, prompt the model with the right inputs to produce optimal outputs, or verify the accuracy of the outputs. For example, workers can use generative AI to create computer code, but they can’t effectively check that code for problems if they don’t understand its syntax or logic. “That could be quite detrimental,” Chandrasekaran says.
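To make that risk concrete, consider the kind of plausible-looking code a generative AI tool might hand back. The snippet below is a hypothetical illustration, not something from the research cited here: it runs without errors, yet it carries a classic Python defect that a worker unfamiliar with the language’s semantics would likely miss.

```python
# Hypothetical AI-generated helper: looks clean and runs without errors.
def add_tag(record, tags=[]):  # subtle bug: mutable default argument
    """Append a record to a tag list and return the list."""
    tags.append(record)
    return tags

# The default list is created once and shared across calls,
# so results silently accumulate between unrelated requests:
print(add_tag("invoice"))   # ['invoice']
print(add_tag("contract"))  # ['invoice', 'contract'], not ['contract']
```

Nothing here fails loudly; the defect surfaces only as wrong data downstream, which is exactly the kind of problem an unskilled reviewer cannot catch.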

Meanwhile, shadow AI could cause disruptions among the workforce, he says, as workers who are surreptitiously using AI could have an unfair advantage over those employees who have not brought in such tools. “It is not a dominant trend yet, but it is a concern we hear in our discussions [with organizational leaders],” Chandrasekaran says.

Shadow AI could introduce legal issues, too. For instance, the unsanctioned AI may have illegally accessed the intellectual property of others, leaving the organization answering for the infringement. It could introduce biased results that run afoul of antidiscrimination laws and company policies. Or it could produce faulty outputs that get passed onto customers and clients. All those scenarios could create liabilities for the organization, which would be on the hook for any violations or damages caused as a result.

Indeed, organizations are already facing consequences when AI systems fail. Case in point: A Canadian tribunal ruled in February 2024 that Air Canada is liable for misinformation given to a consumer by its AI chatbot.

The chatbot in that case was a sanctioned piece of technology, which IT leaders say just goes to show: If the risks are that high with official technology, why add even more by letting shadow AI go unchecked?

10 ways to head off disaster

Just as it was with shadow IT of yore, there’s no one-and-done solution that can prevent the use of unsanctioned AI technologies or the possible consequences of their use.

However, CIOs can adopt various strategies to help eliminate the use of unsanctioned AI, prevent disasters, and limit the blast radius if something does go awry. Here, IT leaders share 10 ways CIOs can do so.

1. Set an acceptable use policy for AI

A big first step is working with other executives to create an acceptable use policy that outlines when, where, and how AI can be used and that reiterates the organization’s overall prohibitions against using tech that has not been approved by IT, says David Kuo, executive director of privacy compliance at Wells Fargo and a member of the Emerging Trends Working Group at the nonprofit governance association ISACA. That sounds obvious, but most organizations don’t yet have one: A March 2024 ISACA poll of 3,270 digital trust professionals found that only 15% of organizations have AI policies (even as 70% of respondents said their staff use AI and 60% said employees are using genAI).

2. Build awareness about the risks and consequences

Kuo acknowledges the limits of Step 1: “You can set an acceptable use policy but people are going to break the rules.” So warn them about what can happen.

“There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. Outline the risks associated with AI in general as well as the heightened risks that come with the unsanctioned use of the technology.

Kuo adds: “It can’t be one-time training, and it can’t just say ‘Don’t do this.’ You have to educate your workforce. Tell them the problems that you might have with [shadow AI] and the consequences of their bad behavior.”

3. Manage expectations

Although AI adoption is rapidly rising, research shows that confidence in harnessing the power of intelligent technologies has gone down among executives, says Fawad Bajwa, global AI practice leader at Russell Reynolds Associates, a leadership advisory firm. Bajwa believes the decline is due in part to a mismatch between expectations for AI and what it actually can deliver.

He advises CIOs to educate on where, when, how, and to what extent AI can deliver value.

“Having that alignment across the organization on what you want to achieve will allow you to calibrate the confidence,” he says. That in turn could keep workers from chasing AI solutions on their own in the hopes of finding a panacea to all their problems.

4. Review, beef up access controls

One of the biggest risks around AI is data leakage, says Krishna Prasad, chief strategy officer and CIO at UST, a digital transformation solutions company.

Sure, that risk exists with planned AI deployments, but in those cases CIOs can work with business, data, and security colleagues to mitigate it. They don’t have the same risk review and mitigation opportunities when workers deploy AI without their involvement, which ups the chances that sensitive data will be exposed.

To help head off such scenarios, Prasad advises tech, data, and security teams to review their data access policies and controls as well as their overall data loss prevention program and data monitoring capabilities to ensure they’re robust enough to prevent leakage with unsanctioned AI deployments.
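As one illustration of what such a review might add, here is a minimal sketch of an outbound-prompt filter that flags obviously sensitive strings before they leave the network. The pattern set and function names are illustrative assumptions, not a substitute for a real DLP platform, which relies on far richer detection (data classification labels, document fingerprinting, ML-based matching).

```python
import re

# Illustrative patterns only; real DLP tooling goes well beyond regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

if __name__ == "__main__":
    text = "Please review this config: AKIAABCDEFGHIJKLMNOP"
    print(flag_sensitive(text))  # ['aws_access_key']
```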

5. Block access to AI tools

Another step that can help, Kuo says: blacklist AI tools, such as OpenAI’s ChatGPT, and use firewall rules to prevent employees from accessing them from company systems.
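Enforcement details vary by firewall or secure web gateway, but as a minimal sketch, a domain block at an egress proxy could look like the following mitmproxy addon. The domain list is illustrative; in practice, teams typically rely on their gateway vendor’s AI/chat URL categories rather than hand-maintained lists.

```python
# Minimal sketch of an egress-proxy block, written as a mitmproxy addon.
# Run with: mitmdump -s block_ai.py (assumes mitmproxy is installed).
from mitmproxy import http

# Illustrative, hand-picked domains; vendor-maintained URL categories
# track newly launched AI tools automatically.
BLOCKED_HOSTS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host in BLOCKED_HOSTS:
        flow.response = http.Response.make(
            403,
            b"Access to unapproved AI tools is blocked by IT policy.",
            {"Content-Type": "text/plain"},
        )
```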

6. Enlist allies in the effort

CIOs shouldn’t be the only ones working to prevent shadow AI, Kuo says. They should enlist their C-suite colleagues, who all have a stake in protecting the organization against negative consequences, and get them on board with educating their staffers on the risks of using AI tools that go against official IT procurement and AI use policies.

“Better protection takes a village,” Kuo adds.

7. Create an IT AI roadmap that drives organizational priorities, strategies

Employees typically bring in technologies that they think can help them do their jobs better, not because they’re trying to hurt their employers. So CIOs can reduce the demand for unsanctioned AI by delivering the AI capabilities that best help workers achieve the priorities set for their roles.

Bajwa says CIOs should see this as an opportunity to lead their organizations into future successes by devising AI roadmaps that not only align to business priorities but actually shape strategies. “This is a business redefining moment,” Bajwa says.

8. Don’t be the ‘department of no’

Executive advisers say CIOs (and their C-suite colleagues) can’t drag their feet on AI adoption because it hurts the organization’s competitiveness and ups the chances of shadow AI. Yet that’s happening to some degree in many places, according to Genpact and HFS Research. Their May 2024 report revealed that 45% of organizations have adopted a “wait and watch” stance on genAI and 23% are “deniers” who are skeptical of genAI.

“Curtailing the use of AI is completely counterproductive today,” Prasad says. Instead, he says, CIOs must enable the AI capabilities offered within the platforms already in use in the enterprise, train workers to use and optimize those capabilities, and speed adoption of the AI tools expected to deliver the best ROI, all of which reassures workers at all levels that IT is committed to an AI-enabled future.

9. Empower workers to use AI as they want

ISACA’s March 2024 survey found that 80% of respondents believe many jobs will be modified because of AI. If that’s the case, give workers the tools to use AI to make the modifications that will improve their jobs, says Beatriz Sanz Sáiz, global data and AI leader at EY Consulting.

She advises CIOs to give workers throughout their organizations (not just in IT) the tools and training to create or co-create with IT some of their own intelligent assistants. She also advises CIOs to build a flexible technology stack so they can quickly support and enable such efforts as well as pivot to new large language models (LLMs) and other intelligent components as worker demands arise — thereby making employees more likely to turn to IT (rather than external sources) to build solutions.
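As a minimal sketch of what that flexibility can look like in code (class, provider, and function names here are illustrative assumptions), the idea is to put a thin, provider-agnostic interface between assistants and whatever LLM backs them, so swapping models is a configuration change rather than a rewrite:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The one interface every provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...

class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor SDK; stubbed for illustration.
        return f"[openai] {prompt}"

class LocalLlamaAdapter:
    def complete(self, prompt: str) -> str:
        return f"[local-llama] {prompt}"

# Adding or swapping a model is a registry entry, not a rewrite.
REGISTRY: dict[str, ChatModel] = {
    "openai": OpenAIAdapter(),
    "local-llama": LocalLlamaAdapter(),
}

def assistant_reply(provider: str, prompt: str) -> str:
    return REGISTRY[provider].complete(prompt)

if __name__ == "__main__":
    print(assistant_reply("local-llama", "Summarize today's tickets."))
```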

10. Be open to new, innovative uses

AI isn’t new, but the quickly escalating rate of adoption is showing more of its problems and potentials. CIOs who want to help their organizations harness the potentials (without all the problems) should be open-minded about new ways of using AI so employees don’t feel they need to go it alone.

Bajwa offers an example around AI hallucinations: Yes, hallucinations have gotten a nearly universal bad rap, but Bajwa points out that hallucinations could be useful in creative spaces such as marketing.

“Hallucinations can come up with ideas that none of us have thought about before,” he says.

CIOs who are open to such potentials and then enact the right guardrails, such as rules around what level of human oversight is required, will be more likely to have IT invited into such AI innovation rather than excluded from it. And isn’t that the goal?

