Like all technology-related things, shadow IT has evolved.
No longer just a SaaS app handling some worker’s niche need or a few personal BlackBerries snuck in by sales to access work files on the go, shadow IT today is more likely to involve AI, as employees test out all sorts of AI tools without the knowledge or blessing of IT.
The volume of shadow AI is staggering, according to research from Cyberhaven, a maker of data protection software. Its spring 2024 AI Adoption and Risk Report found that 74% of ChatGPT usage at work occurs through noncorporate accounts, as does 94% of Google Gemini usage and 96% of Bard usage. As a result, unauthorized AI is eating corporate data, thanks to employees who are feeding legal documents, HR data, source code, and other sensitive corporate information into AI tools that IT hasn’t approved for use.
Shadow AI is practically inevitable, says Arun Chandrasekaran, a distinguished vice president analyst at research firm Gartner. Some workers are curious about AI tools, seeing them as a way to offload busywork and boost productivity. Others want to master their use, seeing that as a way to avoid being displaced by the technology. Still others have become comfortable with AI for personal tasks and now want the technology on the job.
What could go wrong?
Those reasons seem to make sense, Chandrasekaran acknowledges, but they don’t justify the risks that shadow AI creates for the organization.
“Most organizations want to avoid shadow AI because the risks are enormous,” he says.
For example, Chandrasekaran says, there is a good chance that sensitive data could be exposed, and that proprietary data could help an AI model (particularly if it’s open source) get smarter, thereby aiding competitors who may use the same model.
At the same time, many workers lack the skills required to use AI effectively, further upping the risk. They may not know how to feed the model the right data, prompt it with the right inputs to produce optimal outputs, or verify the accuracy of what it generates. For example, workers can use generative AI to create computer code, but they can’t effectively check that code for problems if they don’t understand its syntax or logic. “That could be quite detrimental,” Chandrasekaran says.
Meanwhile, shadow AI could cause disruptions among the workforce, he says, as workers who are surreptitiously using AI could have an unfair advantage over those employees who have not brought in such tools. “It is not a dominant trend yet, but it is a concern we hear in our discussions [with organizational leaders],” Chandrasekaran says.
Shadow AI could introduce legal issues, too. For instance, the unsanctioned AI may have illegally accessed the intellectual property of others, leaving the organization answering for the infringement. It could produce biased results that run afoul of antidiscrimination laws and company policies. Or it could generate faulty outputs that get passed on to customers and clients. All of these scenarios could create liabilities for the organization, which would be on the hook for any violations or damages that result.
Indeed, organizations are already facing consequences when AI systems fail. Case in point: A Canadian tribunal ruled in February 2024 that Air Canada is liable for misinformation given to a consumer by its AI chatbot.
The chatbot in that case was a sanctioned piece of technology, which, IT leaders say, just goes to show: if the risks are that high for officially approved technology, why add even more by letting shadow AI go unchecked?
10 ways to head off disaster
Just as it was with shadow IT of yore, there’s no one-and-done solution that can prevent the use of unsanctioned AI technologies or the possible consequences of their use.
However, CIOs can adopt various strategies to help eliminate the use of unsanctioned AI, prevent disasters, and limit the blast radius if something does go awry. Here, IT leaders share 10 ways CIOs can do so.
1. Set an acceptable use policy for AI
A big first step is working with other executives to create an acceptable use policy that outlines when, where, and how AI can be used, and that reiterates the organization’s overall prohibition against using tech that hasn’t been approved by IT, says David Kuo, executive director of privacy compliance at Wells Fargo and a member of the Emerging Trends Working Group at the nonprofit governance association ISACA. It sounds obvious, but most organizations don’t yet have one. A March 2024 ISACA poll of 3,270 digital trust professionals found that only 15% of organizations have AI policies (even as 70% of respondents said their staff use AI and 60% said employees are using genAI).
2. Build awareness about the risks and consequences
Kuo acknowledges the limits of Step 1: “You can set an acceptable use policy but people are going to break the rules.” So warn them about what can happen.
“There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. Outline the risks associated with AI in general as well as the heightened risks that come with the unsanctioned use of the technology.
Kuo adds: “It can’t be one-time training, and it can’t just say ‘Don’t do this.’ You have to educate your workforce. Tell them the problems that you might have with [shadow AI] and the consequences of their bad behavior.”
3. Manage expectations
Although AI adoption is rapidly rising, research shows that confidence in harnessing the power of intelligent technologies has gone down among executives, says Fawad Bajwa, global AI practice leader at Russell Reynolds Associates, a leadership advisory firm. Bajwa believes the decline is due in part to a mismatch between expectations for AI and what it actually can deliver.
He advises CIOs to educate their organizations on where, when, how, and to what extent AI can deliver value.
“Having that alignment across the organization on what you want to achieve will allow you to calibrate the confidence,” he says. That in turn could keep workers from chasing AI solutions on their own in the hopes of finding a panacea to all their problems.
4. Review, beef up access controls
One of the biggest risks around AI is data leakage, says Krishna Prasad, chief strategy officer and CIO at UST, a digital transformation solutions company.
Sure, that risk exists with planned AI deployments, but in those cases CIOs can work with business, data, and security colleagues to mitigate it. They don’t have the same risk review and mitigation opportunities when workers deploy AI without their involvement, which ups the chances that sensitive data will be exposed.
To help head off such scenarios, Prasad advises tech, data, and security teams to review their data access policies and controls as well as their overall data loss prevention program and data monitoring capabilities to ensure they’re robust enough to prevent leakage with unsanctioned AI deployments.
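To make the idea concrete, here is a minimal sketch, in Python, of the kind of pattern matching a data loss prevention check performs on text before it leaves the network for an external AI tool. The pattern names and the check_outbound_text helper are illustrative assumptions, not any particular DLP product’s API:

```python
import re

# Illustrative patterns only; a real DLP program would rely on vendor-maintained
# classifiers and far broader coverage (documents, source code, PII, and so on).
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_hint": re.compile(r"\b(?:api[_-]?key|secret)\s*[:=]\s*\S+", re.IGNORECASE),
}

def check_outbound_text(text: str) -> list[str]:
    """Return the names of sensitive patterns found in text bound for an external AI tool."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this contract. Customer SSN: 123-45-6789."
    hits = check_outbound_text(prompt)
    if hits:
        # In practice this event would be blocked or flagged for review, not just printed.
        print(f"Outbound text matched sensitive patterns: {hits}")
```

The point of the exercise is less the regexes themselves than confirming that outbound traffic to unsanctioned AI endpoints actually passes through such a checkpoint at all.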
5. Block access to AI tools
Another step that can help, Kuo says: blacklist AI tools, such as OpenAI’s ChatGPT, and use firewall rules to prevent those tools from being accessed from company systems.
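As a rough illustration of that approach, the sketch below shows how an egress proxy or similar control might match requests against a domain blocklist. The domain list and the is_request_allowed helper are hypothetical assumptions for illustration, not configuration for any specific firewall product:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of AI tool domains the organization has not sanctioned.
BLOCKED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com"}

def is_request_allowed(url: str) -> bool:
    """Return False if the request targets a blocked AI tool domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

if __name__ == "__main__":
    print(is_request_allowed("https://chat.openai.com/"))   # False: blocked
    print(is_request_allowed("https://example.com/docs"))   # True: allowed
```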
6. Enlist allies in the effort
CIOs shouldn’t be the only ones working to prevent shadow AI, Kuo says. They should enlist their C-suite colleagues, who all have a stake in protecting the organization against negative consequences, and get them on board with educating their staffers on the risks of using AI tools that go against official IT procurement and AI use policies.
“Better protection takes a village,” Kuo adds.
7. Create an IT AI roadmap that drives organizational priorities, strategies
Employees typically bring in technologies that they think can help them do their jobs better, not because they’re trying to hurt their employers. So CIOs can reduce the demand for unsanctioned AI by delivering the AI capabilities that best help workers achieve the priorities set for their roles.
Bajwa says CIOs should see this as an opportunity to lead their organizations into future successes by devising AI roadmaps that not only align to business priorities but actually shape strategies. “This is a business redefining moment,” Bajwa says.
8. Don’t be the ‘department of no’
Executive advisers say CIOs (and their C-suite colleagues) can’t drag their feet on AI adoption because it hurts the organization’s competitiveness and ups the chances of shadow AI. Yet that’s happening to some degree in many places, according to Genpact and HFS Research. Their May 2024 report revealed that 45% of organizations have adopted a “wait and watch” stance on genAI and 23% are “deniers” who are skeptical of genAI.
“Curtailing the use of AI is completely counterproductive today,” Prasad says. Instead, he says, CIOs must enable the AI capabilities offered within the platforms already in use in the enterprise, train workers to use and optimize those capabilities, and speed adoption of the AI tools expected to deliver the best ROI, reassuring workers at all levels that IT is committed to an AI-enabled future.
9. Empower workers to use AI as they want
ISACA’s March 2024 poll found that 80% of respondents believe many jobs will be modified because of AI. If that’s the case, give workers the tools to use AI to make the modifications that will improve their jobs, says Beatriz Sanz Sáiz, global data and AI leader at EY Consulting.
She advises CIOs to give workers throughout their organizations (not just in IT) the tools and training to create, or co-create with IT, some of their own intelligent assistants. She also advises CIOs to build a flexible technology stack so they can quickly support such efforts and pivot to new large language models (LLMs) and other intelligent components as worker demands arise, making employees more likely to turn to IT rather than external sources to build solutions.
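One way to picture that kind of flexibility is a thin abstraction layer that keeps worker-facing assistants independent of any single model provider, so swapping LLMs means adding a backend rather than rewriting tools. The sketch below is purely illustrative; the LLMBackend interface and the vendor class names are assumptions, not any vendor’s actual SDK:

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Minimal interface the rest of the stack codes against."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorABackend(LLMBackend):
    # Hypothetical placeholder; a real implementation would call the vendor's SDK here.
    def complete(self, prompt: str) -> str:
        return f"[vendor A response to: {prompt}]"

class VendorBBackend(LLMBackend):
    # Pivoting to a new provider means adding a class like this, not rewriting assistants.
    def complete(self, prompt: str) -> str:
        return f"[vendor B response to: {prompt}]"

def build_assistant(backend: LLMBackend):
    """Worker-facing assistants depend only on the LLMBackend interface."""
    def assistant(question: str) -> str:
        return backend.complete(f"Answer concisely: {question}")
    return assistant

if __name__ == "__main__":
    assistant = build_assistant(VendorABackend())  # swap in VendorBBackend() to change providers
    print(assistant("What does our expense policy say about travel?"))
```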
10. Be open to new, innovative uses
AI isn’t new, but the quickly escalating rate of adoption is showing more of its problems and potentials. CIOs who want to help their organizations harness the potentials (without all the problems) should be open-minded about new ways of using AI so employees don’t feel they need to go it alone.
Bajwa offers an example around AI hallucinations: Yes, hallucinations have gotten a nearly universal bad rap, but he points out that they could be useful in creative spaces such as marketing.
“Hallucinations can come up with ideas that none of us have thought about before,” he says.
CIOs who are open to such potentials and then enact the right guardrails, such as rules around what level of human oversight is required, will be more likely to have IT invited into such AI innovation rather than excluded from it. And isn’t that the goal?