Along with the publicized benefits, gen AI brings new risks to businesses and their customers. Just over half of organizations using AI report at least one instance of a negative consequence, according to research from McKinsey, with nearly one-third of respondents mentioning issues stemming from AI inaccuracy.
Hallucinations aren’t the only challenge associated with implementing gen AI. In addition to commonly cited concerns, such as business value, security, and data readiness, Gartner suggests organizations may overlook critical blind spots, including shadow AI, technical debt, skills erosion, data sovereignty, interoperability issues, and vendor lock-in.
The combined risks of inaccuracies, data leaks, and other failures mean regulators around the globe are rushing to tighten rules so that businesses operate within strict guidelines and their customers are protected. The most high-profile AI legislation is the EU's AI Act, a comprehensive risk-based framework for AI and part of an evolving regulatory landscape. Legal firm Bird & Bird has developed an AI Horizon Tracker, which analyzes 22 jurisdictions to illustrate AI regulations, including laws, guidelines, and actions. The tracker presents a broad spectrum of regional approaches, from no regulation at all to rigid material requirements.
So digital leaders tasked with steering AI initiatives across this environment face a potential governance minefield. The regulatory bind is such that some businesses question whether experimenting with AI is a risk worth taking.
Research from manufacturing specialist RS, for example, found that AI and ML are a priority for only 34% of senior leaders in industrial sectors. Mike Bray, VP of innovation at the company, says the finding simply reflects a degree of caution around adoption.
However, while governance could be seen as a barrier to innovation, experts suggest compliance with rules and regulations creates useful ground rules that can help guide AI explorations in the right direction. In fact, some experts believe the deployment of AI can help CIOs and their business peers manage risk.
Is governance really a barrier to innovation?
Ian Ruffle, head of data and insight at UK breakdown specialist RAC, acknowledges that digital leaders must meet compliance head-on. His organization runs an AI governance forum, involving information security and other LOB specialists to ensure the company focuses on the right areas.
“I think you’ve got to feel your way through the challenge,” he says. “We don’t want to be a business that’s scared of AI. You’ve got to embrace its potential.” Ruffle says the key lesson from his organization’s data-led explorations is that effective governance is a team game: colleagues should work together to consider how rules and regulations can guide AI implementations.
“Success is about having the right relationships and never trying to sweep issues under the carpet,” he says. “If you’re in a leadership role, and looking at a new piece of technology, your first thought should be to involve the right kind of governance around what you’re doing and the way you’re processing data. Your change processes must be carefully monitored so you don’t do things wrong.”
Bray at RS also believes governance should guide AI implementations. He says the company is in a similar position to other businesses and must navigate a mix of opportunities and risks. Acknowledging that mix means strong governance is critical to ensuring RS uses AI in ways that benefit its customers, suppliers, and internal teams, all while mitigating risk.
“Our learning is that having the right foundations of governance, security, and compliance is essential to use AI effectively, as is having a clear understanding of the problem or opportunity to address before determining whether AI is the right solution to deploy, rather than being led by the technology itself,” he says.
What’s crucial to recognize, suggests Charlotte Bemand, director of digital futures at Hottinger Brüel & Kjær, who spoke at the DTX 2025 event in London in October, is that managing governance in an era of innovation involves a careful balance. Rather than being a fixed set of rules and regulations, governance evolves. Smart business leaders ensure their guardrails and frameworks match the organization's maturity.
“In my business, we have highly regulated end markets that are super-sensitive, and we have a much higher degree of compliance activity in that space,” she says. “There’s also a whole range of markets and customers, where they’re expecting rapid innovation and agility from us, and we have to balance both of those things.”
Compliance as a route to exploration
The key thing to recognize, says Shruti Sharma, chief data and AI officer at Save the Children UK, who spoke at the same event, is the fine line between setting foundations and encouraging innovation.
She says governance often comes with a bad reputation. There’s a common perception that compliance involves bureaucracy and lots of administration, but governance doesn’t need to be a 100-page rulebook or a set of policies that people keep referring to, she says. The best way to manage governance is to establish clear boundaries.
“In addition to embedding personas and role-based access, we allow people to have sandbox environments to explore, but they also have clarity,” she adds. “For me, clarity is about boundaries and putting the right definitions in place. People can then explore and experiment within a remit that’s also safe for the organization.”
In short, when governance is embedded within the innovation process, rather than being seen as an additional obstacle to overcome, organizations can use compliance as a structure to explore the potential of AI safely and effectively.
Paul Neville, director of digital, data, and technology at UK agency The Pensions Regulator, says that joined-up reality will surprise some professionals. “People tend to talk about risk and opportunities as two separate things, but it’s really one long continuum,” he says. “What’s a risk can be an opportunity, and what’s an opportunity can be a risk. Both things are true.”
Neville spoke with an unnamed CEO recently who was so absorbed in the problems of today that they couldn’t imagine a completely different world where, by exploiting automation and AI, things could be done differently. “They were so focused on risk that they couldn’t move forward,” he says. “And that was quite sad.”
The key is having the vision to imagine something different, says Neville. The best leaders paint a picture of a better tomorrow, highlight potential risks, and provide mechanisms to manage those concerns. To help create a clear vision, Neville and his colleagues have established an AI advisory council.
“That council will have external and internal members, but will be chaired by our COO rather than by me to give it independence,” he says. “And the council will also mean we’re able to properly kick the tires of the things we’re doing, and take an ethical view. It challenges us about opportunities so it’ll consider governance and innovation.”
Using AI to manage risk
Art Hu, global CIO at tech giant Lenovo, says there’s no single way to manage the balance between governance and AI as contexts and responses vary across sectors and companies. However, one potential route to success is using AI to manage risk. Hu believes a tactical investment in AI will produce dividends for CIOs who manage governance.
“One of the strengths of gen AI is suggesting lots of different sources, way more than a human can, and making recommendations with some grounding of what you should do,” he says. “Get your approach right, and AI tools can improve your risk assessment, mitigation, and management functions.”
That’s certainly the case for Dave Roberts, VP of environment, health, and safety at manufacturing, construction, and industrial services conglomerate The Heico Companies. He helps the organization minimize risks and potentially serious incidents across all work sites, ensuring regulatory requirements for each region are met. What he encounters in this role is an ever-growing raft of guidelines and rules.
“I deal with a lot of regulations, and it seems like those keep growing,” he says. “Part of my job is to consider how we keep up with all this change. So anytime I can find a way to simplify the world and manage through the regulations, then that’s potentially useful.”
And that’s where AI comes in. Roberts recognized that Heico needed a system to reduce the effort involved in managing risk globally. He scanned the IT market and discovered that Benchmark Gensuite’s PSI AI Advisor, which uses AI to extract and summarize details from incident reports, could address the company’s long-standing challenge of managing major risks.
By using insights gleaned from the AI assistant, Heico has experienced a significant reduction in workplace incidents across its facilities, helping to reduce compensation costs by 60%. Roberts says these results change the conversation about the link between AI and governance.
“Business leaders are worried about the big stuff,” says Roberts. “This technology gets you to what’s important. Our success builds credibility with the leadership. They know where there are bigger risks because they have the insight at their fingertips.”