Kieran Gilmurray, CEO of KG & Co and chief AI innovator at TTG, was recently recognized among the Top 50 Thought Leaders on Agentic AI in 2025.
A veteran of digital transformation with experience as a CIO, CTO and chief AI officer, he has spent more than two decades leading enterprise automation, data and AI initiatives across global organizations.
A bestselling author and one of the UK’s foremost voices on AI strategy and intelligent automation, Gilmurray is known for translating complex technologies into measurable business outcomes.
His work focuses on helping companies move beyond experimentation to achieve true operational impact — balancing innovation, ethics and governance in equal measure.
In this exclusive interview with The AI Speakers Agency, Kieran Gilmurray breaks down why so many AI projects fail to scale, how leaders can bridge the gap between business and technology and what forward-thinking CIOs must do now to turn artificial intelligence from hype into hard results.
1. Many organizations invest heavily in AI, yet few achieve meaningful ROI. From your experience, what are the biggest reasons AI projects fail to scale?
KG: Most AI projects struggle not because of the technology, but because of fear, ignorance or poor execution. The biggest issue is that many companies start with AI for AI’s sake. They get caught up in proof-of-concept pilots (or what I call proofs-of-cost) with no clear business problem to solve, benefit to gain or measurable success criteria by which to track or judge an AI program.
Yet without strong alignment to the business’s goals, the AI project never leaves the lab.
Poor-quality data, a lack of governance or training, limited or no executive support, the absence of a change management program, failure to engage legal, risk or compliance from the start and minimal collaboration between business and IT all make matters worse.
The list of reasons why AI programs fail is well known. There are few surprises. Yet I continue to be amazed at how many organizations rush into AI out of FOMO, 'do AI' for its own sake or ignore the basics, and then get no value from their AI programs.
To scale successfully, AI needs the same rigor as any other strategic initiative: a clear vision, a business owner who’s accountable for results (ideally an executive), high-quality data and the right success metrics.
Start small, deliver value fast and then scale from proven impact to build trust and momentum.
2. How can enterprises move from experimentation to operationalization in intelligent automation and machine learning?
KG: The journey from pilot to production is where most organizations stall. They spend too long in experiment mode, chasing novelty instead of measurable and impactful business value.
To move forward, enterprises need to adopt a structured approach: Pick one or two high-impact business use cases that are achievable within an acceptable time frame, define success metrics and move them quickly into production to build confidence and digital muscle.
Create a cross-functional delivery team that combines technical expertise with business insight. Make sure you’re designing for scalability from day one. Factor in things like reusable models and code, standardized data pipelines and robust governance frameworks.
Operationalization is about discipline: Repeatable processes, continuous improvement, a business case that stacks up and clear ownership. When teams focus on value creation instead of endless testing or wasteful non-value-add shiny AI objects, AI stops being a distraction and starts becoming a part of how the business runs every day.
3. You often talk about “bridging the gap between business and technology.” What practices help leaders align AI strategy with real business outcomes?
KG: Bridging that gap means getting everyone to speak the same language. Too often, technical teams talk in terms of models and algorithms, while business leaders think in terms of customers and outcomes. Successful organizations bring these two worlds together.
Start by anchoring every AI project to a tangible business problem — something that affects customers, employees, revenue or efficiency. Build cross-functional teams that include both data scientists and business owners. Make collaboration routine, not an afterthought.
Leaders should also focus on storytelling, explaining how AI supports customers and the business strategy in clear, practical terms that everyone understands. When people see AI as a driver of business performance rather than a technical experiment, adoption and investment follow naturally.
As an aside, I do wish the business took more time to learn about IT, and that everyone in the IT team learned to read a P&L. There is more than one language in business, but unless each tribe takes the time to understand the other, confusion, annoyance, resentment and waste ensue. Today, businesses don't have the luxury of waiting for internal teams to fight it out.
4. Data quality, privacy and governance remain persistent obstacles to effective AI. What frameworks or cultural shifts make the biggest difference?
KG: Technology alone won’t fix poor data practices. What makes the biggest difference in winning businesses is culture. By that, I mean treating data as a strategic asset rather than a technical by-product, an afterthought or a never-thought. That means everyone in the organization, not just IT, takes responsibility for data ownership, quality, transparency, security and privacy.
Ask the right data questions first:
- What data do I have?
- What data do we have?
- How complete is it?
- Is it captured in the right form to be used by the AI?
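For teams that want to put numbers behind the "how complete is it?" question, a quick completeness profile is a reasonable first step. The sketch below is purely illustrative (the record fields are hypothetical, and real pipelines would typically use a data-quality tool or a dataframe library); it simply reports the fraction of non-missing values per field.

```python
# Hypothetical example: a quick completeness check over tabular records,
# represented here as a list of dicts (one dict per row).
records = [
    {"customer_id": 1, "email": "a@example.com", "region": "EMEA"},
    {"customer_id": 2, "email": None, "region": "APAC"},
    {"customer_id": 3, "email": "c@example.com", "region": None},
]

def completeness(rows):
    """Return the fraction of non-missing values for each field."""
    fields = {field for row in rows for field in row}
    return {
        field: sum(row.get(field) is not None for row in rows) / len(rows)
        for field in sorted(fields)
    }

print(completeness(records))
```

A profile like this won't tell you whether the data is fit for a given AI use case, but it quickly surfaces which fields are too sparse to trust before a model ever sees them.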
Adopt proven frameworks such as DAMA or align your practices with global standards like the OECD AI Principles. These provide a strong foundation for managing data ethically and effectively.
Culturally, encourage collaboration between teams that typically don’t speak the same language or simply don’t talk — legal, IT, compliance and business operations. When they work together to maintain clean, secure and trusted data, AI projects are far more likely to succeed. Good governance doesn’t slow progress; it makes it sustainable.
5. As generative AI becomes more accessible, how should CIOs and CTOs decide which use cases to pursue and which to avoid?
KG: The temptation is to chase everything that looks shiny or new — today, that is anything generative AI-related.
But the most successful business leaders start by asking one question: 'What business problem are we solving?'
They then prioritize use cases that deliver measurable value, improve efficiency or create new customer experiences.
They avoid projects that are high-risk or poorly defined, lack a valid business case or depend on data they don’t control. In addition, they are particularly cautious about applications that touch sensitive information or where bias could have real consequences.
CIOs and CTOs should also resist the urge to deploy generative AI everywhere. It’s not a panacea nor a silver bullet. Focus on use cases that augment human intelligence, simplify complex processes or drive meaningful insight or automation. Pilot carefully, measure impact and scale only what works in practice.
6. There’s growing concern about AI ethics and regulatory compliance. How can organizations balance innovation with responsible AI deployment?
KG: Responsible AI starts with governance, not afterthoughts. Build ethical guidelines into every stage of the AI lifecycle all the way from data collection to model deployment. Make accountability and transparency part of your design principles.
Balancing innovation with responsibility doesn’t mean slowing down; it means building trust.
Employees and customers are far more likely to adopt and advocate for technologies they understand and trust.
Form ethics committees, encourage open discussion about risks and invest in responsible AI training so teams can identify potential ethical concerns early. Create a mechanism through which those teams’ ethical queries can be answered rapidly and definitively.
If you build AI programs with integrity from day one, you can move faster later because you’re not constantly firefighting issues that, in hindsight, could easily have been avoided.
7. Automation is frequently seen as a cost-cutting tool. In your view, what does ‘intelligent automation’ truly mean for workforce transformation and productivity?
KG: Intelligent automation isn’t about replacing people; it’s about elevating them. It removes the repetitive, low-value tasks that drain time and energy, freeing employees to focus on creativity, strategy and customer relationships — the work that adds real value.
The real productivity gain comes from combining automation with human judgment. When AI handles the routine and humans focus on innovation, the organization becomes faster, smarter and more adaptable.
To make that happen, businesses must invest in reskilling their workforce.
Leaders must also show employees that automation is a partner, not a threat, in both word and deed. When people feel empowered rather than replaced, productivity and morale soar and the business benefits multiply 10x.
8. What role do cloud platforms and open-source ecosystems play in accelerating enterprise AI adoption today?
KG: Cloud and open-source ecosystems are leveling the playing field. Cloud platforms give enterprises scalable, flexible infrastructure without heavy upfront investment. They make it easy to experiment, deploy and scale AI solutions quickly.
Meanwhile, open-source ecosystems foster collaboration and innovation. They allow teams to build on existing models and accelerate time to value. Together, they remove much of the friction that previously slowed enterprise AI adoption.
For most organizations, this combination is the secret to speed. The goal isn’t to build everything from scratch; it’s to integrate, adapt and deliver value faster than ever before.
9. How can IT leaders foster collaboration between data scientists, engineers and business units to ensure AI solutions deliver sustained value?
KG: Collaboration begins with shared ownership. Data scientists and engineers bring technical skill; business teams bring context and purpose. Aligning these two perspectives ensures that AI delivers measurable, long-term business value.
Create integrated, cross-functional teams that operate around clear objectives, not silos.
Encourage frequent communication, shared metrics and transparent reporting. When business users are part of the design process, adoption happens naturally because they trust the outcome.
Leadership plays a key role here. Set the tone by rewarding collaboration and problem-solving over technical perfection.
The goal isn’t to build the most complex model; it’s to build the most useful one.
10. Finally, what emerging trends in AI and automation excite you most for the next five years — and how should enterprises prepare for them?
KG: The rise of agentic AI — systems that can reason, act autonomously and collaborate with other agents — is the next 10x leap forward for business. These systems will fundamentally change how businesses operate by automating business decisions, not just tasks.
Equally exciting is the convergence of AI with other frontier technologies like quantum computing, edge AI and digital twins. Together, they’ll redefine what’s possible in terms of speed, insight, innovation, productivity, personalization and business agility.
Enterprises should start preparing now by investing in strong governance, ethical AI frameworks, data quality and workforce skill readiness. The future belongs to organizations that combine trust, transparency and innovation, not just those chasing the latest shiny trend because of FOMO.
This article is published as part of the Foundry Expert Contributor Network.