Trust has broken down faster than we can build it.
Yet executives still believe they have it — 90% say customers trust their companies. Only 30% of customers agree. The gap between how trusted we think we are and how trusted we actually are has never been wider.
That gap has always mattered. But in the age of AI, when we’re driving the most powerful technology shift of our time, it’s make-or-break. We’re asking people to hand over data and decision-making power to systems they don’t fully understand or control. And uncertainty is where adoption lives or dies.
AI learns from participation — and participation only happens when people trust the system. Without trust, there’s no data. Without data, there’s no learning or rapid innovation.
As a chief trust officer, I see this tension every day. Leaders across the business, especially CIOs, are under pressure to accelerate AI strategies while also protecting data and minimizing risk. The only sustainable way to do both is to ensure systems are trustworthy by design. Earning trust isn’t a “nice to have,” but rather the crucial enabler of AI innovation and growth.
When we build it deliberately, trust becomes a self-reinforcing loop. It fuels better data, better AI and better business outcomes. We’ve got to ace the trust equation.
I often remind my colleagues that we earn trust in droplets and lose it in buckets. Droplets show up in small, candid moments: being clear about what a system can or can’t do, setting realistic expectations and remaining transparent even when it’s uncomfortable. They accumulate over time. Buckets are the incidents that shake confidence. In those moments, our response matters even more. Clarity, accountability and steadiness under pressure determine whether the trust curve continues upward or breaks.
In an AI-driven world, the droplets matter. They create the runway for the innovation and impact we’re aiming for.
Let’s take a look at how we do that.
Make the mindset shift about trust and technology
Every wave of new technology brings a measure of anxiety. AI just makes it harder to ignore.
We’ve seen the headlines: systems that hallucinate, make untraceable decisions or use data in ways that raise serious questions. Every week, CIOs and their teams give me a version of the same concern: Can we trust it yet? Boards are pushing them to move fast, but they don’t want to stake their credibility on something if it doesn’t feel solid.
That hesitation is reflected in the data, too. In a Stanford study of 1,500 U.S. workers, 45% expressed doubts about AI accuracy and reliability, a clear sign that trust is still shaky.
I understand that hesitation. It’s also why my role exists.
Trust used to be something we repaired after it was broken — after a breach, an outage or a headline. Now, it’s something we build continuously. My job isn’t to slow innovation; it’s to accelerate it by ensuring it can withstand market scrutiny.
That work starts early. I spend a lot of time with product and engineering teams understanding how data is handled, where humans stay in the loop and how we explain those choices to customers. Communication is as critical as code. I bridge customer expectations with our security practices and product innovations, translating technical choices into something customers can understand and evaluate — something they can trust.
Take data privacy. When a customer raises a question, I pull engineering in to break it down: What’s the real risk? What’s the right fix? How do we communicate it clearly? Our new redaction capability came directly from one of those loops. We listened to the issue, worked through it internally and turned the feedback into a feature customers can rely on.
Because if we can’t clearly show how we protect data and how the system learns, we can’t expect people to trust it. That’s trust-building in practice and it marks an evolution. Trust used to sit with compliance. Now it belongs in strategy.
When we lead with that mindset, transparency becomes our advantage. The clearer we are about how our systems work, the faster people are willing to adopt them. That’s how usage grows and innovation compounds. Trust turns from principle to performance, and momentum builds.
Consider a cloud platform that stores everything from your address to your employment history. You may never have evaluated whether you trust it — your employer made that call long before you knew your information would live there. Yet you live with the consequences. This is the nature of trust today: less about formal metrics, more about the quiet confidence people inherit without ever opting in.
That’s why I try to approach trust with professional empathy — putting myself in the shoes of customers and end users, asking: What would I need to feel my information is safe? Trust is inherently human. Earning it requires committing to that perspective every day.
Create the trust flywheel
We’ve already seen that when customers believe their data will be used responsibly, they engage more deeply. They use AI’s full power across our system, share richer signals and give us feedback that sharpens performance. Better outcomes build confidence, and that confidence drives even greater participation.
This is the flywheel:
Trust → participation → performance → credibility → more trust
It moves slowly at first, then faster with each turn.
The reverse is also true. If confidence falters because results can’t be explained or communication is inconsistent, hesitation creeps in. Data dries up, adoption dips and performance suffers. The flywheel slows.
This is why trust-building isn’t merely technical; it’s relational.
One thing I’ve learned is that trust rarely shows up in neat early indicators. It often reveals itself in hindsight. We all know when trust is lost because we feel those buckets spilling over. It’s harder to gauge whether we’re on the right path day to day. So I pay close attention to the subtle signals: when customers lean into tough conversations instead of pulling back, when the questions shift from “Can we trust this?” to “How fast can we roll it out?” Those moments tell me the droplets are adding up, even if we lose a bucket or two along the way.
Every explanation, every disclosure, every honest conversation about what AI can and can’t do adds weight to the positive side of the loop. When we embrace that, we stop waiting for perfection. We move with clarity and make it easier for people to believe in us and what we’re building.
Lead like trust is your advantage
The next decade of AI won’t only be defined by who builds the smartest technology, but by who earns the deepest trust. AI will keep evolving faster than regulation, and faster than most organizations are used to.
Every time someone chooses to deploy our platform and share their data, they’re making a judgment about us. Trust can’t live in policies or product releases alone. It has to show up in how we make decisions, how we talk to customers and how we hold each other accountable.
For CIOs, this represents a real shift: treat credibility as a growth lever, not a guardrail. When we adopt that mindset, everything changes. Teams move with confidence. Customers lean in faster. Innovation compounds.
Trust is the runway for AI to take off.
This article is published as part of the Foundry Expert Contributor Network.