When we hosted our first-ever IT leadership conference, FUSION ’25, I expected to spend most of my time thinking about AI. Instead, I found myself thinking about people.
Moderating a panel of CIOs that day changed the way I looked at tech leadership.
It wasn’t a conversation about software or infrastructure or even innovation. It was about what happens when machines start learning and how we, as CIOs, must learn alongside them.
Because here’s the thing: AI doesn’t just upgrade your systems. It rewires your culture, your expectations, your patience and your definition of progress.
The real transformation isn’t happening inside the algorithms; it’s happening inside us.
Here are seven lessons I took away from that room, not as a technologist but as a student of leadership.
1. Clarity is the new speed
Every organization today is under pressure to move fast on AI. But as Chad Ghosn, CIO of Ammex Corporation, reminded us, “Speed without clarity just creates noise.”
The teams that move with confidence aren’t the ones automating the most, but the ones that can clearly articulate why something matters.
Before every AI experiment, they ask:
- What decision does this help someone make faster?
- What problem does it actually solve?
- How will we measure if it’s working?
It turns out that clarity is the real form of speed. Because when everyone knows why they’re running, no one needs to be told how fast to go.
2. Culture learns slower than code
Every technical leap has a human half-life — the time it takes for people to catch up emotionally and behaviorally.
Chad shared how his team held open “AI office hours,” not to show off tools, but to help people talk through what they were afraid of. It wasn’t training; it was trust-building.
That struck me deeply. We like to talk about digital transformation as a technical project, but it’s actually a cultural one. Because no matter how good your model is, belief can’t be deployed; it has to be earned.
3. Trust takes repetition, not rhetoric
“AI handles 98% of the task, but I still like a person to press ‘Send,’” Chad joked.
That last 2% (the human check) is where confidence lives. It’s not inefficiency; it’s assurance.
You can’t tell people to trust the system. You have to let them watch it earn their trust, one accurate response at a time.
That’s leadership in this new world: managing the space between almost right and certain.
4. Data is everyone’s responsibility
When we talk about AI, we often forget that intelligence is only as good as the information it learns from.
Venki Rangachari, CDO at HPE Networking, shared how his teams appoint data stewards, people accountable for the quality of their datasets, across departments, not just for the infrastructure those datasets sit in.
Mark Gill, Senior Director of IT at Zuora, echoed this, pointing out that data can’t live in isolation. The minute you make it someone else’s problem, it becomes everyone’s bottleneck.
A recent Gartner study also found that 34% of IT leaders in low-AI-maturity organizations cite data availability and quality among their top hurdles in implementing AI.
Clearly, AI doesn’t fail because it’s complex; it fails because it’s missing context.
Good data is less about accuracy and more about agreement. When everyone defines truth the same way, machines can finally learn something meaningful.
5. Guardrails are the architecture of trust
Innovation needs freedom, but freedom needs boundaries.
Venki spoke about how HPE developed an internal framework called “ChatHPE” to ensure that AI runs within trusted environments, alongside an ethics committee that reviews new use cases.
It reminded me that guardrails aren’t constraints. They’re proof that we take innovation seriously.
In leadership, too, the goal isn’t to remove friction; it’s to define where it should exist.
6. The CIO’s job description just changed
As Mark Gill put it, “My job used to be just managing systems. Now, I’m managing systems with reasoning.”
That line has lived rent-free in my head ever since.
The modern CIO isn’t just the keeper of infrastructure; they’re the interpreter of intelligence.
They ensure that not just the systems but also the decisions are connected.
In many ways, they’ve become the organization’s conscience, the ones who must decide not just what AI can do, but what it should.
7. Experience is the only metric that matters
For all the dashboards and KPIs, the conversation kept circling back to one thing: employee experience.
Patrick Young, Senior Director of IT at Skydio, described how his team deployed AI agents as the first line of support. The result wasn’t just faster resolutions — it was calmer, more confident employees.
“When systems quietly work,” he said, “people feel seen.”
That line captured the heart of it for me.
AI shouldn’t feel like a replacement for people. It should feel like a quiet, invisible partner that gives them time, focus and clarity back.
As the session ended, I realized that AI isn’t teaching machines to think; it’s teaching leaders to rethink.
What we’re really learning is how to lead in uncertainty, how to slow down before we speed up, how to ask sharper questions before demanding instant answers and how to lead with empathy before leading with problems.
Leaders who chase speed without clarity will find themselves buried in chaos. Those who ignore people will end up with tools no one wants. And those who confuse efficiency with progress will miss the point entirely: the purpose of intelligence, artificial or otherwise, is to make us more human, not less.
As I think about the road ahead, I envision a new kind of organization that balances purpose and innovation with introspection. A place where machines handle the repetition and people reclaim the reflection.
Technology may learn faster, but wisdom is still a human pursuit.
When machines learn, so must we.
This article is published as part of the Foundry Expert Contributor Network.