When I first began advising organizations on AI implementation and adoption, I noticed a concerning trend: Organizational leaders were fixated on the hype cycle, yet lacked a clear understanding of why AI mattered to their business or where it could have an impact. Boards and leadership teams asked their executives responsible for data superficial questions, such as "What are we doing with AI?" with no connection or alignment to company strategy or goals.
But behind the C-suite and boardroom questions was a more fundamental disconnect. AI efforts weren’t grounded in business priorities. And worse, they weren’t connected to the people expected to enable or adopt them.
In one large enterprise, I witnessed firsthand how disjointed communication about AI led to employee disillusionment. Leadership poured millions into automation technologies without aligning initiatives to job design, reskilling paths or incentives, and that same disjointed internal messaging left employees demoralized and unmotivated to support or enable the data and AI transformation. In "3 barriers to AI adoption," Gartner describes the employee experience as a "fear of the unknown." The real problem is the friction between people, processes and systems that is consistently left unaddressed.
This friction can be observed in:
- An increased willingness among leaders to invest millions in technology upgrades, despite ambiguity about their purpose
- A decreased willingness to invest in, and even active divestment from, upskilling and changing legacy behaviors
Ask a manager to approve a conference or a paid course to build the skills an AI-enabled workforce requires, and suddenly there's no budget.
This selective willingness to invest in technology but not in the workforce sends a loud, clear message to employees. Yet, as Gartner notes, the first barrier organizations face in attempting AI transformation is a lack of the skills needed to drive it successfully. So why are we surprised that AI "isn't delivering?" The truth is you can't succeed without your people, and that requires T.R.U.S.T.:
- Transparency. Is data openly accessible, clearly defined and easy to challenge?
- Relationships. Are cross-functional teams collaborating…or competing for control?
- Understanding. Do your people have the literacy and support they need to feel confident using data?
- Safety. Can employees ask questions, surface risks or say “I don’t know” without fear?
- Tone from the top. Do leaders model transparency and back the change with training, intentional change management and incentives to adopt it?
AI resistance isn’t technical, it’s tribal
Every time a headline drops about AI taking jobs, a CDAI or CIO somewhere dreads the conversations that follow. What I’ve seen across industries is that resistance to AI isn’t about the algorithms. It’s about power, protection and identity. For example, a client introduced a language model to help their compliance team reduce manual review. The tech worked, but employees pushed back hard. Why? Because no one had clarified how their work would evolve, only that it would “change.” McKinsey makes the following statement about how data leaders can help employees overcome their fear of the unknown:
“Senior leaders could counter employees’ prevailing fears of ‘replacement and loss’ with messaging about gen AI’s potential for ‘augmentation and improvement’ and its ability to significantly enhance the employee experience.”
When employees believe their role is threatened, they hoard knowledge and resist or reject process changes. Failing to address those concerns forfeits opportunities to engage, collaborate and collectively realize value from embracing AI.
Employees aren’t resisting AI because they don’t understand the technology; they resist because they fear being made irrelevant. Without psychological safety, AI adoption becomes a power struggle. And when that fear festers, teams lose the very collaboration and curiosity that makes innovation possible. Without a clear story, friction takes over, initiatives fail and organizations lose time, money, morale and productivity.
We have to build incentive structures that reward frictionless behavior: knowledge sharing, data sharing, cross-functional alignment, admitting uncertainty and testing fast. That's a cultural retrofit, not a technical one.
Design for AI by starting with structure, not software
The reality is that many of your legacy constructs, including organizational structures and processes, will be impacted as you introduce AI into your organization. Large organizations, unlike AI-native startups, can’t take a lean-first approach because the strategic knowledge needed to invest smartly is embedded in the workforce, not just the executive suite. Designing for AI means doing the opposite of what most roadmaps suggest: it means starting with the organizational chart and business goals, not the model.
Why does this matter? In "AI will evolve into an organizational strategy for all," Wired's Ethan Mollick presents a compelling case that the future will bring:
“A surge in ‘AI-native’ startups that build their entire operational model around human-AI collaboration from day one. These companies will be characterized by small, highly skilled human teams working in concert with sophisticated AI systems to achieve outputs that rival those of much larger traditional organizations.”
In the same article, Mollick argues that, in contrast, large enterprises will derive value from AI transformation through workers and managers across departments who identify meaningful ways to use AI to enhance performance. This underscores employees' critical role in surfacing opportunities, shaping implementation and ensuring adoption. Unlike startups that are built lean by design, enterprises must first unlock and integrate the operational intelligence that already exists within the workforce, yet most AI strategies skip that step entirely.
Diagnose and dismantle the real barriers to scale
In a recent engagement with a multinational client, we conducted what I call an “AI friction audit.” We mapped the places where AI initiatives had failed to scale, and what we found wasn’t surprising, but it was telling. The greatest barriers weren’t technical. They were structural and cultural: political competition between departments, unclear decision rights, lack of consensus on value and zero shared incentives for collaboration. These weren’t isolated pain points; they were system-wide design flaws.
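A friction audit like this can stay decidedly low-tech, but even a simple tally makes the pattern visible. Below is a minimal Python sketch, assuming a hypothetical list of stalled initiatives tagged with the primary barrier surfaced during interviews (all project names and barrier labels are illustrative, not from the engagement described above):

```python
from collections import Counter

# Hypothetical audit data: each stalled AI initiative is tagged with the
# primary barrier uncovered during stakeholder interviews.
stalled_initiatives = [
    ("demand-forecasting pilot", "unclear decision rights"),
    ("compliance-review LLM", "political competition"),
    ("churn-model rollout", "no shared incentives"),
    ("document-automation pilot", "political competition"),
    ("dynamic-pricing model", "no consensus on value"),
]

# Tally barriers by category, not by project, to see where friction concentrates.
barrier_counts = Counter(barrier for _, barrier in stalled_initiatives)

for barrier, count in barrier_counts.most_common():
    print(f"{barrier}: {count}")
```

In practice the tags would come from structured interview notes or survey responses; the point is that counting barriers by category, rather than reviewing project post-mortems one at a time, surfaces the system-wide design flaws leadership needs to see.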
The resulting conversations helped the leadership team understand what their roadmap had overlooked: that AI changes power dynamics, workflows and the very DNA of an organization. When your structures and incentives don't evolve with the technology, the implementation breaks under the weight of unresolved tensions. Strategies that ignore these embedded challenges, such as conflicted decision-making, misaligned priorities and functional silos, lack the foundational conditions required for success.
Yet many AI roadmaps still treat the org chart as fixed, decision-making as siloed, and value conflicts as someone else’s problem. Redesigning for AI means starting with the people and dismantling the legacy constructs that make collaboration optional rather than essential.
One of the biggest mistakes I see is designing AI roadmaps around the technology, then attempting to retrofit them into the business. That’s backwards. Joshi, Su, Austin and Sundaram described this dynamic in their article “Why so many data science projects fail to deliver” as the classic “hammer in search of a nail.” You can’t drive adoption through capability alone. You drive it through behavior. Cross-functional alignment, proactive data sharing, surfacing uncertainty early and rapid testing aren’t just tactics. They are behavioral signals of a healthy culture that’s ready to absorb change.
If your AI roadmap doesn’t start with people, it’s already off course
The uncomfortable truth is that many company cultures are barriers to AI adoption. The lack of investment in people, buy-in and alignment will remain an insurmountable friction point for organizations unwilling to confront the human side of transformation. Data leaders must stop seeing AI as a purely technical challenge and start leading like cultural architects, because the organizations that win with AI will be those that invest in behavior change and upskilling. That means sharing the vision early, involving your people in co-creation, upskilling for the future of work and rewarding the behaviors that make adoption possible, using the S.M.I.L.E. framework:
- Start AI roadmaps with a culture audit.
- Make behavioral metrics part of AI KPIs.
- Incentivize knowledge sharing, data sharing, cross-functional alignment, admitting uncertainty and fast testing across silos.
- Lead with change management to drive alignment, accelerate adoption and ensure lasting impact, rather than treating it as an afterthought.
- Emphasize AI as an enabler of team augmentation, not a source of disruption.
When all else fails, just S.M.I.L.E.
This article is published as part of the Foundry Expert Contributor Network.