Enterprise AI initiatives are producing uneven results as organizations struggle to convert widespread experimentation into focused, repeatable business outcomes. This was the throughline during the CIO 100 Leadership Live Los Angeles conference on April 16, at the Torrance Marriott Redondo Beach.
A consensus emerged around key constraints limiting enterprise AI’s contribution to transformation objectives. Among them are misalignment between AI initiatives and operating models, fragmentation and confusion around business ownership, the need for dynamic data governance, and, perhaps most importantly, the need for a corporate culture that allows the human element to keep pace with the rate of change in today’s agentic economy.
Leadership moments define transformational outcomes
Keynote speaker and serial entrepreneur Chris Dyer kicked off the conference with a key insight about IT leadership today: Execution is shaped by how leaders act in those few defining moments when priorities, risk tolerance, and accountability are tested.
“You’ll remember less than 1% of this year,” he said. “But that fraction will define how your team sees you, whether your best people stay, and whether your biggest initiatives gain traction or quietly stall.”
Leadership consistency establishes the conditions for adoption and trust, but teams learn the most about a leader in moments of difficulty and crisis, he said. Rapidly changing business conditions often produce competing initiatives and shifting priorities that weaken execution discipline and dilute operational impact.
Scaling AI requires enterprise-wide structural change
The conversation on culture continued during a session that featured three executives from PwC US: Danielle Phaneuf, Alok Mirchandani, and Roshini Rajan.
According to Mirchandani, most AI efforts remain confined to isolated use cases that do not scale into coordinated execution. “The struggle is not the technology,” he said. “It’s how you move off that use case mentality and actually drive scale and adoption.”
When implemented successfully, AI systems cut across workflows, requiring coordination between business context, data, and execution. That coordination does not sit within a single discipline. It requires individuals who understand the end-to-end process and can direct how AI is applied within it.
Moreover, depth in a single domain is no longer sufficient when outcomes depend on how multiple systems and processes interact. Because of this, broader roles are beginning to emerge to connect those elements in real time, particularly as AI moves into production, where decisions and actions must align across the organization.
“The nature of work is fundamentally changing,” said Phaneuf. “It’s not about mastering a single discipline anymore. It’s about understanding the big picture and orchestrating the right outcomes.” Hiring models, training approaches, and team structures are adjusting to reflect that shift.
To that end, performance measurement is aligning with output rather than activity. “It’s no longer about effort,” Rajan said. “It’s about the velocity and throughput you’re driving.”
The evolving economics of AI PCs
Enterprise AI deployment is forcing IT leaders to rethink where workloads reside. The discussion largely centers on cloud versus on-prem, with security concerns around intellectual property, data, and personal information driving renewed interest in private environments.
Device manufacturers are positioning the edge as a third option, introducing new performance and cost dynamics. Charles Thomas, HP’s North American AI channel business manager, discussed the edge versus cloud decision in those terms.
The underlying math is not difficult to follow, he said. Organizations that route every AI task through centralized on-prem or cloud infrastructure face compounding costs as adoption scales.
Local processing, enabled by neural processing units (NPUs), offloads a meaningful share of those workloads before they hit the network. Unlike a CPU, which handles general computing tasks, or a GPU, which excels at graphics and parallel processing, NPUs are purpose-built to accelerate AI and machine learning operations, Thomas said.
The performance baseline for edge AI processing is moving faster than most enterprise procurement cycles can track. Today, NPUs are effectively the market floor, with current commercial devices reaching platform-level AI throughput as high as 180 trillion operations per second (TOPS) when dedicated NPU and integrated graphics acceleration are combined. As a result, the raw compute argument for moving workloads to the device is no longer speculative.
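The cost trade-off Thomas describes can be sketched with back-of-envelope arithmetic. All figures below are illustrative assumptions for the sake of the calculation, not numbers cited at the session:

```python
# Illustrative break-even sketch for edge vs. cloud inference.
# Every constant here is a hypothetical assumption, not conference data.

CLOUD_COST_PER_QUERY = 0.01     # assumed cloud inference cost per query, USD
QUERIES_PER_USER_PER_DAY = 100  # assumed AI interactions per employee per day
NPU_DEVICE_PREMIUM = 150.0      # assumed extra cost of an NPU-equipped PC, USD

def days_to_break_even(offload_share: float) -> float:
    """Days until the device premium is repaid by queries kept off the cloud."""
    daily_saving = QUERIES_PER_USER_PER_DAY * offload_share * CLOUD_COST_PER_QUERY
    return NPU_DEVICE_PREMIUM / daily_saving

# If the NPU absorbs half of a user's queries locally:
print(f"{days_to_break_even(0.5):.0f} days")  # → 300 days
```

Under these assumptions the device premium pays for itself in under a year per employee, and the saving compounds as adoption scales, which is the shape of the argument device makers are advancing.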
Transformation requires business ownership and customer proximity
Transformation outcomes reflect how well organizations align ownership, process design, and execution across business functions. When technology exposes gaps in coordination, it may say more about leadership, vision, and execution than it does about how well new and legacy technologies coexist.
Session moderator James Rinaldi, executive director at UCLA and former CIO at NASA’s Jet Propulsion Laboratory, framed the challenge in terms of scale and complexity, noting that transformation efforts, by definition, span multiple functions with competing priorities and shared dependencies.
His panelists agreed.
“These are not IT projects. They’re business projects with people,” said Keith Golden, former CIO at RGP. “You’ve got to bring the business into these conversations every step of the way.”
Systems behave according to the decisions organizations make about workflows and priorities. Anthony Moses, former CIO and global chief strategy and innovation officer at Yamaha Motor Finance, emphasized the executive responsibility that comes with implementation.
“Everything is like a blank Excel sheet when you buy a platform,” he said. “In reality, you own what the system will do.”
Purchasing a platform is about acquiring potential rather than capability. Translating one into the other requires people who understand both the technology and the operational context it is meant to serve. It is the organizational equivalent of a craftsperson who knows not just how to use a tool but how to use it to achieve a desired future state.
That distinction matters because the vision an IT leader brings to deployment rarely arrives intact at the employee level. A solution shaped around a specific outcome only delivers that outcome if the workforce it is designed to serve understands the intent, internalizes the logic, and has the capacity to work within it effectively.
AI value emerges in targeted use cases
The question of where AI is delivering on its promise moved from structural to operational during a panel focused on the business case for AI. The session drew on perspectives from three practitioners navigating that question in real time: Bhupesh Arora, now at South Jersey Industries; Lucy Avetisyan, CIO at UCLA; and Feroz Merchhiya, CIO and CISO for the City of Santa Monica.
Their collective assessment reflected a broader tension: AI is producing results in well-defined contexts, while organizations struggle to see across their own initiatives clearly enough to scale what is working.
“We have everything from pilots to production,” said Arora. “Some have a true business case.”
Customer service operations, as an example, are producing tangible results, with automation reducing call volumes and improving response times during peak demand while limiting the need for additional hiring. “There’s a hard dollar value to it,” he said.
Arora also pointed to a more ambitious application within the company’s wholesale natural gas trading operation, where manual Excel-based workflows currently support buying, selling, and scheduling decisions across roughly $70 million in annual business activity. The objective, he said, is to automate that process end to end, from generating trading suggestions to executing trade scheduling.
“It’s real business automation with AI,” he said. The effort moves the technology from back-office efficiency to core revenue-generating operations, where the potential returns are considerably higher.
Merchhiya agreed that efficiency alone should not define a sound AI business case. “Not every business case is about cost saving,” he said. “Not every business case is about bottom-line delivery. Each one has its own unique lens that you have to apply.”
It’s an important perspective for a city government navigating fiscal pressure while managing everything from public safety infrastructure to transportation networks.
For Avetisyan, AI adoption at an institution like UCLA requires thinking well beyond functional improvement. Workforce readiness, student preparation, governance, and ethical use all carry their own accountability structures and timelines for measuring return.
“If we’re going to just take and put AI on existing processes, existing work, that’s the worst thing we could do,” she said. Instead, business processes must be rethought to make the most of AI.
AI adoption constrained by trust in data
The relation of AI performance to the quality and reliability of the underlying data was the central theme of a panel that brought together Ilker Taskaya, field CTO at Perforce Software; Marivi Stuchinsky, VP of software engineering at Experian; and Chris Fodera, senior IT director at Qualcomm.
Their collective experience across financial services, semiconductor manufacturing, and enterprise data protection illustrated how differently the data problem presents itself depending on context.
“Without it, you’re just guessing,” Fodera said.
The observation carries particular weight given his background managing data across Qualcomm’s automotive and industrial IoT divisions. In his first two years supporting the automotive unit alone, the volume of sensor data exceeded everything Qualcomm had accumulated in the company’s prior 38 years.
The challenge, though, is to understand which data to trust in which context. Qualcomm’s autonomous driving models trained on San Diego road data perform reliably in Los Angeles. They do not perform reliably in India, where road infrastructure differs fundamentally.
“If you dirty down your model with data that doesn’t belong in it, it’s going to key off things you don’t want it to,” Fodera said.
Stuchinsky described Experian’s approach as drawing a deliberate boundary between consumer-facing data, which does not flow into AI pipelines, and internal operational data, which does. Rather than cleaning petabytes of legacy data before deployment, her team uses a vectorized database architecture that validates data as it is needed.
“This way, when AI points to it, it’s already clean,” she said.
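The validate-on-access pattern Stuchinsky describes can be sketched roughly as follows. The record schema, the validation checks, and the store wrapper here are hypothetical illustrations of the general approach, not Experian's implementation:

```python
# Sketch of a validate-on-read pipeline: records are checked only when an
# AI workload requests them, rather than in a bulk upfront cleanup.
# The schema and checks below are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Record:
    id: str
    text: str
    source: str  # "internal" records are AI-eligible; "consumer" are not

def is_clean(rec: Record) -> bool:
    """Minimal validation applied at access time, not at ingest."""
    return bool(rec.text.strip()) and rec.source == "internal"

class ValidatedStore:
    """Wraps raw storage so AI pipelines only ever see validated records."""
    def __init__(self, raw_records):
        self._raw = {r.id: r for r in raw_records}

    def fetch_for_ai(self, rec_id: str):
        rec = self._raw.get(rec_id)
        return rec if rec and is_clean(rec) else None

store = ValidatedStore([
    Record("a1", "ops log entry", "internal"),
    Record("c9", "credit file", "consumer"),  # boundary: never reaches AI
])
print(store.fetch_for_ai("a1").id)  # internal data passes validation
print(store.fetch_for_ai("c9"))     # consumer data is filtered out: None
```

The design choice is that validation cost is paid lazily, per request, so petabytes of legacy data never need a one-time cleanup pass before AI deployment can begin.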
Taskaya contextualized the broader stakes from the vendor side. AI has expanded the attack surface and raised the cost of exposure, while inverting the relative value of data and the applications sitting on top of it. As software commoditizes, data becomes the differentiating asset. As a result, protecting and governing data moves from being a compliance function to a source of competitive differentiation.
Investment shifts toward focused AI applications
A venture capital panel moderated by Julie Bort, editor at TechCrunch, brought a market-level perspective to the day’s recurring question about where AI is delivering value. The panel featured Chiraag Deora, principal at Greycroft; Maddi Holman, co-founder and general partner at Daring Ventures; Rob Smith, partner at M13; and Kesar Varma, partner at Upfront.
The session started with a provocative, even skeptical, opening question from Bort, who challenged the panel by characterizing LLMs as “the worst intern I’ve ever hired” and questioning whether AI would ever deliver on the transformational promises Silicon Valley has attached to it.
“AI is not making things better, yet,” Smith said. “At this point it’s about making them faster and more efficient.”
VC investment today, all agreed, is concentrating around defined use cases where outcomes can be measured and scaled. Healthcare revenue cycle management, data security, pharma sales compliance, and government services automation drew attention as areas where structured workflows and clear performance metrics are allowing AI to find traction.
Still, durable competitive advantage requires either owning proprietary data or controlling distribution, Holman said.
“You can’t just be displaying data for someone that could have it themselves,” she said. Smaller, purpose-built models drew consensus as the more defensible next wave, particularly in regulated industries where hallucination risk carries real consequences and institutional knowledge that walks out the door with departing employees can now be encoded and retained.
On risk posture, Smith’s advice reframed how CIOs might approach vendor evaluation.
“Invest in the people building the software, not the software itself,” he said. “The software will become obsolete in 18 months. If you back the right team with the right vision, they will continuously adapt.”
For CIOs, the panel’s experience translates into three posture shifts. On risk assessment, start with the team, not the technology, and ask founders directly about runway and five-year vision.
When it comes to vendor selection, they recommended mapping procurement requirements before a pilot begins so both sides understand what a path to contract looks like. On roadmap planning, treat early-stage engagement as a design partnership rather than a vendor evaluation.
In the final analysis, they all agreed, the risk of inaction is no longer smaller than the risk of engagement.

