A few years ago, the term AI agent was unknown to most CIOs. It only showed up in a handful of academic papers far removed from the realities of the business world. Today, AI agents are quickly becoming part of the enterprise fabric, forcing CIOs to adapt to a new model of work where humans exist alongside these autonomous systems.
And adoption of AI agents is only accelerating. According to Dynatrace’s Pulse of Agentic AI 2026 report, 26% of organizations already have 11 or more agent projects underway. While roughly half of these initiatives remain in the proof-of-concept or pilot stage, a growing number of companies have moved beyond experimentation into scaled deployment. AI agents are used primarily in IT operations and DevOps, with adoption also expanding in software engineering and customer support.
“Most agents today are large language models augmented with access to tools, systems, or data via APIs and controlled permissions,” says Thomas Serban von Davier, AI/ML research scientist at Carnegie Mellon’s Software Engineering Institute. “Effective adoption depends less on the technology itself and more on clear governance and access strategies developed in partnership with IT teams.”
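The pattern von Davier describes, an LLM reaching tools and data only through controlled permissions, can be sketched in a few lines. The sketch below is illustrative only; the agent roles and tool names are hypothetical, and a real deployment would enforce this at the API gateway or orchestration layer rather than in application code.

```python
# Illustrative sketch of permission-gated tool access for an LLM agent.
# Roles and tool names are hypothetical, not any vendor's API.

ALLOWED_TOOLS = {
    "support_agent": {"search_kb", "draft_reply"},
    "ops_agent": {"query_metrics", "restart_service"},
}

def call_tool(agent_role: str, tool: str, payload: dict) -> dict:
    """Route a tool call only if the agent's role permits it."""
    if tool not in ALLOWED_TOOLS.get(agent_role, set()):
        # Denied calls can be logged and surfaced for human review
        return {"status": "denied", "tool": tool}
    return {"status": "ok", "tool": tool, "payload": payload}

print(call_tool("support_agent", "restart_service", {}))        # denied
print(call_tool("ops_agent", "query_metrics", {"svc": "api"}))  # ok
```

The point of the allowlist is that governance lives outside the model: the agent can ask for anything, but only pre-approved role-tool pairs ever execute.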
In other words, CIOs are expected to take on far more responsibilities in the near future. “The role is evolving from system owner to workforce orchestrator,” adds Hrishikesh Pippadipally, CIO at accounting company Wiss. “CIOs will increasingly be responsible for designing hybrid teams of humans, agents, and vendors.”
While this road opens up opportunities, it also comes with a complex set of challenges. CIOs will have to be smart about when and how they use AI agents, how they supervise them, and how they balance autonomy with accountability as these systems take on a more active role in everyday work.
When is a task a good fit for an AI agent?
A year ago, enthusiasm pushed organizations to try agents anywhere automation sounded interesting. But as agents spread across the enterprise, CIOs have to become more strategic, focusing on the specific areas where the technology can deliver value.
“The current sweet spot for agentic AI is to automate repeatable processes that span multiple systems,” says Joe Locandro, global CIO at Rimini Street. By pulling data from these sources and automating manual steps, he says, agents can streamline workflows and reduce the effort required to complete routine tasks.
The repetitive work that requires a lot of enterprise context is where AI can shine, says Debo Dutta, CAIO at cloud computing company Nutanix. Agents, he adds, are also well suited for deeper research, where pulling together fragmented information can otherwise demand significant human effort.
When deciding whether to use AI agents or not, it helps to have clear, upfront criteria. Pippadipally typically looks at three things. “First, the task must be well-bounded with clear inputs, outputs, and success criteria,” he says. “Then the risk profile needs to be manageable — tasks that are advisory or preparatory are better fits than those involving final decisions or regulatory accountability.” Finally, he wants to know whether the task benefits more from speed, scale, or pattern recognition than from human intuition, judgment, or relationship context. “If a task still requires frequent exceptions, nuanced stakeholder judgment, or deep domain accountability, it stays human-led with AI support rather than being fully agent-driven,” he adds.
Usually, tasks that are a good fit for AI agents tend to share a few common traits. They involve high manual effort, carry low decision risk with AI supporting rather than replacing human judgment, deliver value across multiple people, teams, or processes, and rely on data that’s not highly sensitive.
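Pippadipally’s three criteria and the common traits above amount to a screening checklist, which could be expressed as simply as the sketch below. The criterion names, the all-or-nothing rule, and the example tasks are illustrative assumptions, not a formal methodology.

```python
# Illustrative agent-fit checklist based on the criteria described above.
# Criterion names and the example tasks are assumptions for demonstration.

CRITERIA = [
    "well_bounded",         # clear inputs, outputs, and success criteria
    "manageable_risk",      # advisory/preparatory, not final or regulated calls
    "scale_over_judgment",  # speed/scale matter more than human intuition
    "high_manual_effort",   # the task currently eats significant hours
    "low_data_sensitivity", # no highly sensitive data involved
]

def agent_fit(task: dict) -> bool:
    """A task is a candidate for an agent only if every criterion holds."""
    return all(task.get(c, False) for c in CRITERIA)

invoice_triage = {c: True for c in CRITERIA}                   # fits
audit_signoff = {**invoice_triage, "manageable_risk": False}   # stays human-led

print(agent_fit(invoice_triage))  # True
print(agent_fit(audit_signoff))   # False
```

A weighted score would be a natural refinement, but the strict all-criteria rule mirrors the article’s advice: a single failed criterion, such as regulatory accountability, keeps the task human-led.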
By experimenting, CIOs can understand how AI works and where its strengths are. “So if a task is about doing research or exploring alternative solutions, we usually start with AI,” says Anton Vodolazkyi, CTO at software company Obrio. “If that’s not enough, we step in manually.”
Measuring the productivity of AI agents
When assessing AI agent productivity, traditional IT metrics capture only part of the picture. Solely focusing on cost misses critical dimensions of value, such as how reliable outcomes are, and how much human capacity is freed up.
“Instead of measuring AI agents purely on cost savings, we look at a combination of cycle-time reduction, throughput, error rates, and capacity unlocked for higher-value work,” Pippadipally says. For example, if an agent reduces a task from hours to minutes and frees up skilled staff to focus on analysis or client-facing work, and provides accurate results, that’s meaningful ROI.
Only considering speed would be a mistake, because faster outcomes mean little if agents introduce errors, amplify risk, or create downstream rework for human teams. This is why accuracy and reliability should remain central to how organizations assess agent performance.
But measuring the productivity and ROI of AI agents isn’t straightforward since they function primarily as tools that empower humans, and there’s a blurred line between what the agent contributes and what the human ultimately delivers.
When it comes to time saved, a decent threshold would be around 50% per use case, says Max Stukalenko, head of IT at app developer MacPaw. “As these systems mature, we plan to introduce more structured metrics, including quality, adoption, and scalability indicators.”
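The blend of measures described above — cycle-time reduction, error rate, and capacity unlocked — could be tracked with something as simple as the scorecard sketch below. The field names and sample numbers are made up; only the 50% time-saved threshold comes from Stukalenko’s rule of thumb.

```python
# Illustrative agent-productivity scorecard. Inputs and field names are
# assumptions; the 50% time-saved threshold follows the rule of thumb above.

def scorecard(baseline_mins: float, agent_mins: float,
              tasks_done: int, errors: int) -> dict:
    """Summarize one agent use case over a reporting period."""
    time_saved = 1 - agent_mins / baseline_mins  # cycle-time reduction
    return {
        "time_saved_pct": round(time_saved * 100, 1),
        "error_rate_pct": round(100 * errors / tasks_done, 1),
        # Human hours freed up for higher-value work
        "hours_unlocked": round(tasks_done * (baseline_mins - agent_mins) / 60, 1),
        "meets_threshold": time_saved >= 0.5,  # ~50% saved per use case
    }

# A task cut from 90 minutes to 15, run 200 times with 6 errors
print(scorecard(baseline_mins=90, agent_mins=15, tasks_done=200, errors=6))
```

Reading the metrics together matters: an 83% cycle-time cut looks impressive in isolation, but a rising error rate would erase it through downstream rework, which is why accuracy stays in the same scorecard as speed.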
Oleksii Reshetniak, VP of IT and administration at software company Intellias, also cautions against declaring clear ROI too early in an AI agent project, noting that early performance can be misleading. “The first weeks are usually tuning and stabilization, so we treat early results as signal, not proof,” he says.
Main challenges for CIOs
As with any new technology, the introduction of AI agents brings a mix of promise and complexity, adding new layers to an already demanding CIO role. CIOs have to think about governance, risk, talent, and organizational change, and at the same time, they’re being confronted with a constant stream of new AI tools that promise competitive advantage.
“Given that technology and software are moving at such a fast pace, it’s a challenge for CIOs to select the correct platform and tools,” says Rimini Street’s Locandro. “Similarly, a lot of AI continues to be embedded in software solutions and, therefore, the challenge is to figure out what should be embedded versus custom AI.”
This proliferation of tools makes it harder to maintain architectural coherence and consistent oversight across the enterprise, increasing both complexity and risk. So one of the biggest challenges is keeping up with this flood of options and deciding between build and buy, Nutanix’s Dutta says.
Governance, accountability, and trust are other issues that come up when CIOs are asked about the challenges of deploying and supervising AI agents. “Defining clear boundaries for what agents are allowed to do, assigning ownership for outcomes, integrating agents into legacy architectures, and managing expectations versus actual capabilities all require careful attention,” says Teymuraz Bezhashvyly, CTO at hidden hint, a Swiss-engineered retail analytics company. “As a result, the CIO role is evolving toward stronger oversight of AI governance, risk management, and cross-functional alignment, rather than purely technology deployment.”
Even as AI begins to reshape day-to-day work, not everyone is prepared to change how they operate, and that resistance makes a CIO’s job even harder. Therefore, driving adoption often requires as much focus on people and culture as on technology itself.
“AI workshops, gamification approaches, and other similar activities work rather well, but it takes time and effort to change the way people do things, and encourage them to adopt these changes,” MacPaw’s Stukalenko says.
Hard lessons learned
Many organizations would’ve made slightly different choices if given the chance. The earliest agent deployments were often driven by curiosity or speed, and fundamental questions were left unanswered.
“Based on our prior experience, my recommendation is to plan well, especially for the different agent types, their goals, and ways to evaluate if the agent is doing its job,” Dutta says.
A recurring theme among CIOs is the value of starting small. Successful teams tend to introduce agents first in internal use cases, learn what works in practice, and only then expand their scope. Dutta also points to the importance of architectural flexibility. Designing enterprise systems to support multiple agent vendors, he says, helps avoid overreliance on a single provider and makes it easier to adapt as capabilities, pricing models, and regulatory requirements continue to evolve.
Beyond technology choices, organizations also learned the hard way that accountability can’t be an afterthought. “Clear ownership and governance need to be established early, and agents should be treated more like junior team members than autonomous replacements,” says Bezhashvyly.
Feedback loops and monitoring should also be implemented from day one, he adds, and someone must always own the final output regardless of how much work an agent has performed. That oversight is most effective when it’s shared and reviewed by the right mix of technical and business leaders.
“Looking back, we would involve security and legal teams earlier in the process and avoid overengineering initial implementations before real usage patterns are well understood,” Bezhashvyly adds.
Others caution that the rollout of AI agents is as much a human challenge as a technical one, so having people with the right skills, judgment, and authority in place becomes critical.
“If we were to do it again, we’d invest earlier in change management and training, particularly around how managers supervise AI-driven work,” Pippadipally says. “Treating agents as part of the workforce and not just another application requires a mindset shift that’s just as important as the technology itself.”