The pace of change in AI is unlike anything I’ve experienced in my career, and I’ve lived through the internet, smartphones and the birth of cloud computing. All of those caused huge changes in how we work, but they all felt measured.
The internet came gradually: bulletin board systems turned into closed communities like CompuServe and AOL, and then dial-up access slowly morphed into the satellite, cellular and fiber services we have today.
The same happened with smartphones and cloud computing — we folded them into our lifestyles over a decade or more, and it always felt comfortable. Innovations came a generation at a time, roughly one a year, and it was easy to keep up and innovate.
AI hits different — and faster
If my new iPhone comes every year like a Christmas gift, AI is more like having a bucket of cold water poured over me every morning. Google just launched Gemini 3, which can search your organization’s entire knowledge base for research, and that has changed the way I work, for good.
Well, actually, more like until next Tuesday, when something else will come along and change it again. And again. And again. AI is a ceaseless barrage of innovation, raining down on you all day. Generations of technology don’t come every year, they come all the time. And if you don’t keep up, you’re going to become irrelevant.
This acceleration is particularly acute for enterprise IT teams. Where once they managed relatively predictable environments with changes happening during planned cycles, today’s operations teams face constantly evolving hybrid infrastructures where traditional and cloud-native systems must coexist and integrate seamlessly. The challenge is not just technical. Teams that spent years mastering legacy systems now find themselves learning new architectures while maintaining critical business operations that can’t afford downtime.
The return of custom applications
Over the last 30 years, we’ve focused on packaged solutions — purchasing software from third parties that is built once and sold many times. It’s been a very efficient way to buy software: so long as there is an 80% fit to requirements, an organization can buy something that is good enough for a fraction of what it would cost to build (and maintain).
AI has changed all that. You can vibe code an AI CRM integration in an hour, connect it into production and move on with your life. This is creating legions of zombie AI applications where no one knows what they do, how they work or who wrote them, for that matter (if AI wrote them, do they have an author?).
The enterprise platform challenge
This shift toward rapid, AI-enabled development is reshaping how organizations approach enterprise software. Consider modern business platforms like SAP’s Business Technology Platform (BTP), where companies can rapidly build applications using low-code/no-code tools and move legacy customizations from their core ERP systems into cloud-native environments.
While this approach offers tremendous benefits — elastic capacity, faster innovation, simplified upgrades — it also creates operational complexity that traditional IT management wasn’t designed to handle. Organizations suddenly find themselves managing three distinct operational domains:
- Cost optimization (ensuring cloud resources are used efficiently)
- Security compliance (monitoring configurations and access across distributed services)
- Performance management (tracking system health and connectivity between applications)
The result? IT teams struggle with fragmented visibility across environments that span decades of technology evolution. Making matters worse, traditional monitoring tools that served enterprises well for years, such as SAP Solution Manager and Focused Run, are reaching end-of-support in 2027, forcing organizations to rethink their entire operational approach during an already complex transition period.
What this means for AIOps in 2026
With all the knowledge we have about AI in 2025, IT operations seems like the last place we’d look to plug in AI. Even Anthropic CEO Dario Amodei recently said, “The more autonomy we give these systems … the more we can worry,” referring to the company’s analysis of how Claude went rogue when trying to run a business.
We are definitely not going to see AI running IT operations, at least not in 2026, but we are going to see some specific use cases where AI shows its real strengths.
1. Work prioritization
Instead of a sea of alerts, we’re going to see worklists prioritized by impact (financial, social or otherwise) so that IT operations professionals can see what needs attention first. This will let teams focus their work, and fallback procedures like escalations will still make room for human prioritization as well.
In hybrid infrastructures, where legacy systems run alongside cloud platforms, a single incident can cascade across multiple technology generations. AI-powered prioritization can analyze these interdependencies to distinguish between alerts that require immediate attention and those that can wait for scheduled maintenance windows.
The challenge intensifies during cloud ERP transitions. Organizations simultaneously manage ECC systems, S/4HANA instances and BTP applications during multi-year migrations. A job failure in on-premises ECC might affect BTP integrations that impact cloud ERP functionality, but without intelligent prioritization, these dependency chains aren’t obvious from individual alerts.
AI-driven prioritization understands business context, recognizing that month-end financial processing failures deserve higher priority than development environment issues, while identifying systemic problems across hybrid landscapes.
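As a sketch of what impact-based prioritization could look like in practice (the alert fields, process weights and scoring formula here are all illustrative assumptions, not any vendor’s actual model):

```python
from dataclasses import dataclass

# Illustrative business-impact weights; a real system would configure
# or learn these rather than hard-code them.
PROCESS_WEIGHT = {
    "month_end_close": 100,   # month-end financial processing
    "order_to_cash": 80,
    "dev_environment": 5,     # development issues can usually wait
}

@dataclass
class Alert:
    source: str                 # e.g. "ECC", "S/4HANA", "BTP"
    business_process: str       # hypothetical mapping maintained by the team
    downstream_systems: int     # dependent systems the incident touches
    in_maintenance_window: bool

def priority_score(alert: Alert) -> int:
    """Rank alerts by business impact rather than arrival order."""
    score = PROCESS_WEIGHT.get(alert.business_process, 10)
    score += 15 * alert.downstream_systems   # cascading incidents rank higher
    if alert.in_maintenance_window:
        score //= 4                          # expected noise during maintenance
    return score

def worklist(alerts: list[Alert]) -> list[Alert]:
    """Return alerts ordered by descending impact."""
    return sorted(alerts, key=priority_score, reverse=True)
```

With this kind of scoring, a month-end ECC failure with three downstream BTP dependencies sorts ahead of a quiet development-environment alert, which is exactly the behavior the prioritization use case calls for.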
2. Intelligent root-cause analysis
When I need to know about something — almost anything — I dump as much information as I can find into the nearest LLM and ask it to provide a point of view. If it’s something important, then I run it through multiple LLMs and compare the output, then put all of it back into another LLM to consolidate perspectives.
AIOps tools can do this themselves and provide that output to the operator before they even start looking at the issue. These tools can also incorporate considerably more contextual information than the operator might know how to gather. They can search vendor documents and return suggested remedies.
The technical advantage is that in enterprise environments, root-cause analysis traditionally required expertise across multiple technology layers. AI can simultaneously correlate issues across application code, system configurations and infrastructure while referencing vendor documentation and known solutions, dramatically accelerating diagnosis time.
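The multi-model consolidation pattern described above can be sketched as a small pipeline. The model callables below are stubs standing in for real LLM API calls (which would need vendor SDKs and credentials); the structure, not the stubs, is the point:

```python
from typing import Callable

# A "model" here is anything that takes a prompt and returns text.
# In practice these would wrap real LLM API clients.
Model = Callable[[str], str]

def root_cause_report(incident_context: str, models: list[Model],
                      consolidator: Model) -> str:
    """Ask several models independently, then have one consolidate."""
    perspectives = [m(f"Suggest a root cause:\n{incident_context}")
                    for m in models]
    combined = "\n---\n".join(perspectives)
    return consolidator(f"Consolidate these analyses into one view:\n{combined}")

# Trivial stub models for illustration only:
model_a = lambda p: "Likely cause: expired RFC credential."
model_b = lambda p: "Likely cause: credential rotation not propagated."
consolidate = lambda p: "Consensus: credential rotation broke the RFC connection."

report = root_cause_report("ECC-to-BTP interface failing with auth errors",
                           [model_a, model_b], consolidate)
```

An AIOps tool running this loop automatically, enriched with configuration data and vendor documentation, is what puts a consolidated point of view in front of the operator before they start digging.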
3. Change automation
While we should definitely not trust AI to prioritize, perform RCA and then push changes autonomously in a single unsupervised loop, the final piece of the puzzle is change automation.
The AIOps tool can create a maintenance plan in collaboration with the operator, establishing an agreed workflow and set of actions to be performed. The automation engine can then apply that workflow during a maintenance window — no AI during maintenance windows, just deterministic automation.
This collaborative approach enables sophisticated tasks like provisioning development environments, automating system refreshes across hybrid architectures and managing complex deployment workflows, all executed with human oversight but without manual intervention during maintenance windows.
The path forward
2025 has really shown us the power of AI, but it has also highlighted its risks. James Cameron was prescient with The Terminator — we need to be mindful not to give AI too much control and power over the IT systems that run our businesses. But we also can’t afford to ignore the measurable benefits it can bring to operations.
This article is published as part of the Foundry Expert Contributor Network.

