When the AWS US East 1 region went dark in October, it created a ripple effect that reached far beyond cloud workloads. Atlassian tools, home monitoring systems, communication platforms, and even school websites became unavailable within minutes. None of these failures resulted from an attack or reflected a lack of backup architecture. They represented a growing challenge for CIOs: the unseen dependencies that now sit underneath critical business functions.
For many organizations, the event felt like a cyber incident even though it wasn’t. It raised a difficult question for CIOs: how do you prepare for a disruption that lives outside your infrastructure, yet carries the same operational and reputational consequences as a security breach?
That has been top of mind for Yogs Jayaprakasam, the chief information, technology and digital officer at business payments and financial tech services provider Deluxe. The 110-year-old company relies on a mix of cloud platforms, SaaS products, and enterprise systems, and Jayaprakasam says the AWS outage reinforced something he’s been observing for years. Following the event, he traced more than a dozen public cloud outages across the three major hyperscalers in the past 12 months, each lasting six hours or more.
Beyond strong cloud architecture, “preparedness is the real differentiator,” he says. “Even the best technology teams can’t compensate for gaps in scenario planning, coordination, and governance.”
The convergence of operational and cyber failures
Jayaprakasam’s perspective began to shift as he studied recent incidents, from the Meta BGP misconfiguration in 2021 to last year’s widespread CrowdStrike update failure. He explains that although these events weren’t cyber attacks, the customer impact, communication challenges, and recovery complexity mirrored major security breaches. “We treated disaster recovery and cyber response like two different problems,” he says. “But when something like a bad update takes down millions of machines, it behaves exactly like a ransomware event.”
Within Deluxe, disaster recovery tests historically focused on applications the company controlled, while cyber tabletops focused on simulated intrusions. The AWS outage exposed the gap between those exercises and real-world conditions. Shifting its applications from AWS East to AWS West was swift, and the technology team considered the recovery a success. Yet it was far from business as usual, as developers still couldn’t access critical tools like GitHub or Jira. “We thought we’d recovered, but the day-to-day work couldn’t continue because the tools we depend on were down,” he says. That experience reshaped how his team defines resilience.
Seeing the full dependency chain
In response, Deluxe began mapping system dependencies in far greater detail. One of the simplest but most important changes, Jayaprakasam says, was adding a question to every SaaS intake process about where the application actually runs. Knowing whether a tool operates on AWS West, Azure East, or another region allows the team to simulate real failure scenarios and plan coordinated recovery steps with vendors.
“It’s a shift in data collection that makes you better prepared,” he says. “Once you see which SaaS platforms share the same cloud region, you start to think very differently about how the business comes back online during an outage.”
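To make that concrete, here is a minimal sketch of what a region-aware SaaS inventory and a shared-region exposure check could look like. The records, field names, and vendors below are illustrative assumptions, not Deluxe’s actual intake data:

```python
from collections import defaultdict

# Hypothetical SaaS intake records, invented for illustration.
# The key addition is hosting_region: where the vendor's service
# actually runs, captured once at intake time.
saas_inventory = [
    {"tool": "source-control", "vendor": "VendorA", "hosting_region": "aws-us-east-1"},
    {"tool": "issue-tracker",  "vendor": "VendorB", "hosting_region": "aws-us-east-1"},
    {"tool": "payroll",        "vendor": "VendorC", "hosting_region": "azure-eastus"},
    {"tool": "team-chat",      "vendor": "VendorD", "hosting_region": "aws-us-west-2"},
]

def exposure_by_region(inventory):
    """Group tools by the cloud region they run in, so a single
    regional outage can be mapped to every tool it would take down."""
    regions = defaultdict(list)
    for record in inventory:
        regions[record["hosting_region"]].append(record["tool"])
    return regions

for region, tools in exposure_by_region(saas_inventory).items():
    if len(tools) > 1:
        print(f"{region}: {len(tools)} tools share this failure domain: {tools}")
```

Even a small inventory like this makes concentration risk visible: any region that hosts more than one critical tool is a single failure domain for all of them, which is exactly what a tabletop exercise can then rehearse.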
This visibility allows Deluxe to extend tabletop scenarios beyond the boundaries of owned infrastructure. The company now includes operational failures, cloud region outages, and third-party disruptions in its joint cyber and DR exercises, rather than running them in separate tracks. This unified approach, called ResilienceONE, delivers a more realistic understanding of how failures propagate across the ecosystem, strengthening both resilience and preparedness.
Coordination over investment
Spending more money on additional infrastructure or redundant cloud providers isn’t the answer. Following every major outage, Jayaprakasam says, sales pitches arrive promising zero downtime if organizations spend more on new platforms. He pushes back on those claims. In a well-architected hybrid cloud setup, he says, resilience is more often a coordination problem than a spending problem, and distributing workloads across two cloud providers doesn’t guarantee better outcomes if the clouds rely on the same power grid or experience the same regional failure event.
He argues that the more effective approach is strengthening coordination between IT, cybersecurity, business continuity, and third-party vendors. “The real question is whether you have the right contacts, process, and response patterns in place,” he says. That coordination includes creating a single view of dependencies, practicing joint response exercises, and ensuring that vendors can be reached and escalated during an incident. From there, organizations can gradually develop dependency mapping of assets across those boundaries to simulate potential impact-radius scenarios, as in the sketch below.
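As a hedged illustration of that kind of simulation, the following sketch walks a dependency graph to compute the blast radius of a single regional failure. The nodes and edges are invented for this example; a real map would be built from the intake data and vendor disclosures described above:

```python
# Hypothetical dependency graph: each business function lists what it
# depends on, whether another function or a cloud region. Contents are
# illustrative, not any real company's architecture.
dependencies = {
    "payments-api":    ["aws-us-east-1"],
    "developer-tools": ["aws-us-east-1"],
    "billing":         ["payments-api", "azure-eastus"],
    "customer-portal": ["billing", "aws-us-west-2"],
}

def impact_radius(failed_node, graph):
    """Return every function transitively affected when failed_node
    (e.g., a cloud region) goes down, by walking dependents outward."""
    affected, frontier = set(), {failed_node}
    while frontier:
        # Find not-yet-seen nodes that depend on anything in the frontier.
        frontier = {
            node for node, deps in graph.items()
            if frontier & set(deps) and node not in affected
        }
        affected |= frontier
    return affected

# Simulate a regional outage and see how far the failure propagates.
print(sorted(impact_radius("aws-us-east-1", dependencies)))
# -> ['billing', 'customer-portal', 'developer-tools', 'payments-api']
```

The point of an exercise like this isn’t precision; it’s that a region outage two hops away from a business function still takes that function down, which is easy to miss when DR plans stop at the systems a company directly controls.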
In addition, Jayaprakasam believes that many organizations already have strong response processes, but apply them too narrowly. Legal and compliance teams, for example, have well-established playbooks for cyber incidents, and those playbooks can also apply to operational disruptions.
“You don’t need to reinvent the wheel,” he says. “You need to make the process holistic and complementary instead of creating separate paths.” When advising peers, Jayaprakasam recommends starting with the scenarios used in tabletop exercises. He encourages CIOs to ask whether they’re testing for operational disruptions and third-party outages in addition to cyber attacks, and whether those exercises reflect the actual dependencies that keep the business running. He also suggests reviewing where existing playbooks can be unified rather than creating new ones, focusing on coordination across teams and partners instead of new investments.
The leadership challenge: motivating the work no one sees
Jayaprakasam is candid about the cultural challenge that comes with resilience work. He says that in an era dominated by AI initiatives and digital strategy, the toughest part of leadership is motivating teams to focus on the routines that seem unexciting but are critical to a company’s ability to recover during a crisis. “Most of the work that prepares you for these moments is boring,” he says. “But that boring work is what changes the business outcome.”
He believes CIOs must reward that discipline, not just the innovation work that receives more headlines. And he’s clear about the stakes. AI may define the long-term value of a technology organization, but reliability defines its credibility in the present. “Your ability to be seen as a strategic partner can be taken away in a second if the systems aren’t available,” he says. “Striking the balance is what matters.”

