Cloud strategies are undergoing a sea change of late, with CIOs becoming more intentional about making the most of multiple clouds.
But managing multicloud environments presents unique challenges, especially when it comes to the interoperability and workload-fluidity issues at the center of more deliberate — rather than happenstance — multicloud strategies.
“A lot of ‘multicloud’ strategies were not actually multicloud. They were mostly in one cloud with a few workloads in a different cloud. Or they were multicloud by accident, in which they acquired a company using a separate cloud or someone went rogue or had a preference due to skill set or pricing,” says Forrester analyst Tracy Woo.
“Today’s strategies are increasingly multicloud by intention,” she adds. “This makes a much heavier lift, though, for CIOs and their teams.”
CIO Tom Peck says wholesale food distributor Sysco is “absolutely a multicloud enterprise” and sees the advantages and disadvantages of multicloud clearly.
“On the good, you get the benefits that may be unique to each provider and can price shop to some degree,” he says. “But on the bad side, the ability to dynamically move compute from cloud-to-cloud and/or throttle up/down compute is overhyped.”
Interoperability and connectivity are key issues for the more than 80% of enterprises that have adopted a multicloud model, says Sid Nag, vice president of cloud services and technologies at Gartner.
“The reality is, stitching it together, instrumenting all that, is very hard, which is why you’ll often see multicloud adoption projects fail,” says Nag, who maintains that the current batch of connectivity technologies from cloud providers does not work well. “They never really talk to each other seamlessly to make multicloud work.”
These and other issues that come with operating in multiple — and likely hybrid — cloud environments challenge CIOs’ ability to devise cost-effective strategies for leveraging each platform’s unique benefits while ensuring resiliency and long-term portability for their organizations — just as AI emerges as a compounding and complicating factor.
A market in need of more interoperability
Systems integrators and cloud services teams have stepped in to remedy some of multicloud’s interoperability hurdles, but the optimal solution is for public cloud providers to build APIs directly into the cloud stack layer, Gartner’s Nag says. A cross-cloud integration framework built of APIs could connect public clouds seamlessly in a many-to-many fashion, the research firm maintains.
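To make that many-to-many idea concrete, here is a minimal Python sketch, with all names hypothetical rather than drawn from any shipping framework: each cloud registers one adapter against a shared interface, so a handful of adapters covers every directed pair instead of requiring a bespoke bridge per pair.

```python
# Hypothetical sketch of a many-to-many cross-cloud integration framework:
# each provider supplies one adapter; any registered pair can then interoperate.
from typing import Protocol


class CloudAdapter(Protocol):
    """Common surface every provider-specific adapter exposes."""
    def export_object(self, path: str) -> bytes: ...
    def import_object(self, path: str, data: bytes) -> None: ...


ADAPTERS: dict[str, CloudAdapter] = {}


def register(cloud: str, adapter: CloudAdapter) -> None:
    """Each public cloud contributes exactly one adapter."""
    ADAPTERS[cloud] = adapter


def transfer(src: str, dst: str, path: str) -> None:
    """Move one object between any two registered clouds: N adapters
    cover all N*(N-1) directed pairs, rather than one bridge per pair."""
    data = ADAPTERS[src].export_object(path)
    ADAPTERS[dst].import_object(path, data)
```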
Oracle is providing a different template. The company’s recently announced plans to provide deep, seamless connectivity from Oracle Cloud Infrastructure to AWS, after similar announcements for Microsoft Azure and Google Cloud, have raised eyebrows.
[ Related: CIOs rethink all-in cloud strategies ]
As part of the deal, Oracle would make its Oracle Autonomous Database available on dedicated infrastructure on AWS as Oracle Database@AWS, which will enable Oracle customers to take advantage of zero-ETL integration between Oracle Database services and AWS services, according to the company.
In addition, with Google and Microsoft, Oracle has interconnect agreements in place so that users are not charged for moving data out of Oracle Cloud and into Google and Microsoft, says Adam Reeves, IDC research director on PaaS for developers of modern and edge applications.
“It was one of those ‘hell freezes over’ moments, like you never saw it coming,” adds Rob Tiffany, IDC research director focused on private and hybrid cloud computing. “If we can put Oracle hardware and a subset of Oracle Cloud deep inside each of the hyperscalers, [customers] will get the insane performance that they require for Oracle running SAP or whatever. It is a deeper level of integration.”
The hybrid cloud factor
A modicum of interoperability between public clouds may be achieved through network interconnects, APIs, or data integration between them, but “you probably won’t find too much of that unless it’s the identical application running in both clouds,” IDC’s Tiffany says.
[ Related: Private cloud makes its comeback, thanks to AI ]
The other means of interoperability is a hub-and-spoke integration between a customer’s on-premises private cloud and one or more public clouds to bring hybrid cloud computing to life, he says. Tiffany further explains that multicloud is generally just a more complicated form of hybrid cloud. He notes that the private, dedicated network capabilities supported by each public cloud, including AWS Direct Connect, Azure ExpressRoute, Google Dedicated Interconnect, and OCI FastConnect, help facilitate the necessary integrations. Data center players that are “cloud adjacent” and work with those connectors include Equinix and Digital Realty, Tiffany adds.
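To illustrate the AWS leg of such a hub, here is a hedged boto3 sketch of ordering a dedicated Direct Connect port; the facility code and bandwidth below are placeholders, and in practice they would come from the describe_locations call and your colocation provider.

```python
# Sketch: request a dedicated AWS Direct Connect port for a hub-and-spoke
# hybrid design. The location code and bandwidth are placeholder values.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# List colocation facilities (e.g., Equinix sites) where a port can be ordered.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], loc["locationName"])

# Order a 1 Gbps dedicated port at a chosen facility (placeholder code).
conn = dx.create_connection(
    location="EqDC2",  # placeholder facility code
    bandwidth="1Gbps",
    connectionName="onprem-to-aws-hub",
)
print(conn["connectionId"], conn["connectionState"])
```

The equivalent private links on the other clouds (ExpressRoute, Dedicated Interconnect, FastConnect) follow the same order-a-port-then-peer pattern through their own APIs.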
HPE and Dell top the roster of private cloud vendors tapping into enterprise customers’ demand for interoperability as well, including for gen AI workloads on the cloud.
HPE, for instance, announced a private cloud solution with Nvidia called HPE Private Cloud AI that gives CIOs a turnkey solution for quickly deploying a private cloud with interconnects to the public cloud.
Networking vendors and AI startups are also taking aim at interoperability issues associated with multicloud.
Juniper, for example, is developing AI-powered software for orchestrating application connections across public clouds, colocation sites, and on-premises data centers, the company claims. The project, Cloud Interlink, is being incubated in Juniper Beyond Labs.
“We have witnessed the emergence of highly distributed applications making the underlying network even more critical for providing seamless end-to-end user experiences,” says Raj Yavatkar, CTO of Juniper Networks.
AI startups are getting in the interoperability game as well.
[ Related: GenAI sticker shock sends CIOs in search of solutions ]
Stardog, an AI startup that counts Morgan Stanley, NASA, and Schneider Electric among its customers, recently announced a private GPU cloud facility powered by Nvidia in Ashburn, Va. The company is taking a data fabric approach to enabling enterprises to interconnect data across a wide range of SaaS, cloud, and on-prem data sources.
Multicloud is becoming a reality because big enterprises do not want to be locked into a single cloud or face huge fees to move workloads, says Stardog CEO Kendall Clark. He acknowledges that the additional complexity, especially for AI, is real and expensive, but maintains that demand will drive innovation in interoperability.
CIOs on multicloud’s complexities
Like many enterprises, Ally Financial has embraced a primary public cloud provider, adding in other public clouds for smaller, more specialized workloads. It also runs private clouds from HPE and Dell for sensitive applications, such as generative AI and data workloads requiring the highest security levels.
“The private cloud option provides us with full control over our infrastructure, allowing us to balance risks, costs, and execution flexibility for specific types of workloads,” says Sathish Muthukrishnan, Ally’s chief information, data, and digital officer. “On the other hand, the public cloud offers rapid access to evolving technologies and the ability to scale quickly, while minimizing our support efforts.”
Yet, he acknowledges a multicloud strategy comes with challenges and complexities — such as moving gen AI workloads between public clouds or exchanging data from a private cloud to a public cloud — that require considerable investments and planning.
[ Related: CIOs weigh the new economics and risks of cloud lock-in ]
“Aiming to make workloads portable between cloud service providers significantly limits the ability to leverage cloud-native features, which are perhaps the greatest advantage of public clouds,” Muthukrishnan says. Moreover, he adds, “more clouds mean more complexity, and spreading work between cloud service providers makes it difficult to build deep expertise, and in some cases, requires multiple specialized skillsets.”
That versatility of skills remains lacking today, according to Drew Firment, chief cloud strategist at Pluralsight, who notes that in 2023 fewer than 10% of IT pros reported having extensive experience with more than one cloud provider.
“Some organizations are not at a level of cloud maturity and employee dexterity to successfully extract value from multicloud. Adding another cloud provider to the mix without the right talent, processes, and cloud infrastructure only makes the benefits of multicloud less attainable,” he says, stressing the importance of upskilling internal talent.
Ally’s Muthukrishnan agrees that maintaining public and private cloud environments requires a broad range of skills that are increasingly difficult to find.
“However, as private cloud capabilities mature, many skills extend across both environments, helping to mitigate some of these challenges,” he says. “Despite these hurdles, we believe that the benefits of a multicloud strategy far outweigh the complexities.”
Multicloud is also part of American Honda Motor Co.’s IT strategy, but in a more opportunistic way. Bobby Rogers, cloud transformation lead, says the automaker leverages public hyperscalers whenever possible.
“But we do not design our systems to run across multiple cloud platforms. We have not found a business case for doing so, and we think that it would add unnecessary complexity and risk,” he says. “We prefer to use best-of-breed SaaS solutions where possible and run our applications on the most suitable cloud platform.”
Honda is also evaluating on-prem/cloud-managed solutions for use cases that have network latency requirements. “These solutions, such as AWS Outposts, Azure Stack, or Google Anthos, allow us to bring the cloud to our data center and enjoy the benefits of both worlds,” Rogers says.
The multicloud calculus
Mojgan Lefebvre, EVP and chief technology and operations officer at Travelers, says a multicloud architecture offers enterprises not only the freedom to use best-of-breed cloud services but also the ability to negotiate better financial terms with each cloud provider.
“Different cloud providers offer various pricing models,” she says. “A multicloud strategy allows organizations to optimize costs by selecting the most cost-effective services for their needs.”
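As a toy illustration of that price shopping, the sketch below picks the cheapest provider for each service dimension from a price table; the provider names and figures are placeholders, not real list prices.

```python
# Toy cost comparison: choose the lowest-cost provider per service dimension.
# All prices below are hypothetical placeholders, not real list prices.
PRICES = {
    "cloud_a": {"storage_gb": 0.023, "egress_gb": 0.09, "vcpu_hour": 0.048},
    "cloud_b": {"storage_gb": 0.020, "egress_gb": 0.12, "vcpu_hour": 0.044},
    "cloud_c": {"storage_gb": 0.026, "egress_gb": 0.08, "vcpu_hour": 0.050},
}


def cheapest(service: str) -> tuple[str, float]:
    """Return the lowest-cost provider for one service dimension."""
    provider = min(PRICES, key=lambda p: PRICES[p][service])
    return provider, PRICES[provider][service]


for svc in ("storage_gb", "egress_gb", "vcpu_hour"):
    name, price = cheapest(svc)
    print(f"{svc}: {name} at ${price:.3f}/unit")
```

Note that egress fees cut the other way: the cheapest place to store data may be the most expensive place to move it out of, which is exactly the lock-in dynamic CIOs cite.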
[ Related: CIOs sharpen cloud cost strategies — just as gen AI spikes loom ]
Lefebvre says Travelers’ approach to multicloud is intentional, with best fits for each workload being decided on a case-by-case basis — including keeping specific workloads in-house. She also notes that not relying on a single cloud provider reduces the risk of downtime and data loss, while also fostering better business opportunities.
“Access to a broader range of tools and services, including advanced AI and machine learning capabilities, can drive innovation and improve business outcomes,” she says. “However, managing multiple cloud environments can be complex and requires specialized skills and tools to ensure consistent security and compliance and effective integration of services and data.”
That often means applying vendor-supplied connectors to exchange data from cloud to cloud, deploying interoperability management tools, and, in many cases, hiring pricey systems integrators to stitch it all together and ensure, above all else, that there is no data leakage.
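What such a connector does under the hood can be as simple as the following hedged sketch, which uses each vendor’s own SDK to move one object from AWS S3 into Google Cloud Storage; bucket and object names are placeholders, and both SDKs are assumed to be already authenticated.

```python
# Minimal cloud-to-cloud data exchange: read an object from S3, write it to GCS.
# Bucket/object names are placeholders; credentials come from the environment.
import boto3
from google.cloud import storage

s3 = boto3.client("s3")
gcs = storage.Client()

# Pull the object out of S3 (into memory; fine for a sketch, not for huge files).
data = s3.get_object(Bucket="example-aws-bucket", Key="exports/report.parquet")["Body"].read()

# Push the same bytes into a GCS bucket.
gcs.bucket("example-gcp-bucket").blob("imports/report.parquet").upload_from_string(data)
```

A production-grade exchange adds what this sketch omits: retries, checksums, and the data leakage controls noted above.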
Bob McCowan, CIO of Regeneron Pharmaceuticals, says taking a cloud-native approach can help ease some multicloud challenges.
“For those organizations that embraced ‘native cloud,’ the architecture and design allow for movement of the work between different cloud providers without significant effort,” he says. “In most cases this is part of a business continuity play but it’s good practice to avoid getting overly committed to any provider, as well as leaving the door open for pivoting to cloud providers that may deliver a capability unique to their platform.”
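One common shape for that kind of portable, cloud-native design is sketched below, with all names illustrative rather than taken from any CIO’s actual stack: the application writes to a thin interface, and a deploy-time setting selects the provider-specific backend.

```python
# Illustrative portability pattern: app code depends on an interface,
# and configuration picks the cloud-specific implementation at deploy time.
import os
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None:
        ...


class S3Store(ObjectStore):
    def __init__(self, bucket: str):
        import boto3  # imported lazily so only the active backend's SDK is needed
        self._s3, self._bucket = boto3.client("s3"), bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


class GCSStore(ObjectStore):
    def __init__(self, bucket: str):
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)


def make_store() -> ObjectStore:
    # Switching providers becomes a configuration change, not a rewrite.
    backend = os.environ.get("OBJECT_STORE", "aws")
    stores = {"aws": S3Store, "gcp": GCSStore}
    return stores[backend]("app-bucket")  # bucket name is a placeholder
```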
Given the pace of change in the cloud industry itself, that flexibility can readily pay off, McCowan says.
“Cloud providers are going to be leapfrogging each other, and if the capability, price point, or global reach warrants it, organizations will need the agility to change things up,” he says. “The rapid growth in AI, with very specific use cases, will also require organizations to plan for change or risk getting tied to the wrong technology or cloud provider.”
AI has become a game changer in many ways, and it is causing CIOs to rethink their cloud strategies. There is a lot to be gained from leveraging the latest tools in the public cloud while retaining the ability to defect to another provider as necessary.
Still, Max Chan, CIO of Avnet, says IT leaders ought not fret about building a multicloud architecture unless there is a well-defined need.
“Public cloud interoperability is increasingly important for gen AI deployment, but whether it is critical or more of a ‘nice to have’ depends on the specific use case and enterprise needs,” he says. “For enterprises with complex workflows that require integrating data and services from multiple cloud providers, such interoperability is essential for seamless data flow and service integration. However, for most other organizations that use a single cloud provider, interoperability might be less critical.”
And, Chan notes, the added complexity, as well as the potential costs associated with managing multicloud environments, might outweigh the benefits for many organizations.
Still, for those organizations embracing multicloud, all eyes will be on interoperability advancements. Oracle has taken a big step in that direction, but only time will tell whether enterprise demand forces cloud providers to build further interoperability directly into their clouds or risk losing customers.
In the interim, there are many tools and data integration strategies CIOs can use to make a hybrid, multicloud environment functional, says Nick Golovin, senior vice president of enterprise data platform at CData.
Amazon, for instance, advises customers to use homegrown services such as AWS DataSync, Glue, Athena, and CloudWatch to enable hybrid, multicloud interoperability. In a blog post this summer, AWS claimed Phillips 66 achieved multicloud interoperability by deploying its Managed Service for Prometheus, but acknowledged that AWS Professional Services was hired to make it work.
AWS also pointed to Elastic Container Services and EKS Anywhere, as well as AWS Outposts Family and AWS Snow Family as additional tools to enable interoperability.
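As one concrete example of that toolchain, here is a hedged boto3 sketch of the Athena piece: submit a SQL query over data cataloged in Glue, poll until it finishes, then read the results. The database, table, and bucket names are placeholders.

```python
# Sketch: run an Athena query over Glue-cataloged data and print the rows.
# Database, table, and output bucket names are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT region, count(*) AS n FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "example_glue_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query leaves its in-flight states.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:  # the first row returned is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```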
“CIOs and data decision-makers can create a comprehensive data management strategy for the hybrid cloud environment by considering their environment as a data ecosystem and focusing on aspects such as integration, data quality, governance, master data management, and metadata management,” Golovin says.
“Cloud platform vendors often provide parts of these aspects, so understanding where the gaps are and leveraging third-party specialized tools for critical data management functions can help overcome the limitations of proprietary cloud ecosystems, ensuring seamless connectivity and flexibility,” he adds.