More and more companies are adopting a multicloud strategy. The goal? To maximize the flexibility and performance of their IT landscapes. They combine cloud services from different providers in a targeted way to exploit the advantages of different platforms. But without the right planning, complexity increases and can quickly lead to chaos. How do you strike the right balance between flexibility and efficiency?
For years, outsourcing IT processes to the cloud has been considered an essential step in the digitalization of companies. In addition to flexible and quickly available computing and storage infrastructure, the cloud promises a wide range of services that make it easy to set up and operate digital business processes.
However, competition among providers has flooded the market with an immense range of cloud services with widely varying characteristics and conditions. At the same time, user companies are often insufficiently aware of the requirements of the digital service they intend to run on the cloud infrastructure. As a rule of thumb: the more precisely and completely these requirements are defined, the better the potential of a multicloud strategy can be exploited.
IT conglomerates of many data sources and services
This has a huge impact on players in highly complex environments, such as the development of systems for autonomous driving or the energy networks of the future. Their business model stands and falls with the interaction of many data sources and services that are located in different clouds. But even the IT environments of companies in less complex industries often now resemble a conglomeration of local data centers, virtual machines, mobile devices and cloud services.
To manage their IT processes, many companies now work with a hybrid cloud concept that combines public and private clouds as well as traditional on-premises systems. Multicloud is the next logical step: partly because the situation simply demands it, and partly because using cloud services from multiple providers simultaneously makes it possible to combine the advantages of the different offerings. In this way, user companies work with the solution best suited to their specific requirements.
Nine out of ten companies are pursuing multicloud
An approach that is catching on: nine out of ten companies surveyed worldwide already pursue a multicloud strategy, according to the Flexera 2024 State of the Cloud Report. They combine public cloud services from AWS, Microsoft Azure or Google Cloud, for example. Often, private cloud offerings from external service providers are also integrated or services are used that continue to run in the company’s own data center.
For example, for computationally heavy functions that are invoked frequently in succession, AWS Lambda proves more efficient and economical than Azure Functions. For workloads built around Office 365 and other Windows-based applications, however, Microsoft Azure is the better choice: workloads running on virtual Windows machines are processed faster there.
In such a product development scenario, for example, a Windows application in Azure triggers a Lambda service in AWS that performs the desired calculations. The results are stored in a database and accessed by the Azure application as needed. The database itself also runs in Azure so as not to slow down the application unnecessarily with long response times. The geographic region in which the services run also plays a role, as it can affect the performance, access times and costs of the respective application.
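One way to sketch this hand-off, assuming the Lambda is exposed via an AWS Lambda Function URL (the URL and field names below are hypothetical), is a plain HTTPS call from the Azure-hosted application:

```python
import json
import urllib.request

# Hypothetical endpoint: a calculation Lambda exposed via a Function URL.
LAMBDA_URL = "https://example.lambda-url.eu-central-1.on.aws/"

def build_payload(part_id: str, parameters: dict) -> bytes:
    """Serialize the calculation request the Azure-side application sends."""
    return json.dumps({"partId": part_id, "parameters": parameters}).encode("utf-8")

def trigger_calculation(part_id: str, parameters: dict) -> dict:
    """POST the request across clouds to the Lambda and return its JSON result."""
    req = urllib.request.Request(
        LAMBDA_URL,
        data=build_payload(part_id, parameters),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

In a production setup the call would additionally carry authentication (for example AWS IAM signing), but the pattern stays the same: the cross-cloud boundary is just an HTTPS request.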
Growing complexity in the multicloud
It’s obvious that a multicloud strategy — regardless of what it actually looks like — will further increase complexity. This is simply because each cloud platform works with its own management tools, security protocols and performance metrics. Anyone who wants to integrate multicloud into their IT landscape needs a robust management system that can handle the specific requirements of the different environments while ensuring an overview and control across all platforms.
This is necessary not only for reasons of handling and performance but also to be as free as possible when choosing the optimal provider for the respective application scenario. This requires cross-platform technologies and tools. The large hyperscalers do provide interfaces for data exchange with other platforms as standard. However, to plan and control processes end-to-end in multicloud infrastructures, a unified interface for all applications is needed that is based on a cloud abstraction layer. This allows a company to move its workloads relatively easily, regardless of the specific requirements of individual cloud environments.
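A minimal sketch of such an abstraction layer, using an in-memory stand-in backend (the interface and names are illustrative, not a specific product), shows the idea: application code depends only on a small cloud-agnostic interface, so the backend behind it can be swapped per provider.

```python
from typing import Protocol

class ObjectStore(Protocol):
    """Minimal cloud-agnostic storage interface used by the application."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Stand-in backend for illustration; real backends would wrap the
    AWS SDK (S3) or the Azure Blob SDK behind the same two methods."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # Application code sees only ObjectStore, never a provider SDK,
    # so the workload can move between clouds without code changes.
    store.put(f"reports/{report_id}", body)
```

Swapping `InMemoryStore` for an S3- or Blob-backed implementation requires no change to the calling code, which is exactly the portability the abstraction layer is meant to deliver.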
Benchmark autonomous driving
Anyone who wants to combine data from a wide variety of sources in IoT processes and analyze it quickly, as in the case of autonomous driving, for example, will not get very far without such flexibility, as the following example illustrates.
A supplier is developing an application to control the braking behavior of autonomous vehicles. To develop, test and validate the algorithms for this, terabytes and petabytes of data from a wide variety of sources are used. These include databases with images and videos of traffic lights in different countries, at different times of day and in different weather conditions, combined with data on the nature of the road surface and the tires of virtual test vehicles.
All this information has to be brought together from different clouds. The system must also be able to integrate further data sources as and when required, or to continue to use existing databases even if they are moved.
All this data must be analyzed extremely quickly in order to optimally adjust the braking force. Short latency times are therefore critical to success. An on-premises infrastructure is recommended for this. However, to accommodate the ever-increasing amounts of data, the project team is integrating AWS S3 and Azure Blob Storage.
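A tiering rule like the one in this scenario can be sketched in a few lines (tier names, the 30-day threshold and the round-robin spread are hypothetical placeholders, not the supplier's actual policy): latency-critical recent data stays on premises, while bulk data ages out to cloud object storage.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tiers: on-premises for low latency, S3 and Blob for bulk data.
HOT_TIER = "on-prem"
COLD_TIERS = ["aws-s3", "azure-blob"]

def choose_tier(recorded_at: datetime, now: datetime, hot_days: int = 30) -> str:
    """Keep recent sensor data on premises for short latency; move older
    bulk data to cloud object storage (spread round-robin purely as an
    illustration)."""
    if now - recorded_at <= timedelta(days=hot_days):
        return HOT_TIER
    return COLD_TIERS[recorded_at.toordinal() % len(COLD_TIERS)]
```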
Tools support multicloud strategy
Kubernetes
At the heart of every multicloud strategy today is Kubernetes, the de facto standard for container orchestration. It allows applications to be scaled, managed and deployed automatically — regardless of the cloud platform on which they are running. The open-source system is available in practically every public cloud, and most local cloud providers also offer Kubernetes.
There are several advantages to using Kubernetes: it ensures a high degree of flexibility when it comes to selecting the right cloud for the respective application. And it increases the availability and reliability of services. For example, Kubernetes can automatically redirect workloads to other providers if a provider fails or the connection is poor — or to make optimal use of flat-rate data volumes.
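A minimal Deployment manifest illustrates why Kubernetes is so portable: the same declarative description (the names and container image below are placeholders) can be applied unchanged to a managed cluster on AWS, Azure, Google Cloud or on premises.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: braking-analytics          # placeholder name
spec:
  replicas: 3                      # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: braking-analytics
  template:
    metadata:
      labels:
        app: braking-analytics
    spec:
      containers:
        - name: analytics
          image: registry.example.com/analytics:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```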
Terraform
Terraform, an open-source tool for Infrastructure as Code (IaC), is recommended for building an infrastructure for application environments. It allows you to define and manage resources such as virtual machines, networks and databases using declarative configuration files. Instead of manually creating and managing infrastructure resources, the IT or cloud architects merely describe the desired end state of their infrastructure and save it as configuration files. The configuration language HashiCorp Configuration Language (HCL) is used for the description.
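As a hedged illustration of what such a declarative description looks like, the following HCL fragment (the bucket name and region are placeholders) describes a versioned S3 bucket as the desired end state; Terraform works out the API calls needed to create it.

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

# Desired end state: one object-storage bucket with versioning enabled.
resource "aws_s3_bucket" "test_data" {
  bucket = "example-test-data-bucket"   # placeholder name
}

resource "aws_s3_bucket_versioning" "test_data" {
  bucket = aws_s3_bucket.test_data.id
  versioning_configuration {
    status = "Enabled"
  }
}
```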
Terraform then independently produces the desired state by creating, modifying or deleting the necessary resources. A single short command is enough to recreate or duplicate an environment once it has been defined. This is useful, for example, when setting up the staging environments needed at different stages of the software development process, or when developing cloud applications in highly regulated industries such as banking and insurance, aerospace, utilities and automotive.
At the same time, the declarative configuration files created with Terraform serve as a complete documentation of the infrastructure. What’s more, Terraform also monitors the status of the infrastructure and automatically detects and corrects deviations between the target and actual states.
Ansible
Ansible is another useful tool for efficient multicloud management. This open-source tool supports advanced infrastructure configuration and automation. It combines software distribution, ad-hoc command execution, and software configuration management.
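A small playbook sketch shows the idea (the inventory group and package name are placeholders): the same declarative tasks can be run against virtual machines in any cloud.

```yaml
# playbook.yml: configure a monitoring agent on VMs across clouds
- name: Configure monitoring on all cloud VMs
  hosts: cloud_vms              # placeholder inventory group
  become: true
  tasks:
    - name: Install the monitoring agent
      ansible.builtin.apt:
        name: prometheus-node-exporter
        state: present
    - name: Ensure the agent is running
      ansible.builtin.service:
        name: prometheus-node-exporter
        state: started
        enabled: true
```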
Multicloud requires security
In general, anyone pursuing a multicloud strategy should take steps in advance to ensure that complexity does not lead to chaos but to more efficient IT processes. Security is one of the main issues. And it is twofold: on the one hand, the networked services must be protected in themselves and within their respective platforms. On the other hand, the entire construct with its various architectures and systems must be secure. It is well known that the interfaces are potential gateways for unwelcome “guests”.
To ensure security and performance, companies need a dedicated API concept and management for their multicloud, as well as a holistic view to identify weak points.
In addition to developing the specific skills needed for this, setting up a platform engineering team is recommended. Its task is to plan the architectures, select the technologies and choose the platform services that best suit the company’s applications. As a rule, it is also advisable to bring in external multicloud experts, at least at the beginning.
AWS vs. Azure: Which platform is best for which application?
The following example of Microsoft Azure and Amazon Web Services (AWS) shows which cloud platform is best suited for which application. Both providers have almost identical pricing models and both offer services and functions as a service (FaaS) for practically every application. Nevertheless, some criteria can be used to make a qualified assessment and decision as to which platform is best suited for which application.
This involves questions such as: Do requests tend to arrive far apart in time? Are there many or only a few requests to process? Do they arrive concentrated within a short interval? Do simultaneous requests need to be handled?
To compare both platforms in a well-founded manner, the use of a benchmark platform such as SeBS (Serverless Benchmarking Suite) is recommended. It offers the load scenarios cold & warm and sequential & burst — here is what they involve:
- Cold & warm: When the user accesses the FaaS service, the cloud provider selects an execution environment for the function, typically a container or a virtual machine. If no such environment exists yet, the provider creates a new one (cold). If one is already available to process the request, it is reused (warm).
- Sequential & burst: Sequential sends one request after another. Burst always sends several requests simultaneously.
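The two load scenarios can be mimicked with a small, self-contained harness (`fake_function` below is a local stand-in; a real benchmark such as SeBS would invoke the deployed cloud function at that point):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_function(x: int) -> int:
    """Stand-in for a FaaS invocation; sleeps briefly to simulate latency."""
    time.sleep(0.01)
    return x * x

def run_sequential(n: int) -> list[int]:
    """Sequential scenario: one request after another."""
    return [fake_function(i) for i in range(n)]

def run_burst(n: int) -> list[int]:
    """Burst scenario: all n requests fired at the same time."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(fake_function, range(n)))
```

Timing both runs with the same workload makes the difference between the scenarios visible even before a cloud deployment is involved.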
From the benchmark results: the average response time on Azure depends on whether many requests arrive simultaneously and whether the function is warm. On AWS, the average response time decreases as the allocated RAM increases, although it starts significantly higher. The takeaway: as long as the Lambda can be kept warm, AWS wins; otherwise Azure does. Depending on the scale, a Lambda cools down again after about ten minutes.
For the application, this means if requests are to be processed at intervals of more than ten minutes, Azure has the advantage. If the requests arrive at intervals of less than ten minutes and the resource size can be estimated, AWS should be the choice.
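This decision rule can be captured in a small helper (the ten-minute threshold comes from the benchmark above; treat it as an approximation that varies with scale):

```python
WARM_WINDOW_MINUTES = 10  # approximate time before a Lambda cools down

def preferred_platform(request_interval_minutes: float,
                       resource_size_known: bool) -> str:
    """Apply the benchmark's rule of thumb: AWS wins while the Lambda
    stays warm and its resource size can be estimated; otherwise Azure."""
    if request_interval_minutes < WARM_WINDOW_MINUTES and resource_size_known:
        return "AWS"
    return "Azure"
```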
The burst tests show that Azure is ahead for short load peaks and cold starts. However, if high loads are to be expected, the AWS Lambda is more performant. In general, the AWS results show greater consistency. So if you have strict requirements in terms of consistency of response times, you should go with AWS.
The detailed requirements and results of the benchmark test can be found in a white paper that doubleSlash offers for download: Amazon Web Services vs. Microsoft Azure.
Konrad Krafft is founder and CEO of the consulting and software company doubleSlash Net-Business GmbH. He studied general computer science with a focus on artificial intelligence and has been involved in the development of digital services for over 20 years, particularly in the area of business processes and software products. As an expert, he deals with the industrialization of software development and new digital business models.