As cloud adoption has continued to rise in recent years, driven by businesses’ need for agility, cost savings, innovation, and digital transformation, organizations face new challenges and opportunities that affect how they operate.
Presenting at CIO’s recent Future of Cloud event, Dave McCarthy, research vice president, cloud infrastructure services at IDC, shared IDC’s worldwide cloud predictions for 2022, focusing on four predictions that he believes will be significant for companies in the next one to three years.
What follows are edited excerpts of that presentation.
On application modernization:
By 2024, the majority of legacy applications will receive some modernization investment, with cloud services used by 65% of the applications to extend functionality or replace inefficient code.
So what does this mean? To me, it means that applications, as they go through a modernization process, will take different forms. Like a lot of things in life, we want this to be absolute: everything is going to be modernized. But in reality, when you look at how companies think about it, there is always a spectrum: some applications are ripe for full modernization, while others might take a smaller step along the way….
[T]here are some [legacy applications] that may never go through a full modernization process. That doesn’t mean, however, that they can’t take advantage of newer technologies. What you will see is companies wrapping things like machine learning and AI services around legacy applications, so that the application you already have can leverage its data to become more intelligent and make faster decisions without disrupting the code base. In other cases, you might see somebody bring in a new user interface or a mobile app design to augment existing functionality without, again, retooling all of the back end.
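As a concrete illustration of that wrapping pattern, here is a minimal Python sketch: it reads records a legacy application already exports and attaches predictions from a managed scoring service, without touching the legacy code base. The endpoint URL and field names are hypothetical placeholders, not a specific vendor’s API.

```python
# Hypothetical sketch: enriching a legacy app's exported records with a managed
# ML scoring service, without modifying the legacy code base. The endpoint URL
# and payload fields are illustrative placeholders.
import csv
import json
import urllib.request

SCORING_ENDPOINT = "https://ml.example.com/v1/score"  # placeholder managed AI service

def score_legacy_records(export_path: str) -> list[dict]:
    """Read records the legacy app already exports and attach ML predictions."""
    scored = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            payload = json.dumps({"features": row}).encode("utf-8")
            req = urllib.request.Request(
                SCORING_ENDPOINT,
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                row["prediction"] = json.load(resp)["score"]
            scored.append(row)
    return scored
```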
Of course, people who are going down the modernization path are looking at the tooling that is available: things like container-based code and more API-driven automation, because of the benefits they provide, such as being able to react more quickly, make more granular updates to applications, or, quite honestly, just develop new features faster.
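To make the API-driven, granular-update idea concrete, here is a hedged sketch assuming a Kubernetes cluster and the official Python client: it patches a single deployment’s container image so only that service rolls over, rather than redeploying the whole application. The deployment, namespace, and image names are illustrative.

```python
# Minimal sketch of API-driven, granular updates: roll out a new container image
# for one service instead of redeploying an entire application. Assumes a
# Kubernetes cluster and the official Python client; names are illustrative.
from kubernetes import client, config

def update_service_image(deployment: str, namespace: str, image: str) -> None:
    """Patch one deployment's container image; Kubernetes performs a rolling update."""
    config.load_kube_config()  # uses local kubeconfig credentials
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    # Container name assumed here to match the deployment name.
                    "containers": [{"name": deployment, "image": image}]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

# Example (hypothetical names):
# update_service_image("orders-api", "production", "registry.example.com/orders-api:1.4.2")
```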
And so as companies look at building their agility, we are going to continue to see an increased amount of app modernization across all parts of the business.
On dedicated cloud services:
By 2025, in response to performance, security, and compliance requirements, 60% of organizations will have dedicated cloud services, either on premises or in a service provider facility.
Now, the dedicated cloud concept is tied very much to the hybrid perspective, but even more so to edge computing. Certainly, there were people who thought that anything and everything was on its way to the public cloud. But if you look at how cloud vendors approach it now, they have taken a different path. I think they have realized that there are certain workloads, or certain business requirements, where the cloud just isn’t as effective or has particular limitations.
For example, a lot of what you hear around edge computing is the need to reduce latency. That round trip from where your data originates to the cloud, where a decision is made, and back can be prohibitive, especially in real-time situations. Think about a manufacturing environment … those milliseconds matter; they could be the difference between preventing a safety incident or a product defect and missing it.
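A rough illustration of that latency budget, with hypothetical numbers and functions: the time-critical decision runs locally on the edge device, while the cloud round trip, which typically takes far longer than a few milliseconds, is kept off the critical path.

```python
# Hedged illustration of a real-time latency budget at the edge. The threshold,
# budget, and decision logic are hypothetical placeholders.
import time

LATENCY_BUDGET_MS = 10  # illustrative real-time budget

def local_decision(sensor_value: float) -> str:
    """Simple threshold check that runs on the factory-floor device."""
    return "halt_line" if sensor_value > 95.0 else "continue"

def handle_reading(sensor_value: float) -> str:
    start = time.perf_counter()
    action = local_decision(sensor_value)  # runs locally, well within budget
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < LATENCY_BUDGET_MS, "decision missed its real-time budget"
    # A cloud round trip would be queued separately for analytics and reporting,
    # rather than sitting on this time-critical path.
    return action
```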
The other case you see a lot is more control over where data resides. In Europe, we have all heard about GDPR; there are similar regulations in the United States. And the reality is that more and more of these will appear, so sovereignty over where data lives is important.
And then, even more so, you are starting to see this show up in the context of business continuity. What happens if the public cloud, or the network between you and the public cloud, is suddenly not available? You need some way to keep that application running. If you are a retail business, for example, and you have an outage in the back end of the system, you still need to process transactions. You still need to understand your inventory.
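One hedged sketch of that continuity pattern, with placeholder URLs and schema: a point-of-sale process tries the cloud back end first, queues transactions locally when it is unreachable, and replays them once connectivity returns.

```python
# Hypothetical continuity sketch: keep accepting retail transactions when the
# cloud back end is down by queueing them locally. URL and schema are placeholders.
import json
import sqlite3
import urllib.error
import urllib.request

BACKEND_URL = "https://orders.example.com/api/transactions"  # placeholder
LOCAL_DB = "pending_transactions.db"

def _post(payload: str) -> None:
    req = urllib.request.Request(
        BACKEND_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=2)

def record_transaction(txn: dict) -> None:
    """Try the cloud back end first; fall back to a local queue on failure."""
    payload = json.dumps(txn)
    try:
        _post(payload)
    except (urllib.error.URLError, TimeoutError):
        con = sqlite3.connect(LOCAL_DB)
        con.execute("CREATE TABLE IF NOT EXISTS pending (payload TEXT)")
        con.execute("INSERT INTO pending (payload) VALUES (?)", (payload,))
        con.commit()
        con.close()

def sync_pending() -> None:
    """Replay queued transactions once connectivity returns; stop on first failure."""
    con = sqlite3.connect(LOCAL_DB)
    con.execute("CREATE TABLE IF NOT EXISTS pending (payload TEXT)")
    for rowid, payload in con.execute("SELECT rowid, payload FROM pending").fetchall():
        try:
            _post(payload)
        except (urllib.error.URLError, TimeoutError):
            break  # still offline; try again later
        con.execute("DELETE FROM pending WHERE rowid = ?", (rowid,))
    con.commit()
    con.close()
```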
So dedicated cloud solutions are here to address that.
On data in the multicloud:
Seeking distributed data consistency, 75% of organizations will implement tools for multicloud data logistics by 2024, using abstracted policies for data capture, migration, security, and protection.
What this is all about gets to that multicloud story: most companies are finding themselves in that place, whether they intended to or not. And as that complexity grows, they need to reevaluate not just their policies around things like data retention and how they apply security to data, but how to apply those policies consistently across multiple clouds.
When you start off in these types of environments, you might be able to address this in a manual fashion. But over time, scalability concerns and the potential for human error mean that people will continue to invest in automated tools that help ensure that consistency and drive the dataops concept of having it ingrained in processes and procedures.
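As an illustration of what “abstracted policies” might look like in practice, here is a hypothetical Python sketch: one policy definition is applied through per-cloud adapters instead of being hand-configured in each provider’s console. The adapters only print what they would do; a real tool would call each provider’s lifecycle, encryption, and replication APIs.

```python
# Illustrative sketch of abstracted multicloud data policies: one definition,
# applied through per-cloud adapters. Adapter behavior is a hypothetical stand-in
# for provider SDK calls.
from dataclasses import dataclass

@dataclass
class DataPolicy:
    retention_days: int
    encrypt_at_rest: bool
    replicate_to_region: str | None = None

class CloudAdapter:
    """Base adapter; each cloud implements the same policy hook."""
    def apply(self, dataset: str, policy: DataPolicy) -> None:
        raise NotImplementedError

class AwsAdapter(CloudAdapter):
    def apply(self, dataset: str, policy: DataPolicy) -> None:
        # Would translate to S3 lifecycle rules, encryption settings, replication.
        print(f"[aws] {dataset}: retain {policy.retention_days}d, "
              f"encrypt={policy.encrypt_at_rest}, replicate={policy.replicate_to_region}")

class AzureAdapter(CloudAdapter):
    def apply(self, dataset: str, policy: DataPolicy) -> None:
        # Would translate to Blob Storage lifecycle management and encryption scopes.
        print(f"[azure] {dataset}: retain {policy.retention_days}d, "
              f"encrypt={policy.encrypt_at_rest}, replicate={policy.replicate_to_region}")

def enforce_everywhere(dataset: str, policy: DataPolicy, adapters: list[CloudAdapter]) -> None:
    """Apply one policy consistently across every cloud the data touches."""
    for adapter in adapters:
        adapter.apply(dataset, policy)

# Example (hypothetical dataset name):
# enforce_everywhere("customer-orders", DataPolicy(365, True, "eu-west-1"),
#                    [AwsAdapter(), AzureAdapter()])
```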
On cloud economics:
By 2023, 80% of organizations using cloud services will establish a dedicated finops function to automate policy-driven observability and optimization of cloud resources to maximize value.
So, this is one of the unexpected side effects of mass cloud adoption. The ease of spinning up resources reduced a lot of the friction on the upfront side of things, but it introduced a new problem. It introduced the unexpected bill….
And one of the problems has been that there has not always been, in many companies, a single person responsible for understanding all of that. That is because many factors go into cloud costs. Some of it is architectural; there is a difference between moving monolithic workloads to the cloud and taking advantage of app modernization techniques to get down to container-based or serverless functions. [Another factor] is operations. How closely are you monitoring and right-sizing the instances you need against the workloads you have? How quickly are you realigning them when needed? And are you automating the spin-up and spin-down of resources when they are underutilized? That whole operational efficiency is typically owned by an ops team.
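As a minimal sketch of that operational lever, assuming AWS with boto3 and suitable IAM permissions: find running instances whose CPU utilization has stayed low over a lookback window and stop them automatically. The 5% threshold and 24-hour window are illustrative, not recommendations.

```python
# Minimal rightsizing/spin-down sketch, assuming AWS with boto3 and IAM access.
# The CPU threshold and lookback window are illustrative placeholders.
from datetime import datetime, timedelta, timezone

import boto3

CPU_THRESHOLD = 5.0   # percent, illustrative
LOOKBACK_HOURS = 24   # illustrative

def stop_underutilized_instances() -> None:
    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=LOOKBACK_HOURS)

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            datapoints = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=3600,
                Statistics=["Average"],
            )["Datapoints"]
            # Stop the instance only if every hourly average stayed under the threshold.
            if datapoints and max(d["Average"] for d in datapoints) < CPU_THRESHOLD:
                ec2.stop_instances(InstanceIds=[instance_id])
```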
[A third factor is] the commercial terms that go around a lot of the cloud costs. Are you taking advantage of reserved instances or spot instances? Or things like volume discounts on contracts…?
And so the challenge is not only that there are these potentially uncontrolled costs, but that there hasn’t necessarily been a place to go. This idea of finops is really about assigning that accountability, whether to a single person within the organization or to a group of people. Because ultimately, if you have observability and you are looking at this environment, you can go back to those three areas and figure out which levers to pull: what can we do to make sure we are being efficient with our cloud resource spend, and how do we think about it as our solutions scale?
This article originally appeared in CIO’s Center Stage newsletter.