Volume gets a lot of the press when it comes to data. Size is right there in the once-ubiquitous term “Big Data.”
This isn’t a new thing. Back when I was an IT industry analyst, I once observed in a research note that marketing copy placed way too much emphasis on the bandwidth numbers associated with big server designs, and not enough on the time that elapses between a request for data and its initial arrival – which is to say the latency.
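To make that distinction concrete, consider a back-of-the-envelope sketch (the link speed, message size, and round-trip time below are illustrative assumptions, not figures from any study). On a fast link, the transfer time for a small message is negligible; it’s the round trip that caps how quickly a remote system can respond.

```python
# Illustrative numbers only: a small sensor message on a fast link,
# with a plausible round trip to a distant cloud region.
payload_bits = 1_000 * 8          # 1 KB sensor message
bandwidth_bps = 1_000_000_000     # 1 Gbps link
round_trip_s = 0.050              # 50 ms to the cloud region and back

transfer_time_s = payload_bits / bandwidth_bps   # ~8 microseconds
total_per_decision_s = transfer_time_s + round_trip_s

print(f"transfer time: {transfer_time_s * 1e6:.1f} µs")
print(f"round trip:    {round_trip_s * 1e3:.0f} ms")
# Latency, not bandwidth, bounds the control loop: ~20 decisions per
# second here, no matter how much extra bandwidth is added.
print(f"max decision rate: {1 / total_per_decision_s:.1f} per second")
```

A faster link shrinks the 8 µs; only moving the computation closer shrinks the 50 ms.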
We’ve seen a similar dynamic with respect to IoT and edge computing. With ever-increasing quantities of data collected by ever-increasing numbers of sensors, surely there’s a need to filter or aggregate that data rather than shipping it all over the network to a centralized data center for analysis.
Indeed there is. Red Hat recently had Frost & Sullivan conduct 40 interviews with line-of-business executives (along with a few in IT roles) from organizations with more than 1,000 employees globally. They represented companies in manufacturing, energy, and utilities split between North America, Germany, China, and India. When asked about their main triggers to implement edge computing, bandwidth issues did come up, as did issues around having too much data at a central data center.
Latency, connectivity top the list
However, our interview subjects placed significantly more emphasis on latency and, more broadly, on their dependence on network connectivity. Common triggers included the need to improve connectivity, increase computing speed, process data faster and on-site, and avoid the latency incurred by sending data to the cloud and back.
For example, a decision-maker in the oil and gas industry told us that moving compute out to the edge “improves your ability to react to any occasional situation because you no longer have to take everything in a centralized manner. You can take the local data, run it through your edge computing framework or models, and make real-time decisions. The other is in terms of the overall security. Now that your data is not leaving, and it is both produced and consumed locally, the risk of somebody intercepting the data while it is traversing on the network pretty much goes away.”
For another data point, a Red Hat and Pulse.qa IT community poll found that 45% of 239 respondents said that lower latency was the biggest advantage of deploying workloads to the edge. (And the number-two result was optimized data performance, which is at least related.) Reduced bandwidth? That was down in the single digits (8%).
Latency also loomed large when we asked our interview subjects what they saw as the top benefits of edge computing.
The top benefits cited all related to immediate access to data: having data accessible in real time so it can be processed and analyzed on-site right away, eliminating the delays caused by data transfers, and having 24/7 access to reliable data, which opens the door to continuous analysis and quick results. A common theme was actionable local analysis.
Cost as a benefit of edge computing did pop up here and there – especially in the context of reducing cloud usage and related costs. However, consistent with other research we’ve done, cost wasn’t cited as a primary driver or benefit of edge computing. Rather, the drivers are mostly data access and related gains.
Hybrid cloud, data are drivers
Why are we seeing this increased emphasis on edge computing and associated local data processing? Our interviews and other research suggest that two reasons are probably particularly important.
The first is that, 15 years after the first public cloud rollout, IT organizations have increasingly adopted an explicit hybrid cloud strategy. Red Hat’s 2022 Global Tech Outlook survey found it to be the most common cloud strategy among the more than 1,300 IT decision-maker respondents.
Public cloud-first was the least common cloud strategy and was down a tick from the previous year’s survey. This is consistent with data we’ve seen in other surveys.
None of this is to say that public clouds are in any way a passing fad. But edge computing has helped focus attention on computing (and storage) out at the various edges of the network rather than centralized entirely at a handful of large public cloud providers. It has added a rationale for why public clouds will not be the only place where computing happens.
The second reason is that we’re doing more complex and more data-intensive tasks out at the edge. Our interviewees told us that one main trigger for implementing edge computing is the need to embrace digital transformation and implement solutions such as IoT, AI, connected cars, machine learning, and robotics. These applications often have a cloud component as well. For example, it’s common to train machine-learning models in a cloud environment but then run them at the edge.
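As a minimal sketch of that split (not taken from any specific deployment; scikit-learn, the model choice, and the file name are assumptions for illustration), a model can be trained centrally, serialized, and then loaded for local, low-latency inference at an edge site:

```python
# Cloud side: train on aggregated historical data, then export the model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import joblib

X, y = make_classification(n_samples=1_000, n_features=8, random_state=42)
model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X, y)
joblib.dump(model, "anomaly_model.joblib")  # artifact shipped to edge sites

# Edge side: load the shipped artifact and score local readings in place.
edge_model = joblib.load("anomaly_model.joblib")
sensor_reading = X[:1]  # stand-in for a live local measurement
decision = edge_model.predict(sensor_reading)
print(f"local decision: {decision[0]}")  # acted on immediately, no round trip
```

The heavy, data-hungry training step runs where compute and historical data are plentiful, while the latency-sensitive prediction step runs next to the sensors producing the data.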
We’re even starting to see Kubernetes-based cluster deployments at the edge using a product such as Red Hat OpenShift. Doing so not only brings scalability and flexibility to edge deployments but also provides a consistent set of tools and processes from the data center out to the edge.
It’s not surprising that data locality and latency are important characteristics of a hybrid cloud of which an edge deployment may be a part. Observability and monitoring matter too. So do provisioning and other aspects of management. And yes, bandwidth, along with the reliability of links, plays into the mix. That’s because a hybrid cloud is a form of distributed system: if something matters in any other computer system, it probably matters in a distributed system too. Maybe even more so.