From telecommunications networks and the manufacturing floor to financial services, autonomous vehicles, and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed, and analyzed.
At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end users. While data has traditionally lived in the datacenter or cloud, there are benefits and innovations to be realized by processing the data these devices generate closer to where it is produced.
This is where edge computing comes in.
What is edge computing?
Edge computing is a distributed computing model in which data is captured, stored, processed, and analyzed at or near the physical location where it is created. By pushing computing out closer to these locations, users benefit from faster, more reliable services while companies benefit from the flexibility and scalability of hybrid cloud computing.
Edge computing vs. cloud computing
A cloud is an IT environment that abstracts, pools, and shares IT resources across a network. An edge is a computing location at the edge of a network, along with the hardware and software at those physical locations. Cloud computing is the act of running workloads within clouds, while edge computing is the act of running workloads on edge devices.
4 benefits of edge computing
As the number of computing devices has grown, network capacity simply hasn’t kept pace with demand, making applications slower or more expensive to host centrally.
Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible.
1. Improve performance
When applications and data are hosted on centralized datacenters and accessed via the internet, speed and performance can suffer from slow network connections. By moving applications and data out to the edge, network-related performance and availability issues are reduced, although not entirely eliminated.
2. Place applications where they make the most sense
By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations with intermittent connectivity, such as geographically remote offices and vehicles like ships, trains, and airplanes.
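Where connectivity is intermittent, a common pattern is to buffer data locally and forward it when a link is available. Below is a minimal, illustrative store-and-forward sketch in Python; the endpoint URL, database file, and table layout are assumptions for illustration, not part of any Red Hat product.

```python
import json
import sqlite3
import urllib.request

# Store-and-forward sketch for sites with intermittent connectivity.
# Readings are persisted locally and pushed upstream whenever a link exists.
DB_PATH = "edge_buffer.db"                              # hypothetical local buffer
CENTRAL_ENDPOINT = "https://central.example.com/ingest"  # hypothetical endpoint

def init_db():
    con = sqlite3.connect(DB_PATH)
    con.execute("CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, payload TEXT)")
    con.commit()
    return con

def record_locally(con, reading: dict):
    """Always write to local storage first, so nothing is lost while offline."""
    con.execute("INSERT INTO readings (payload) VALUES (?)", (json.dumps(reading),))
    con.commit()

def forward_buffered(con):
    """Try to push buffered readings upstream; keep them if the network is down."""
    for row_id, payload in con.execute("SELECT id, payload FROM readings").fetchall():
        req = urllib.request.Request(
            CENTRAL_ENDPOINT, data=payload.encode(), headers={"Content-Type": "application/json"}
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            return  # still offline; retry on the next pass
        con.execute("DELETE FROM readings WHERE id = ?", (row_id,))
        con.commit()

con = init_db()
record_locally(con, {"site": "plant-7", "temperature_c": 71.3})  # example reading
forward_buffered(con)
```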
3. Simplify meeting regulatory and compliance requirements
Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in datacenters or the cloud.
With edge computing, however, data can be collected, stored, processed, managed, and even scrubbed in place, making it much easier to meet different locales’ regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from video before the footage is sent back to the datacenter.
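As an illustration of that kind of in-place scrubbing, here is a minimal Python sketch that blurs detected faces in video frames before anything leaves the site. It uses OpenCV’s bundled Haar cascade face detector; the file path is hypothetical, and a production system would likely use a more robust detector.

```python
import cv2

# Blur faces in a locally captured video so that scrubbed frames,
# not raw footage, are what gets forwarded to the datacenter.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture("camera_feed.mp4")  # hypothetical local recording
scrubbed_frames = []

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Blur each detected face region in place.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (51, 51), 0)
    scrubbed_frames.append(frame)

capture.release()
# Only the scrubbed frames would be written out and sent back to the datacenter.
```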
4. Enable AI/ML applications
Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.
But AI/ML applications often require processing, analyzing, and responding to enormous quantities of data, which can’t reasonably be achieved with centralized processing because of network latency and bandwidth constraints. Edge computing allows AI/ML applications to be deployed close to where the data is collected, so analytical results can be obtained in near real time.
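As a rough sketch of what edge inference can look like, the following Python example runs a pre-trained model locally with the TensorFlow Lite runtime and only reports flagged results upstream. The model file, input shape, output layout, and threshold are all assumptions for illustration.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Run a pre-trained model on the edge device so only results,
# not raw sensor data, travel over the network.
interpreter = Interpreter(model_path="anomaly_model.tflite")  # hypothetical model
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def score_window(sensor_window: np.ndarray) -> float:
    """Run one window of sensor readings through the on-device model."""
    data = sensor_window.astype(np.float32).reshape(input_details[0]["shape"])
    interpreter.set_tensor(input_details[0]["index"], data)
    interpreter.invoke()
    # Assumes the model emits a single anomaly score.
    return float(interpreter.get_tensor(output_details[0]["index"])[0][0])

# Only readings flagged as anomalous are reported back to the cloud,
# keeping bandwidth use and response time low.
score = score_window(np.random.rand(1, 128))  # illustrative input shape
if score > 0.9:  # illustrative threshold
    print("anomaly detected, reporting upstream")
```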
Red Hat’s approach to edge computing
Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability, and manageability.
Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match a combination of hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations.
The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.
The Red Hat Edge portfolio includes:
- Red Hat Enterprise Linux and Red Hat OpenShift, which are designed to be the common platform for all of an organization’s infrastructure from core datacenters out to edge environments.
- Red Hat Advanced Cluster Management for Kubernetes and Red Hat Ansible Automation Platform, which provide the management and automation needed to drive visibility and consistency across the organization’s entire domain.
- The Red Hat Application Services portfolio, which provides critical integration for enterprise applications while also building a robust data pipeline.
The Red Hat Edge portfolio allows organizations to build and manage applications across hybrid, multi-cloud, and edge locations, increasing application innovation, speeding up deployment, and improving overall DevSecOps efficiency.
To learn more about edge computing, visit the Red Hat website.