The logical progression from the virtualization of servers and storage in vSANs was hyperconvergence. By abstracting the three elements of storage, compute, and networking, hyperconverged infrastructure (HCI) promised data centers near-limitless control over their resources. That ideal suited hyperscale operators, which needed to grow to meet rising demand and to modernize their infrastructure to stay agile. Hyperconvergence offered elasticity and scalability on a per-use basis for multiple clients, each of whom could deploy multiple applications and services.
There are clear caveats in the HCI world: limitless control is all well and good, but infrastructure realities such as a lack of local storage or slow networking hardware restricting I/O will always define the hard limits of what is possible. Furthermore, some strictures imposed by HCI vendors limit the choice of hypervisor or constrain hardware to approved kit. Worries about vendor lock-in also surround the black-box nature of HCI-in-a-box appliances. The irony is that HCI promises consolidation yet can require bringing in another vendor for the HCI component itself: the needed presence of Cisco HyperFlex or Nutanix AOS in the stack adds another company to the roster of suppliers.
The elephant in the room for converged and hyperconverged infrastructures is, of course, the cloud. It’s something of a cliché in the technology press to mention the speed at which tech develops, but cloud and cloud-native technologies like Kubernetes are showing their capabilities and future potential in the cloud, the data center, and at the edge. HCI was presented first and foremost as a data center technology and was, at the time, clearly the sole preserve of very large organizations with their own facilities. Those facilities are effectively closed loops, with limits set by physical resources.
However, cloud facilities are now available from hyperscalers at prices attractive to a much broader market: perhaps not mom-and-pop shops, but just about everyone else. Market analyses such as the HCI Market report and Global HCI report predict that the market for HCI solutions will grow significantly, with year-on-year growth generally put at just under 30%. Vendors are selling cheap(er) appliances and lower license tiers to try to mop up the midmarket, and hyperconvergence technologies are beginning to work with hybrid and multi-cloud topologies. The latter trend is demand-led: after all, if an IT team wants to consolidate its stack for efficiency and easy management, any consolidation has to be all-encompassing and include local hardware, containers, multiple clouds, and edge installations. That ability also implies inherent elasticity and, by proxy, a degree of future-proofing baked in.
The cloud-native technologies around containers are well beyond flash-in-the-pan status. The Cloud Native Computing Foundation (CNCF) has published statistics [PDF] that show the burgeoning use of the technology: even last year (2021), developers at large organizations reported adoption rates of 83% for containers and 78% for Kubernetes. Portable, scalable, platform-agnostic, and easily configurable, containers are the natural next step in the evolution of virtualization. CI/CD workflows increasingly have microservices at their core. So, what of hyperconvergence in these evolving computing environments? Evolution means change, but not necessarily overnight change. The fashionable term is “fog computing,” which neatly describes a blurring of the lines between hybrid, data center, multi-cloud, and edge technologies. Containers have established a firm foothold in edge environments (they are well suited to settings where resources are scarce or transient), but they are not a solo act. HCI solutions have to handle microservices alongside full-blown VMs, IoT devices, and bare metal, everywhere. It can be done with “traditional” hyperconvergence, but at a cost.
Open source is, of course, at the heart of container-based development, and while that doesn’t preclude Docker, Podman, Kubernetes, et al. sitting easily alongside black-box appliances, there may never be the level of integration and central control that operations staff would like to see. There will always be, for instance, a disconnect between managing VM clusters and managing containerized applications and services. In practical terms, open APIs and software like KubeVirt solve some of those issues for developers, but they are not ideal at scale and will limit the type of elasticity and strategic responsiveness IT departments are trying to offer the business. And, of course, nagging doubts over vendor lock-in and cost may remain, too, plus the sense that HCI solutions are a square peg trying to fit into a round…container.
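To illustrate what KubeVirt’s bridging of the two worlds looks like in practice, here is a minimal sketch of a KubeVirt VirtualMachine manifest: the VM is declared as a Kubernetes resource and managed with the same tooling as containers. The resource name and disk image below are illustrative, not taken from any particular deployment.

```yaml
# Sketch of a KubeVirt VirtualMachine resource (names/image are illustrative).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm            # hypothetical VM name
spec:
  running: false           # create the VM stopped; start it on demand
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # illustrative image
```

With KubeVirt installed in a cluster, such a manifest can be applied with `kubectl apply -f vm.yaml` and the VM started with KubeVirt’s `virtctl` client, so VMs and containers share one declarative control plane. This is the kind of convergence the text refers to, though, as noted, stitching it together by hand is harder to operate at scale than an integrated platform.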
Every open-source application was born out of the need to scratch an itch, whether felt by a globe-straddling organization or an individual. German multinational SUSE has just launched Harvester 1.0, the world’s first 100% open-source hyperconverged infrastructure solution. It was designed to bring virtual machines into the scope of a Kubernetes-based workflow, and it provides a single point of creation, monitoring, and control for an entire stack: something there was clearly a need for. And because containers effectively run anywhere, from tiny Arm SoC boards up to supercomputing clusters, the technology is perfect for organizations with workloads spread over multiple clouds and local instances, that is, hybrid or foggy setups.
Harvester shows how organizations can deploy HCI without proprietary, closed solutions, using enterprise-grade open-source software that slots right into a modern CI/CD pipeline. The transition from proof-of-concept tests to full production is aided by SUSE’s engineering support teams, and, helping the process further, Harvester integrates neatly with Rancher. Harvester is ready for deployment at the edge (or anywhere else) as a seamless addition to the developer’s and sysadmin’s toolkit.
Read More from This Article: Harvesting the Benefits of Cloud-Native Hyperconvergence