In their rush to the cloud, companies can easily end up with significant waste by taking a “best efforts” approach to aligning cloud instance types and sizes to workloads.
Businesses, particularly those that are relatively new to the cloud, often overprovision resources to ensure performance or avoid running out of capacity. The result is that their workloads may consume a fraction of the resources being paid for. Even organizations experienced with cloud infrastructure can waste 20% to 30% of their cloud spending on capacity that simply isn’t needed.
Compounding the challenge is the fact that the major cloud service providers (CSPs) offer as many as 600 different service options based on factors such as processor type, memory configuration, storage, networking, and hypervisor. Understanding all of these options is impractical, if not impossible, for humans, let alone determining the best fit for a given workload, especially at scale. What’s more, both the cloud options and the workloads being hosted change all the time.
Complexity is amplified by the fact that 90% of enterprises use multiple clouds, according to IDC. Relying on people to manually select the right cloud instances is a risky proposition, as even small mistakes can add up to big unanticipated costs. Analytics that take the guesswork out of these decisions by determining the best selections, and ultimately automating instance configuration, are key. IDC research shows that capacity optimization has emerged as a top priority (alongside cost management) within cloud-based organizations.
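To make the idea concrete, here is a minimal, hypothetical sketch of what such analytics do at their core: score a catalog of instance types against a workload’s measured requirements and pick the cheapest instance that fits with some headroom. All names and numbers below are illustrative, not any vendor’s actual catalog or algorithm.

```python
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpus: int
    memory_gib: float
    hourly_usd: float

@dataclass
class Workload:
    peak_vcpus: float
    peak_memory_gib: float

def best_fit(catalog, workload, headroom=1.2):
    """Return the cheapest instance whose capacity covers peak demand
    plus a safety headroom; None if nothing in the catalog fits."""
    candidates = [
        i for i in catalog
        if i.vcpus >= workload.peak_vcpus * headroom
        and i.memory_gib >= workload.peak_memory_gib * headroom
    ]
    return min(candidates, key=lambda i: i.hourly_usd) if candidates else None

# Hypothetical catalog entries; real CSP catalogs run to hundreds of options.
catalog = [
    InstanceType("large",   2,  8.0, 0.10),
    InstanceType("xlarge",  4, 16.0, 0.20),
    InstanceType("2xlarge", 8, 32.0, 0.40),
]
workload = Workload(peak_vcpus=2.5, peak_memory_gib=10.0)
print(best_fit(catalog, workload).name)  # → xlarge
```

The real problem is harder, of course: hundreds of instance families, shifting prices, and changing workloads mean the matching has to be continuous rather than a one-time calculation.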
Although the major CSPs all offer free onboarding and optimization functionality and services, they are typically quite basic with respect to analytics and focus on purchase plans and billing optimizations rather than configuration management. The free services also lack granular controls and detailed policies, and don’t explain how particular recommendations are reached.
Reducing costs is also more than just a matter of choosing instance types. By leveraging features within the hardware, customers can achieve higher performance and reduce their instance sizes, reduce the number of instances required, or avoid paying for some instances entirely. For example, container images that are optimized to leverage specific processor features can significantly improve throughput in containerized environments, without the need for additional CPU power.
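As a rough illustration of the feature-detection idea behind such optimized images (a simplified sketch, not how any particular product builds its containers), a build or launch script might probe the host CPU’s flags and select an instruction-set-optimized binary, falling back to a generic build when a feature is absent. The binary names here are hypothetical.

```python
def cpu_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the set of CPU feature flags on Linux; empty set elsewhere."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def pick_variant(flags):
    """Choose the most specific build variant the host CPU can run."""
    if "avx512f" in flags:
        return "app-avx512"   # hypothetical binary optimized for AVX-512
    if "avx2" in flags:
        return "app-avx2"     # hypothetical binary optimized for AVX2
    return "app-generic"      # portable fallback

print(pick_variant(cpu_flags()))
```

The payoff is that the same container can exploit vector instructions where the underlying instance supports them, which is one way a smaller instance can deliver the required throughput.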
Intel® Cloud Optimizer (ICO) by Densify illustrates how automation can be applied to cloud instance choice and configuration to achieve savings at all levels. It is a powerful matching engine that identifies the best-fit provider instances for the customer’s workloads, as well as optimal hardware and software configurations for each instance.
Configurable policies mean that ICO can be tuned to the characteristics of each unique workload: for example, a company may want to optimize for cost in a development environment but for performance in production. The software enables this fine-grained management based on utilization-level targets specified by the customer.
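The utilization-target idea can be sketched as follows. This is a hypothetical illustration of the concept, not ICO’s actual policy model or algorithm: given observed CPU utilization and a per-environment target, recommend a vCPU count that brings utilization toward the target, so development environments are sized more aggressively than production.

```python
import math

# Hypothetical per-environment policies: dev tolerates higher utilization
# (cheaper), while production keeps more headroom (safer for performance).
POLICIES = {
    "dev":  {"target_cpu_util": 0.80},
    "prod": {"target_cpu_util": 0.50},
}

def recommend_vcpus(current_vcpus, observed_util, environment):
    """Recommend a vCPU count so that the observed load would land
    near the environment's target utilization."""
    target = POLICIES[environment]["target_cpu_util"]
    used_vcpus = current_vcpus * observed_util  # absolute CPU demand
    return max(1, math.ceil(used_vcpus / target))

# An 8-vCPU instance averaging 30% busy:
print(recommend_vcpus(8, 0.30, "dev"))   # → 3 (downsizes aggressively)
print(recommend_vcpus(8, 0.30, "prod"))  # → 5 (keeps more headroom)
```

The same measured workload yields different recommendations under different policies, which is exactly the kind of per-environment tuning the customer-specified targets enable.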
Optimization is even more important for organizations that promote distributed decision-making, empowering staff such as developers to choose their own cloud instance types. FinOps, an emerging management practice that promotes shared responsibility for cloud computing infrastructure and costs, brings structure to this approach, while cloud optimization tools make detailed tracking and accountability possible. Staff can make choices quickly and deploy functionality for the business, while the organization has confidence that analytics will show where optimization can happen after the fact.
IDC research found that 59% of IT automation projects pay off in less than 12 months. Given that the research firm also found that CEOs were more concerned with controlling IT costs than any other C-level executive, applying automation to cloud resource management just makes sense.