Open clouds power innovation, agility, and efficiency

Cloud computing makes headlines every day, with predictions about the percentage of workloads moving to large public cloud providers. While there is real momentum in the IT industry towards public cloud environments, the choice is not automatic. Several factors must be weighed closely before moving critical workloads to a public cloud provider. Different computing environments serve different needs within IT organisations, and no single environment is ideal for every enterprise.


Large public cloud providers certainly offer a broad range of services and allow specific workloads to scale to very large computing or storage requirements, covering many different workloads and use cases. However, there are many cases where a remote public cloud provider cannot satisfy the needs of an IT department. An on-prem data centre (even if operated as an internal cloud) is often the better fit in these cases: it can be architected and deployed to serve its users, whether in-house or external, more effectively.

Open Standards

While the instruction set underlying most clouds is fairly (though not completely) standardised, public cloud providers still impose restrictions on users. For example, there are limited choices of operating systems, accelerated computing hardware, and other parts of the hardware infrastructure or software environment, and these may not match what an enterprise needs for maximum performance or optimised workflows.

Thus, a public cloud could be considered proprietary, as IT organisations must use the hardware and software given to them, while an open cloud allows for complete customisation. A cloud built on open standards lets IT administrators create a cloud computing environment in which they can customise software and even physically modify (within warranty limits) servers and storage systems to suit their needs, which is impossible when using a public cloud provider.

The components that need to be considered when creating an efficient, high-performing cloud include:

Hardware

While there are several choices of CPU for compute servers, the dominant CPUs in use today implement the x64 instruction set. This standardisation allows a wide range of applications to run without modification. A leader in this category has been the 2nd Generation Intel Xeon Scalable processor, and the newer 3rd Gen Intel Xeon processors eclipse the previous generation's performance. The question then becomes: if the software is modified to take advantage of the new capabilities, how certain is the IT department that these new servers will be online and available at a reasonable price?

However, since several CPUs fit this high-level requirement for a specific instruction set, the different options can make a sizeable difference in native performance. Although virtualisation and containerisation technologies can abstract away the underlying differences, optimally matching a CPU to the application will increase performance and potentially decrease energy consumption.
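As a minimal sketch of what that matching can look like in practice, the snippet below inspects the CPU feature flags a Linux host exposes and picks an application build accordingly. The build names and the choice of flags (AVX2, AVX-512) are purely illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: inspect /proc/cpuinfo on a Linux host to see which
# instruction-set extensions are available before assigning a workload
# that benefits from them. The flags checked (avx2, avx512f) and the
# build names are illustrative placeholders.

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def pick_build(flags):
    """Choose a hypothetical application build based on available features."""
    if "avx512f" in flags:
        return "app-avx512"   # widest vector units on this host
    if "avx2" in flags:
        return "app-avx2"
    return "app-generic"      # portable baseline build

if __name__ == "__main__":
    print("Selected build:", pick_build(cpu_flags()))
```

The same idea applies at the orchestration layer: schedulers can use discovered hardware features as placement hints so that optimised binaries land only on the servers that can exploit them.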

Software

The software stack required for a smooth-running cloud environment can be complicated and highly specific to an individual mix of workloads. The underlying libraries and management software requirements are almost guaranteed to differ from company to company. Without a wide range of choices that can easily be installed and configured on the underlying hardware, a cloud may not serve the needs of the users or system administrators. In addition, not all middleware and supporting software will run optimally on all CPUs. Choices abound at every layer of the software stack, and an open computing environment is key to creating an efficient cloud computing system.
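One small, hedged example of why the library layer matters: the same Python numerical code can perform very differently depending on which accelerated maths backend it was built against. The check below assumes NumPy is installed; the matrix size is arbitrary.

```python
# Check which BLAS/LAPACK backend (e.g. MKL, OpenBLAS) this NumPy install
# uses, then time a large matrix multiply on this host. Identical
# application code can behave very differently across backends and CPUs.
import time
import numpy as np

np.show_config()          # prints the build/backend information

n = 2000                  # arbitrary size for a rough throughput check
a = np.random.rand(n, n)
b = np.random.rand(n, n)
t0 = time.perf_counter()
np.dot(a, b)
print(f"{n}x{n} matmul took {time.perf_counter() - t0:.2f} s")
```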

Networking

Many of today's most innovative applications require clusters of servers, sometimes working in close coordination with each other to solve a complex problem, for instance in large-scale HPC simulations.

In other scenarios, servers perform simpler tasks, with each server given a certain amount of work to do, completely independently of the others. The networking between servers needs to match the application requirements in both latency and bandwidth. An open, standards-based cloud service needs to be designed and implemented with these networking requirements defined up front.

Depending on the workloads and applications in use, different networking solutions may be required, and the application should not be locked into an environment that is not optimal.
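Before committing a latency-sensitive workload to a given interconnect, it is worth measuring what the network actually delivers. The sketch below is a simple TCP echo probe between two nodes; the port, message size, and round count are illustrative assumptions, and a plain TCP test only approximates what specialised fabrics provide.

```python
# Minimal round-trip latency probe between two hosts over TCP.
# Run `python probe.py server` on one node and
# `python probe.py client <server-host>` on another.
import socket
import sys
import time

PORT = 5001        # placeholder port
ROUNDS = 1000      # number of round trips to average

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)          # echo each message straight back

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(ROUNDS):
            conn.sendall(b"x" * 64)
            conn.recv(64)
        rtt = (time.perf_counter() - start) / ROUNDS
        print(f"average round trip: {rtt * 1e6:.1f} microseconds")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```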

Business results

An open cloud environment has numerous benefits for organisations and enterprises that wish to control their IT infrastructure. A primary benefit is higher efficiency of the infrastructure: less energy is used when workloads can be matched more closely to the computing and storage hardware.

In addition, more work gets performed at a lower cost when a right-sized infrastructure is closely matched to the needs of the enterprise. Another benefit of open clouds is that an internal cloud, for example, can be adapted quickly to changing workloads. This is advantageous whether there is a need to scale up or down with business cycles. Newer, more capable hardware can be integrated quickly, and the workloads that need the increased performance can easily be assigned to the new servers or storage systems.

Systems have many tuning parameters, and applications run faster when those parameters are set correctly. Having direct control over these parameters leads to better utilisation of the infrastructure components. New systems with multiple sockets and many cores per socket, combined with the latest GPUs, can be optimised to deliver results much faster than previous generations of systems.
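As an illustration of the kind of knobs this refers to, the sketch below reports a few common Linux tuning parameters that an administrator with direct control of the hardware can inspect and adjust. The paths are standard sysfs/procfs locations; which values are appropriate depends entirely on the workload, so no "correct" settings are implied here.

```python
# Report a few Linux tuning parameters an administrator might review when
# optimising servers for a specific workload. Paths are standard Linux
# locations; desired values are workload-dependent and not suggested here.
from pathlib import Path

CHECKS = {
    "CPU frequency governor":
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
    "Transparent hugepages":
        "/sys/kernel/mm/transparent_hugepage/enabled",
    "Swappiness":
        "/proc/sys/vm/swappiness",
}

for name, path in CHECKS.items():
    p = Path(path)
    value = p.read_text().strip() if p.exists() else "not available"
    print(f"{name:28s} {value}")
```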

As new technologies are introduced and made available, on-prem or open cloud data centres can integrate them quickly, sometimes even before official product announcements. This gives IT administrators the ability to test new hardware with real-world applications and meet demanding SLAs from their user communities. Decisions can then easily be made on whether to invest in new hardware as part of ongoing refresh cycles. New hardware purchased for an internal data centre has a known cost that can be factored into the normal budgeting process, and the TCO can be calculated and compared with a public or proprietary cloud, where costs can be unexpected.
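A back-of-the-envelope version of that TCO comparison is sketched below: an on-prem fleet amortised over a refresh cycle versus comparable on-demand cloud instances. Every figure is a placeholder assumption; a real comparison would use actual quotes, utilisation data, and discount structures.

```python
# Rough TCO comparison: on-prem servers amortised over a refresh cycle
# versus equivalent on-demand cloud instances. All numbers are placeholders.

servers         = 20
server_price    = 12_000     # purchase price per server
years           = 4          # refresh cycle length
opex_per_year   = 40_000     # power, cooling, space, admin for the fleet

cloud_rate_hour = 1.10       # on-demand price per comparable instance
hours_per_year  = 8_760
utilisation     = 0.70       # fraction of hours instances actually run

on_prem_tco = servers * server_price + years * opex_per_year
cloud_tco   = servers * cloud_rate_hour * hours_per_year * utilisation * years

print(f"On-prem TCO over {years} years: {on_prem_tco:,.0f}")
print(f"Cloud TCO over {years} years:  {cloud_tco:,.0f}")
```

The point of such a calculation is less the specific numbers than the fact that the on-prem side is known and budgetable, while the cloud side depends heavily on utilisation and usage patterns.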

Return on investment (ROI) increases as new workloads are assigned to new technologies and the systems are kept busy. An open cloud approach based on open standards allows for a best-of-breed combination of hardware and software, increasing the ROI of the IT infrastructure.

Clouds come in many forms and delivery mechanisms. While there is much discussion about which form of cloud to use, the higher-level concern should be whether to implement a cloud based on open standards or to use a proprietary one. The cloud computing market, estimated at $364B in 2022 according to a recent Gartner analyst report, is evolving. Different enterprises will need to determine what is important to them, not just today but moving forward as well.
