Decoupled Storage: Server-Side Performance without the Hyperconvergence Headaches

As the number of Virtual Machines (VMs) in a data centre grows, the load placed on back-end shared storage increases and performance bottlenecks arise. This is driving a surge in demand for solid state storage (i.e. flash), which can improve storage access times by 1000x or more. By Nick Suh, Head of Product Marketing, PernixData.


But another performance challenge also exists – network latency. Every transaction going to and from a VM must traverse various chokepoints, including the Host Bus Adapter (HBA) on the server, storage fabric, and storage controllers. To address this, many companies are placing active data on the host instead of on back-end storage to shorten the distance (and time) for each read/write operation. 
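To see why each hop matters, the toy latency budget below adds up per-hop latencies for a read served by a back-end array versus one served from flash on the host. Every figure is a hypothetical assumption chosen only to illustrate the argument, not a measurement of any particular product.

```python
# Toy latency budget for a single read, in microseconds.
# All numbers are hypothetical assumptions for illustration only.

ARRAY_READ_PATH = {
    "hypervisor I/O stack": 50,
    "server HBA": 20,
    "storage fabric (switches, cabling)": 100,
    "storage controller": 200,
    "array media access": 500,
}

HOST_FLASH_READ_PATH = {
    "hypervisor I/O stack": 50,
    "local flash access": 100,
}

def total_us(path):
    """Sum the per-hop latencies for one read."""
    return sum(path.values())

print(f"Read via shared array: {total_us(ARRAY_READ_PATH)} us")
print(f"Read via host flash  : {total_us(HOST_FLASH_READ_PATH)} us")
```

Under these assumed numbers, cutting out the fabric and controller hops removes most of the round-trip time, which is the basic rationale for keeping active data on the host.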

Hyperconvergence addresses this by putting tiered, solid-state storage inside the server. In this respect, it brings incremental performance gains to several applications, such as VDI. But architecturally it introduces various drawbacks, particularly around flexibility, cost, and scale. Perhaps most significantly, it causes substantial disruption to the data centre.

Is there a better way to get the advantages of server-side flash without the hyperconvergence hangover? The answer is “decoupled” storage.


Hyperconvergence Hiccups

As mentioned above, hyperconvergence improves VM performance by leveraging server flash for key storage I/O functions. But combining the functions conventionally provided by two discrete systems – servers and storage – requires a complete overhaul of the IT environment currently in place. It creates new business processes (e.g. new vendor relationships, deployment models, upgrade cycles, etc.) and introduces new products and technology to the data centre, which is disruptive for any non-greenfield deployment. The storage administrator, for example, may need to re-implement data services (e.g. snapshots, cloning, replication, etc.), restructure processes for audit/compliance, and undergo training on a new user interface and/or tool, among many other changes to the entrenched workflows used for daily operations.

Another major compromise imposed by hyperconvergence stems from the modularity often touted as one of its key benefits. Because the de facto way to scale a hyperconverged environment is simply to add another appliance, the administrator cannot allocate resources precisely to meet a desired level of performance without also adding capacity. This may be acceptable for applications where performance and capacity typically go hand in hand, but it is an inefficient way to support others, such as virtualized databases, where that is not the case.

For instance, consider a service supported by a two-node cluster of hyperconverged systems, whose performance needs have grown while its capacity needs have not. To reach the desired performance threshold, an additional appliance must be added. While the third box delivers the required performance, it forces the end user to also buy unneeded capacity. This overprovisioning is unfortunate for several reasons: (a) it is an unnecessary hardware investment; (b) it can require superfluous software licenses; (c) it consumes valuable data centre real estate; and (d) it increases environmental (i.e. power and cooling) load.
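To make the overprovisioning penalty concrete, the sketch below compares coupled scaling (adding whole appliances) with decoupled scaling (adding server flash only). The appliance specifications, flash card figures, IOPS targets and prices are purely illustrative assumptions, not vendor data.

```python
# Back-of-the-envelope comparison of coupled vs. decoupled scaling.
# All figures (IOPS, TB, prices) are hypothetical assumptions for the example.

APPLIANCE = {"iops": 50_000, "capacity_tb": 20, "cost": 40_000}   # one hyperconverged node (assumed)
FLASH_CARD = {"iops": 50_000, "cost": 4_000}                      # one server-side flash device (assumed)

def coupled_scale_out(required_iops, required_tb):
    """Hyperconverged: performance and capacity can only grow together."""
    nodes = max(
        -(-required_iops // APPLIANCE["iops"]),       # ceiling division for performance
        -(-required_tb // APPLIANCE["capacity_tb"]),  # ceiling division for capacity
    )
    return {
        "nodes": nodes,
        "cost": nodes * APPLIANCE["cost"],
        "stranded_tb": nodes * APPLIANCE["capacity_tb"] - required_tb,
    }

def decoupled_scale_out(required_iops, existing_nodes=2):
    """Decoupled: add flash for performance; capacity stays on the shared array."""
    extra_iops = max(0, required_iops - existing_nodes * APPLIANCE["iops"])
    cards = -(-extra_iops // FLASH_CARD["iops"])
    return {"extra_flash_cards": cards, "cost": cards * FLASH_CARD["cost"], "stranded_tb": 0}

# The two-node example from the text: performance demand grows to 150k IOPS,
# but the service still only needs 30 TB of capacity.
print(coupled_scale_out(150_000, 30))    # third appliance purchased, unused TB stranded
print(decoupled_scale_out(150_000))      # one extra flash device, no stranded capacity
```

Under these assumptions, the coupled model buys a whole third appliance (and roughly 30 TB of capacity the service never asked for), while the decoupled model adds only the performance component.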

Finally, hyperconverged systems restrict choice. They are typically delivered by a vendor who requires the use of specific hardware (and accompanying software for data services), or they are packaged to adhere to precisely defined specifications that preclude customisation. In both scenarios, deployment options are limited. Organisations with established dual-vendor sourcing strategies, or architects wanting a more flexible way to design their infrastructure, will need to make significant concessions to adopt this rigid model.


The New “Decoupled” Paradigm

A new “decoupled” architecture has emerged to strike the right balance between innovation and disruption. Like hyperconvergence, it puts storage performance in the server, using high-speed server media such as flash (and RAM). But unlike hyperconvergence, it leaves capacity and data services in shared storage arrays.


By decoupling storage performance from capacity, several benefits can be achieved:

· Fast VM performance, by putting storage intelligence in high-speed server media. Unlike hyperconverged systems, this can be flash, RAM or any other technology that emerges in the coming months and years.

· No vendor lock-in, as decoupled architectures leverage any third-party server and storage hardware.

· Cost-effective scale-out. Additional storage performance can be added simply by installing more server media. Capacity is handled completely separately, eliminating expensive over-provisioning.

· No disruption. Decoupled software is installed inside the hypervisor with no changes to existing VMs, servers or storage.

· Easy technology adoption. With complete hardware flexibility, you can ride the server technology curve and leverage the latest media for fast VM performance (e.g. SSD, PCIe, NVMe, DRAM, etc.).

Once in place, a decoupled storage architecture becomes a strategic platform to better manage future growth. Because performance and capacity are isolated from one another in this structure, they can be tuned independently to precisely meet the user requirements.


Server-Side Performance, Without the Headaches

IT operators often face a tension between the desire to gain a competitive edge by adopting new technology and the constant need to mitigate risk. Often, one has to be prioritised above the other. In the case of hyperconvergence, pushing the innovation envelope means compromising flexibility and accepting institutional changes to fundamental operating procedures in the data centre. Decoupled storage architectures, on the other hand, offer the rare opportunity to take advantage of two major industry trends – data locality and faster storage media – to speed virtualized applications to unprecedented levels in a completely non-intrusive manner; in essence, all the performance benefits of hyperconvergence (and more) without any of the disruption.
