10 steps to avoiding storage virtualisation pitfalls

By Rebecca Thompson, VP Marketing, Avere Systems.


Watching the decade-long growth of virtualisation within the server space has led to the assumption that it is nothing but a good thing. By and large, the technology is fantastic for improving the utilisation and efficiency of computing, but virtualisation places great strains on storage. Rebecca Thompson, VP Marketing for Avere Systems, looks at the issues and offers ten tips on how clever use of Edge filers deals with the perils of virtualisation.
Prior to the advent of virtualisation, servers typically placed a predictable load on storage. Each server ran a dedicated application, and the resulting load on storage consisted of a well-understood pattern of I/O against the local system’s disks. Rotating hard disks handled this type of data access well, and everyone was happy.


However, with the arrival of server virtualisation and many virtual servers running on a single physical server, the formerly well-understood data accesses of each virtual server are broken up and mixed with those of all the other virtual servers. The end result of this “blender effect” is that random access patterns put great demands on the drive arms of rotating hard disks. Desktop virtualisation (VDI) adds a further challenge for storage: boot storms, a particularly frustrating performance drain that occurs when the storage controller underpinning VDI sessions is simultaneously hit with many more service requests than it can handle. Consequently, workers can find themselves waiting 15 to 20 minutes for their machines to boot, which hurts productivity.
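To make the blender effect concrete, the toy sketch below (a hypothetical illustration, not Avere code) interleaves the sequential block streams of several virtual machines and shows how the shared array ends up servicing an almost random sequence of block addresses.

```python
import random

def vm_stream(vm_id, start_block, length):
    """Sequential block addresses a single VM would issue on its own."""
    return [(vm_id, start_block + i) for i in range(length)]

# Hypothetical example: four VMs, each reading its own contiguous region.
streams = [vm_stream(v, start_block=v * 100_000, length=8) for v in range(4)]

# The hypervisor multiplexes all VMs onto one shared datastore, so the
# array sees the streams interleaved in whatever order requests arrive.
blended = [req for group in zip(*streams) for req in group]
random.shuffle(blended)  # arrival order is effectively unpredictable

def seek_distance(requests):
    """Total block-address jump between consecutive requests -
    a rough proxy for how hard the drive arms have to work."""
    addrs = [block for _, block in requests]
    return sum(abs(b - a) for a, b in zip(addrs, addrs[1:]))

print("per-VM sequential seek distance:", seek_distance(streams[0]))
print("blended seek distance:          ", seek_distance(blended))
```

Even in this tiny example the blended stream jumps across the whole address space, which is exactly the pattern rotating disks handle worst and flash-backed tiers absorb well.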


The arrival of virtualisation and the strain it places on storage forces organisations to re-evaluate best practice and, in many circumstances, update platforms to cope with I/O blender effects and boot storms. However, few organisations want to rip and replace complete platforms just to benefit from virtualisation. Instead, many are looking at innovations around “Edge filers” that use SSD and flash to counteract the negative impact of virtualisation.


An Edge filer meets the need for more performance without wholesale replacement of older disk drives with newer, faster drives, which can be prohibitively expensive and may deliver only a marginal improvement anyway. Instead, the Edge filer offers a cost-effective and scalable solution to this dilemma by adding an intermediate tier of very high performance storage. However, there are some areas to consider when deploying the technology. Below are ten steps to follow to avoid the pitfalls of storage virtualisation.
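The sketch below is a deliberately simplified picture of that intermediate tier (the class names and cache size are illustrative assumptions, not Avere’s implementation): a small, fast cache sitting in front of a large, slow Core filer, serving repeat reads at flash speed and only falling back to the Core filer on a miss.

```python
from collections import OrderedDict

class CoreFiler:
    """Stand-in for a large, slow back-end NAS."""
    def read(self, path):
        return f"<contents of {path}>"   # imagine a slow disk/network round trip

class EdgeFilerCache:
    """Toy LRU cache representing the fast intermediate tier."""
    def __init__(self, core, capacity=1000):
        self.core = core
        self.capacity = capacity
        self.cache = OrderedDict()        # path -> data, in LRU order

    def read(self, path):
        if path in self.cache:            # hot data served from the fast tier
            self.cache.move_to_end(path)
            return self.cache[path]
        data = self.core.read(path)       # cold data fetched from the Core filer
        self.cache[path] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used item
        return data

edge = EdgeFilerCache(CoreFiler(), capacity=3)
for p in ["/vm1/os.vmdk", "/vm2/os.vmdk", "/vm1/os.vmdk"]:
    edge.read(p)  # the repeat read of /vm1/os.vmdk never touches the Core filer
```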


1. Move data closer to users to reduce latency issues: Organisations that need rapid response from remote sites need to make sure that distance doesn’t get in the way of productivity. Strategically placed Edge filers can eliminate hard-drive, Core filer CPU, and network latency.


2. Plan for future growth: You know what storage you need now, but what might you need three or five years from now? If virtualisation is only at 50% within your organisation, imagine it at 100% and then consider the options. Think about how any platform will deal with a worst-case scenario and work out how you would practically scale to that eventuality.


3. Tiered storage can reduce costs: Some of the items kept in your primary storage are accessed frequently while others are accessed rarely if at all. Allow frequently accessed items to reside on faster storage on Edge filers, but save money by tiering the rarely accessed items behind Core filers on cheaper, slower storage.


4. Simplify with a single global namespace: If parts of your storage system come from different vendors, or might in the future, finding all of your data from a single point of access can be challenging. Integrating multiple Core filers behind a single Edge filer and global namespace can make it easier to manage the bulk of your data (see the namespace sketch after this list).


5. Save money by optimising storage: Automatically place NAS data in the most appropriate storage tier for its activity level. You might even want to consider placing some data on tape, which is still a viable option for longer-term archive. In addition, an Information Lifecycle Management policy should be agreed, implemented and regularly reassessed using expertise from business leaders and IT staff.


6. Separate capacity from performance: Use Edge filers for only the amount of active data that needs to reside on fast storage, and put your rarely accessed data on cheaper, slower devices. Although many modern Edge filers include automatic tiering, an analysis of what is frequently accessed is vital to help size any implementation (a working-set sketch follows this list).


7. Get the most value from Edge filer storage: Maximise performance delivered to remote sites by giving them an Edge filer to serve the data they are most likely to need next. This includes regional data sets or information known to be needed for a limited amount of time.


8. Alleviate problems caused by many users doing the same thing at the same time: Boot storms result when too many users try to access the same file resource at once. Spread out demand where you can by creating multiple sources, including an Edge filer that resides on the closest network segment (a simple scheduling sketch follows this list).


9. Reduce capital expenditures with cloud computing: If your data is in the cloud, you don’t have to buy and maintain a lot of expensive storage. A small, relatively inexpensive Edge filer to eliminate latency may be all that you need.


10. Provide robust data protection while retaining high productivity: Consider mirroring technologies that let you distribute copies of your critical data across multiple storage systems in multiple locations, so that no single disaster will take you out. Productivity remains high because the data is easy to access no matter where you are on the network (a minimal mirroring sketch follows this list).
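The global-namespace idea from step 4 can be pictured as a single mount point whose subtrees are transparently routed to different Core filers. The sketch below is vendor-neutral, and the filer hostnames and exports are made-up examples; it only shows the routing logic in its simplest form.

```python
# Map subtrees of one logical namespace onto separate physical Core filers.
# Both the filer hostnames and the exports below are made-up examples.
namespace_map = {
    "/corp/engineering": "netapp-01:/vol/eng",
    "/corp/finance":     "isilon-02:/ifs/fin",
    "/corp/archive":     "cloudgw-01:/bucket/archive",
}

def resolve(logical_path):
    """Translate a path in the global namespace to its backing Core filer export."""
    for prefix, export in sorted(namespace_map.items(),
                                 key=lambda kv: len(kv[0]), reverse=True):
        if logical_path.startswith(prefix):
            return export + logical_path[len(prefix):]
    raise FileNotFoundError(f"{logical_path} is not covered by the namespace map")

print(resolve("/corp/engineering/builds/v1.2/app.iso"))
# -> netapp-01:/vol/eng/builds/v1.2/app.iso
```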
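As promised in step 6, here is a minimal working-set analysis, assuming you can export per-file access counts over a recent window (the field names, paths and threshold are hypothetical): it splits files into hot and cold sets and totals the hot capacity, which is roughly the amount of fast Edge-tier storage worth sizing for.

```python
# Hypothetical access log: (path, size_in_GB, reads_in_last_30_days)
access_log = [
    ("/projects/renders/shot42.exr",  12.0, 540),
    ("/projects/renders/shot41.exr",  12.0,   3),
    ("/home/alice/thesis.docx",        0.1, 220),
    ("/archive/2009/backup.tar",     800.0,   0),
]

HOT_THRESHOLD = 10  # reads per month before a file counts as "hot" (assumed policy)

hot  = [f for f in access_log if f[2] >= HOT_THRESHOLD]
cold = [f for f in access_log if f[2] <  HOT_THRESHOLD]

hot_capacity  = sum(size for _, size, _ in hot)
cold_capacity = sum(size for _, size, _ in cold)

print(f"Fast Edge tier needs roughly {hot_capacity:.1f} GB")
print(f"{cold_capacity:.1f} GB can stay on cheaper Core filer or archive storage")
```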
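For the boot storms in step 8, one common mitigation alongside an Edge filer is simply to stagger power-on so the back end never sees every desktop at once. The scheduling sketch below is a generic illustration; the batch size and delay are assumptions, not measured values, and the print call stands in for whatever API your hypervisor provides.

```python
import time

def staggered_boot(desktops, batch_size=25, delay_seconds=60):
    """Power on desktops in small batches so boot I/O is spread over time
    instead of hitting the storage controller all at once."""
    for i in range(0, len(desktops), batch_size):
        batch = desktops[i:i + batch_size]
        for vm in batch:
            print(f"powering on {vm}")   # replace with your hypervisor's API call
        time.sleep(delay_seconds)        # let this batch's boot reads finish

# Short delay used here only so the demonstration finishes quickly.
staggered_boot([f"vdi-desktop-{n:03d}" for n in range(100)],
               batch_size=25, delay_seconds=1)
```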
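And for step 10, the idea of keeping synchronised copies in more than one location reduces to “every write goes to every site”. The sketch below shows that fan-out in miniature; the site names and in-memory dictionaries are placeholders for real replication targets, not any particular product’s mechanism.

```python
class Site:
    """Placeholder for a storage system at one physical location."""
    def __init__(self, name):
        self.name = name
        self.objects = {}          # stands in for the site's actual storage

    def write(self, key, data):
        self.objects[key] = data

class MirroredStore:
    """Fan every write out to all sites so no single disaster loses the data."""
    def __init__(self, sites):
        self.sites = sites

    def write(self, key, data):
        for site in self.sites:   # synchronous mirror: every copy written before success
            site.write(key, data)

    def read(self, key):
        for site in self.sites:   # read from the first site that has the object
            if key in site.objects:
                return site.objects[key]
        raise KeyError(key)

store = MirroredStore([Site("london"), Site("frankfurt"), Site("oslo")])
store.write("q3-results.xlsx", b"...")
print(store.read("q3-results.xlsx"))
```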


Virtualisation is transforming the IT world by bringing greater efficiency to the data centre, but it also places enormous strains on storage, as seen in the I/O blender effects and boot storms that degrade storage performance. Accessing data without frustrating and costly delays (i.e. latency) becomes more difficult as the amount of data stored grows and spreads across multiple locations. By going beyond the limits of the conventional Core filer, organisations can avoid the pitfalls of virtualisation, eliminate latency and maintain high-performing operations, even as the amount of data they store multiplies.
 
