Infinidat predictions for 2016

By Randy Arseneau, CMO, INFINIDAT.

With 2015 behind us, Randy Arseneau, CMO of enterprise data storage solutions provider INFINIDAT, looks at what shaped the industry over the past 12 months, and what’s likely to shape – and shake – it in 2016.
 
Looking back at the hottest technologies of the past year, I can confidently say the two most prominent were flash - anything flash, all-flash arrays, flash-optimized storage - and hyperconvergence.
 
That was what people were talking about, where investments were flowing, and what the market was most excited about. Looking into next year, flash and hyperconvergence will remain very active and front of mind for a lot of people in the industry.
 
I think the hyperconverged space is going to continue to grow. In a lot of use cases, particularly virtualized environments, it offers quick time to market, and it's easy to implement and easy to manage. It doesn't necessarily, at this point anyway, scale to the point where it can support the very large or the very performance-sensitive, mission-critical workloads, but it will continue to improve over time. It will remain popular in 2016 and beyond. There will probably be some interesting exits or consolidation along the way, but hyperconvergence will continue to hold a key spot in the storage industry.
 
On the flash side, I think some interesting products will develop over the next 12-18 months. There's a lot of competition, a lot of noise, and a lot of fragmentation among the flash players. That's not necessarily sustainable over the long term, so I think you'll see some consolidation and a recalibration of expectations.
 
Obviously some new players will emerge, but I think you'll also see some of the recent phenoms and established players realign their business models a little and perhaps tackle the market in slightly different ways with their solutions. Flash will continue to be very pervasive, and new technologies will keep making the economics of flash more attractive. But in the foreseeable future they are unlikely to overtake the economic advantages and cost elasticity of traditional spinning drives, which is why, from my perspective as a provider of hybrid systems, I think those systems will remain prevalent and prominent in the industry.
 
With flash, the changes will be primarily around density as flash becomes more prevalent and emerging technologies are brought to market. I don't think we'll see a huge shift in the supply chain in 2016, though; it will be more incremental. The economics will continue to improve, but they will remain heavily reliant upon aggressive use of data reduction technologies, which have their issues. They're necessary, they work, and they certainly provide economic advantages when you look at the cost per unit of storage capacity on the media, but there will come a point where data reduction technologies cap out: they'll lose their ability to further compress and further reduce the footprint of data, which will put increasing pressure on the cost elasticity of the media itself.
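To make that cap-out point concrete, here's a minimal back-of-the-envelope sketch in Python. The prices and reduction ratios are purely illustrative assumptions, not vendor figures; the point is simply that the effective cost per usable terabyte is the raw media cost divided by the reduction ratio, so once the ratio plateaus, any further gains have to come from the media itself.

    # Back-of-the-envelope illustration: effective cost per usable TB after data reduction.
    # All prices and ratios are assumed for illustration only.
    def effective_cost_per_tb(raw_cost_per_tb, reduction_ratio):
        return raw_cost_per_tb / reduction_ratio

    flash_raw = 1500.0  # assumed $/TB of raw flash capacity
    disk_raw = 150.0    # assumed $/TB of raw near-line disk capacity

    for ratio in (2, 4, 6):
        print(f"{ratio}:1 reduction -> flash ${effective_cost_per_tb(flash_raw, ratio):,.0f}/TB "
              f"vs near-line disk ${disk_raw:,.0f}/TB raw")

    # Once the data simply won't reduce any further, the remaining gap can only be
    # closed by cheaper media, not by software.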
 
Going back to an earlier comment, if you look at the roadmaps for solid state media alongside those for near-line media - for lack of a better term, general-purpose spinning magnetic media - the cost elasticity and the areal density will continue to hugely favour spinning magnetic media. So while the performance-sensitive aspects of workloads will be best served and best optimized by intelligent use of solid state technology, the volumes of data that need to be stored will continue to require a much more cost-effective form of near-line storage, and that will remain the case through 2016 and well beyond.

New emerging technologies and changes to the storage landscape
 
Over the coming 12 months, huge R&D dollars will continue to be poured into the next and subsequent generations of solid state media and memory-class products, driving down costs and driving up performance and areal density.
 
On the software or application part of the stack, Open Source is going to continue to proliferate, extend, and expand. I think you'll see a lot more consortiums or groups stepping up to provide hardened distributions of certain Open Source products and selling them into the marketplace at an attractive price, and they'll start to attract more and more enterprises to the Open Source community. OpenStack, as an example, is going to continue to be very prominent, although it's had a difficult time cracking the real mission-critical, 'bet your business' kind of workloads in the enterprise for a wide variety of reasons: scale, manageability, supportability. As more constituencies step into that space and provide hardened solutions and supported platforms and stacks, it will gradually wear down some of the resistance in the enterprise, and you'll start to see it become more prominent, which, of course, will create additional competition for the traditional vendors and suppliers who are selling packaged solutions to the enterprise.
 
Throughout 2016 the cloud will continue to be the main focus for most organizations looking for ways to cost-effectively and operationally maintain their ever-growing volumes of data, so cloud providers will continue to offer a lot of services and continue to grow. The challenge is that while it's cost effective to run large, core- and compute-intensive workloads in a cloud environment, once you start applying those workloads to very large data sets, the cost of storage in the cloud becomes prohibitive very quickly. That's going to really inhibit the ability to put some of these large, analytical-type workloads into the cloud in a cost-effective way.
 
So I think that's going to drive much more of a trend towards a next-generation hybrid cloud, where customers keep their data residing on their own storage, either co-located or on premise in some way, while the compute for those workloads runs in the cloud. That's going to create a lot of interesting consumption and deployment models that, frankly, we haven't really conceived yet. Some solutions are starting to emerge, but I think it's going to open up opportunities for clever vendors and suppliers to devise solutions that help solve the disparity between the relative cost of compute and storage in the cloud, and make cloud much more attainable and sustainable for a larger percentage of workloads and for most enterprises.
 
The role of storage and considerations to address
 
I don't think the role of storage within a typical enterprise will necessarily change. If anything, I would say storage is becoming increasingly important, as the volumes of data continue to grow explosively across the board, and as the importance of that data - in terms of mining it for value and using it to devise transformational business processes - continues to extend and accelerate. I think storage will continue to increase in importance for a long time in all segments of the market. Even in recent years, storage has tended to be something of an afterthought: you build your infrastructure or your application stacks, and storage is treated as an accessory you have to attach to your servers. That's really changing, to the point where storage is becoming a much more prominent strategic asset rather than just a necessary evil. As data volumes increase, in 2016 storage will play an even more prominent and more strategic role in infrastructure decisions and in how workloads are deployed.
 
It all comes back to the importance of the economics, right? In the last few years, with the emergence of flash and all-flash storage solutions, enterprises have been forced to very carefully prioritize and segment their workload portfolios and make sure they're leveraging these comparatively expensive assets to host the most critical workloads - those with the most performance sensitivity and the highest throughput requirements, the real mission-critical workloads. But if you look across a typical enterprise, those workloads represent maybe 5-10% of the overall portfolio. There are lots of other workloads which don't have the same ultra-stringent performance or latency requirements.

So once again, despite what a lot of all-flash vendors say, economic parity between spinning magnetic media and flash is not there, and won't be there for the foreseeable future. From an economic perspective there will always be a need, I think, for solutions that can cost-effectively store large volumes of data and, in an automated, intelligent, adaptive way, identify what portion of that data needs to be on the fastest tier at any given time to support the business. The shift to flash is obviously here to stay. It's not going away, and it will continue to grow as a percentage of the overall storage footprint. But I think data growth will always outstrip the growth of flash, and the economics of flash will never be able to keep pace with the growth of data. That's kind of a mathematical truism we can all agree upon. So it's going to require us as an industry, and our customers, to think of new ways to manage these exploding volumes of data, but do so without breaking the bank.

I think the concept of the all-flash data centre, while it might be possible or viable if you're a mid-sized organization that doesn't have a huge volume of data, is a long way from being a reality for the enterprise. We need to provide solutions that help enterprises bridge where they are today and what that next-generation data centre - whether it's all-flash or all "fill in the blank" next-generation memory technology - looks like. That's minimally five years away in my opinion, probably longer. So there's a lot of time between now and then where, as an industry, we need to deliver solutions to our customers that help them solve those problems, and continue to solve new generations of problems, in an economically viable way.
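As a purely illustrative sketch of the kind of automated, adaptive tiering described above - the structures, names, and thresholds here are assumptions for the sake of the example, not any particular vendor's implementation - a hybrid system might track access frequency per extent and promote only the hottest slice of the data to flash, leaving the bulk on cheaper spinning media:

    # Illustrative sketch: promote the hottest extents to flash, keep the rest on
    # near-line disk. All names and thresholds are assumptions for illustration.
    from collections import Counter

    class HybridTieringPlan:
        def __init__(self, flash_capacity_extents):
            self.flash_capacity = flash_capacity_extents  # extents that fit in the flash tier
            self.access_counts = Counter()                # recent accesses per extent

        def record_access(self, extent_id):
            self.access_counts[extent_id] += 1

        def plan_placement(self):
            """Return (flash_extents, disk_extents) ranked by recent access frequency."""
            ranked = [extent for extent, _ in self.access_counts.most_common()]
            flash = set(ranked[:self.flash_capacity])
            disk = set(self.access_counts) - flash
            return flash, disk

    # Usage: the hot minority of extents lands on flash; everything else stays on disk.
    plan = HybridTieringPlan(flash_capacity_extents=2)
    for extent in [1, 1, 1, 2, 2, 3, 4, 5]:
        plan.record_access(extent)
    hot, cold = plan.plan_placement()
    print("flash:", sorted(hot), "disk:", sorted(cold))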
 
Pain points for IT professionals
 
If you look at any survey or talk to any analyst firm, they'll tell you that IT storage practitioners are struggling with cost: yet again they're dealing with ever-increasing volumes of data, and trying to manage, manipulate, protect, and move numerous workloads at the same or, in certain cases, a lower cost than in previous years. There is tremendous cost pressure on IT organisations. They're looking at the cloud, at tiering solutions, and at data reduction technologies in order to physically reduce the footprint of the data they've got to store, so they can keep the economics under control to some degree.
 
Given the increased pressure on IT professionals, it is difficult to resist the temptation to push all workloads onto flash and to use data reduction technologies aggressively across all of them. But in an environment where data volumes continue to grow, which is the case for most, storage is very quickly going to eat them out of house and home. The only solution is either to aggressively archive data, which makes it much less accessible and reduces time to value when someone needs to leverage that data to make business decisions, or to find a new economic model that enables the IT team to store it without breaking the bank.

So, again, there are challenges around cost. There are also a lot of challenges around sheer complexity. There are lots of different platforms out there. Most organisations are running a varied mix of workloads and have not yet standardised on a particular application platform, which means they are often supporting multiple storage devices, multiple server platforms, and multiple cloud providers. Any time someone introduces more moving parts into the already complex machine that is IT, it inherently increases risk and adds cost. It makes training, educating, and enabling the staff to support these environments more complicated and costlier. So those two - cost and complexity - are very closely interrelated, and they will continue to be issues forever.