How do you see changes in cloud strategies developing in 2015?
Companies are committing to cloud in greater numbers. In our State of Resilience Report, we have asked IT professionals about their companies' cloud strategies for a number of years. Since 2011, the proportion of companies using cloud has more than doubled, from 23 per cent to 62 per cent. Cloud as an approach has gone from bleeding-edge tech to being part of the furniture for commodity services like email and web hosting.
Looking ahead, I see more use of cloud by businesses as they seek to control their ongoing costs and manage services more efficiently. However, I think the biggest changes are taking place between the different levels of IT within an organisation. CIOs are looking at cloud as a source of strategic advantage for the future, but there are still a lot of potential issues that have to be solved around the long-term management of the complex processes associated with cloud computing.
IT has a big job on its hands keeping clouds running – the whole reason cloud is proving popular is that the back-end systems are taken care of, becoming “somebody else’s problem”. For a company running its own private cloud, or taking a hybrid approach, that’s not the case. You still need to understand how everything interacts to deliver the service, and what you have to do to maintain that service even when there are problems. Essentially, the challenge remains the same, but the route taken is different; whether you’re outsourcing your cloud or managing it internally, it will be more important than ever to have the right expertise to maintain operations. To gain all of the benefits of the cloud without increasing risk, specialised projects will require additional assistance and data protection will be a priority.
Are we at a migration tipping point? Why is this?
Migrations are taking place because a lot of companies elected to sweat their hardware for longer. There are lots of Microsoft Windows Server 2003 instances out there that are reaching end of life, for example. These will need to be replaced.
Alongside this, two other trends are making migrations popular in 2015: first, virtualisation projects are shifting platforms; and second, storage requirements are growing.
For many companies that implemented virtualisation, product and support contracts are coming up for renewal – and IT teams are taking advantage of this to re-evaluate their options. The days of x86 virtualisation being equated with only one company are long over, and now is the time when migrations are seriously starting to take place at scale. In the Scandinavian region, some of the largest cloud implementations are moving over to Microsoft Hyper-V and Azure, which means that migration projects have to be designed and implemented.
There is also the continued growth of storage requirements within companies. Companies are holding on to huge amounts of data that they create over time, which is increasing the demand for more storage. Companies have to decide whether they stick with their existing approaches and simply add more space, move to different storage platforms, or implement forms of outsourced storage like cloud. Whichever route companies select, migration projects will be required.
What impact will all this change have on data centres?
To some extent, data centre design will continue on the same path that it has been for some time – more consolidation, tighter design requirements and the challenge of fitting more computing resources in the same space.
However, I think this will also prompt more thinking around how companies deal with issues like continuity and disaster recovery. For established businesses in particular, there can be multiple technologies in the same place, each chosen for its performance and workload capabilities. For example, you can have mainframes alongside standard x86 servers, or POWER systems alongside Windows, Linux and virtualised servers. The traditional approach to supporting those systems is still split across different protection tools that are a pain to manage, and the result is unnecessary cost.
For companies that have to continue with their existing hardware, it’s worth considering whether to consolidate protection methods or stick with what is currently in place. Alternatively, cloud and hosted DR strategies are becoming more popular, especially for older systems where there are fewer people with the necessary skills available. Handing over management and DR to third parties can help here, and will eventually lead to smaller data centres.
Looking into the future, I think we’ll continue to see critical applications hosted by the companies involved, but much more use of cloud for DR and recovery. Internal data centres may shift some third-tier applications to the cloud, freeing up space for other services that can provide competitive advantage.
Are we looking at more complexity in the data centre, or less?
Cloud is more complicated at the back-end. For the service provider, thinking about multi-tenancy and shared storage is more complex than simply hosting one customer’s equipment separately. Being able to prove that multi-tenant environments are secure, and that they keep customer data segregated, is only going to become more important.
The customer should be shielded from all that, though. For them, cloud should make things easier. It’s a compelling argument for shifting certain parts of their operations into the public cloud – while data centres continue to evolve and become more complex, new offerings can provide the simplicity companies seek. The right combination of cost efficiency, service and storage will need to be considered, and customers will benefit in the long run.
What do you think are the biggest reasons for change within companies?
Migration comes when you want to change. It can be in response to old equipment reaching the end of its lifespan, or a way to free up time and resources that can be put into more strategic projects. I think migration planning is an essential skill that companies have to develop alongside the “people” aspects of change management. At the moment, many companies want to implement new approaches to technology, but they fear that challenges like downtime or service loss will hold them back. With the right technology, downtime can be removed from migrations in most cases – whether it’s a cloud migration, a storage implementation or a shift to a new virtualisation platform.
A lot of this goes back to how IT perceives its role within the business. IT is often seen as a back-end function that is only about keeping the lights on. That role is critical – after all, downtime can cost huge amounts of money – but it’s not the only consideration for IT these days. Looking forward, IT has to show how it enables the business to achieve better results. However, getting there may involve a move to new hardware, new approaches and new thinking. IT has to avoid being its own worst enemy and remove the hurdle that downtime can represent.
According to our research, a single server can take between one and eight hours to migrate. Multiply this by hundreds or thousands of machines, and the costs can be huge, representing vast amounts of man-hours and expense. It’s no wonder that such projects give people pause. Reducing or eliminating the downtime involved in these moves not only represents an opportunity to strip out that cost, it can help the company get to its future destination faster. IT can take back that leadership role.
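The scale of that cost is simple arithmetic. As a minimal sketch, assuming only the one-to-eight-hours-per-server range quoted from the research above (the function name and example fleet size are illustrative, not from any specific tool):

```python
# Back-of-envelope estimate of total migration effort, assuming the
# 1-8 hours-per-server range cited in the research above.
def migration_hours(num_servers, low_per_server=1, high_per_server=8):
    """Return the (low, high) range of total man-hours for num_servers."""
    return num_servers * low_per_server, num_servers * high_per_server

# Hypothetical fleet of 500 machines.
low, high = migration_hours(500)
print(f"Migrating 500 servers: {low:,}-{high:,} man-hours")
# Migrating 500 servers: 500-4,000 man-hours
```

Even at the optimistic end of the range, a mid-sized estate represents months of staff time, which is why eliminating downtime from the process matters so much.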