The Holy Grail of datacentre TCO: understanding the cost of delivering a service

By Zahl Limbuwala, CEO, Romonet.

Throughout the history of the datacentre, understanding the true costs behind the technology has been an uphill struggle. eBay and Facebook are both taking steps to understand datacentre cost and efficiency, which shows the issue is gaining momentum. However, two massive multinational corporations discussing datacentre cost does not necessarily mean much to the average datacentre owner: most organisations cannot command the budget to create bespoke software for managing their datacentres. Yet unless organisations of all sizes and at all levels understand the Total Cost of Ownership (TCO) of their IT estate, they cannot begin to understand the real cost of delivering each service, and therefore whether each one represents good value for money.


Lacking understanding
The increasing worldwide demand for IT, whether for business or consumer services, has resulted in more and more datacentres springing up, whether they provide in-house IT services for a single organisation or cloud-based services for many customers. However, several factors are combining into a tipping point that could undermine these services. First, IT services are increasingly becoming a commodity, meaning users expect ever lower costs regardless of who provides the service. Second, IT budgets are coming under greater scrutiny because they continue to grow as a proportion of overall spend, so IT departments must increasingly justify their expenditure to the CFO. Third, the pace of technological change has been so rapid that the demands placed on datacentres themselves have changed over the past decade, meaning that almost continual investment is needed to keep up with ever-increasing IT performance requirements. Datacentre owners who do not understand these factors will become uncompetitive and lose business to those who understand, and can control, their costs.


The financial cost of bad IT decisions
The increased reliance on IT means it is taking up more of business budgets. With this increased spend has come a need for additional investment, as well as the skills necessary to keep the business's IT running smoothly. This is all fine, except that at present most organisations cannot identify the true and differential cost of each business activity that IT supports or enables. As long as the company as a whole is profitable, many see no need to identify which of their IT services is consuming the most resources and where money could be reallocated for a better return. However, this can cause severe headaches for the CFO, and for the business, down the line: particularly if that macro-level profitability drops with little indication of where the issues lie or why. Other areas of the business are beginning to take note: one recent trend we have seen is that facilities departments are no longer willing to pay the energy bill for IT and are asking for it to be allocated to the IT budget. In short, datacentre and IT spend can no longer be treated as a single cost unrelated to the rest of the organisation; it is an integral part of the overall budget and strategy.


Understanding the TCO of a datacentre, and how each service contributes positively or negatively to the overall margin, is key to ensuring that the business as a whole continues to run profitably; only in this way can businesses truly understand and manage the relationship between the cost of delivering a service and its revenue: margin management, in effect. In addition, until recently datacentre owners have had no easy way of predicting the costs of their datacentre. While metrics such as Power Usage Effectiveness (PUE) and other metering data have helped operators understand and improve efficiency, they give no real understanding of how their actions relate to TCO.
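To see why an efficiency metric alone says little about cost, consider a minimal sketch (the function and figures below are hypothetical, invented purely for illustration): PUE is total facility energy divided by IT equipment energy, so two sites with identical PUE can still carry very different energy bills once local power prices are factored in.

```python
# Hypothetical illustration: identical PUE, very different operating cost.
# PUE = total facility energy / IT equipment energy.

def annual_energy_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> float:
    """Annual electricity cost for a site at a constant IT load."""
    hours_per_year = 8760
    total_facility_kw = it_load_kw * pue      # facility draw implied by the PUE
    return total_facility_kw * hours_per_year * price_per_kwh

# Two sites with the same efficiency metric but different tariffs.
site_a = annual_energy_cost(it_load_kw=500, pue=1.5, price_per_kwh=0.07)
site_b = annual_energy_cost(it_load_kw=500, pue=1.5, price_per_kwh=0.15)

print(f"Site A: ${site_a:,.0f} per year")    # ~$459,900
print(f"Site B: ${site_b:,.0f} per year")    # ~$985,500
```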


When datacentre owners are considering expanding or modernising existing operations, the IT department has to justify new spend to a CFO who may well be sceptical: previous projections may have fallen short or proved impossible to measure, eroding confidence in any Return on Investment figure. The problem is that without the right tools, accurate prediction is impossible, so organisations typically have little confidence in how a datacentre will perform until they have already built it. Numerous factors need to be considered, such as ambient temperature, the distance to the local power supply, energy costs, taxes and other incentives, and the datacentre's design, as well as the hardware running inside it. For example, a datacentre built in Norway will have wildly different factors influencing its TCO than one built in Texas, as the sketch below illustrates.
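A first-order sketch of that site-dependent arithmetic follows; the parameters and figures are invented assumptions for illustration, not Romonet's model, and a real predictive model would capture far more (climate curves, utilisation profiles, tax treatment).

```python
# Hypothetical sketch: first-order annual cost of ownership for a site.

def simple_annual_tco(it_load_kw, pue, price_per_kwh, build_capex,
                      lifetime_years, staff_and_maintenance):
    hours_per_year = 8760
    energy = it_load_kw * pue * hours_per_year * price_per_kwh  # power + cooling
    amortised_capex = build_capex / lifetime_years              # build cost over life
    return energy + amortised_capex + staff_and_maintenance

# A cool climate with cheap power lowers both PUE and tariff (invented figures).
norway_like = simple_annual_tco(1000, 1.2, 0.06, 40e6, 15, 1.5e6)
texas_like  = simple_annual_tco(1000, 1.6, 0.11, 35e6, 15, 1.5e6)

print(f"Norway-like site: ${norway_like:,.0f} per year")   # ~$4.8m
print(f"Texas-like site:  ${texas_like:,.0f} per year")    # ~$5.4m
```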


The Holy Grail of Datacentre TCO is in sight
Datacentre owners and managers must be able to accurately predict the performance of their datacentres to justify any future financial decisions: this is where predictive modelling comes in. While organisations can try to measure how datacentres are performing from analysis of historic metering data, predictive modelling is concerned with how datacentres 'should' be performing as designed or configured. By modelling datacentre performance on the variables that matter, organisations can understand how current performance stands in relation to the goals of the business, and what they need to do to gain the most from their investment. Organisations can also predict the performance of a datacentre while it is still on the drawing board and say with confidence how much it will cost the business to operate, before they have allocated budget or committed to any construction. To get long-term value out of IT investments, businesses need to optimise the datacentre estate as a system rather than as a collection of individual components. By understanding how the complete system is supposed to operate, datacentre owners and operators can predict the actual cost of the services they provide and the cost impact of operational decisions, removing much of the uncertainty from the process.
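In practice, the 'should be performing' idea boils down to comparing the model's expectation against metered reality and flagging the gap. A minimal sketch, with invented names and numbers:

```python
# Hypothetical sketch: flag drift between modelled ("should") and
# metered ("is") facility energy over a reporting period.

def check_against_model(expected_kwh: float, metered_kwh: float,
                        tolerance: float = 0.05) -> str:
    """Compare a metered reading with the design model's expectation."""
    deviation = (metered_kwh - expected_kwh) / expected_kwh
    if abs(deviation) <= tolerance:
        return f"within model ({deviation:+.1%})"
    return f"investigate: {deviation:+.1%} against design intent"

print(check_against_model(expected_kwh=650_000, metered_kwh=668_000))  # +2.8%: fine
print(check_against_model(expected_kwh=650_000, metered_kwh=735_000))  # +13.1%: flag
```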


The datacentre of the future
With the right tools, organisations can identify how and where to optimise performance at the system level, predict the impact of changes from a high-confidence model, and understand the marginal and differential costs of delivering a service. Real understanding of TCO cannot be achieved by reactive methods alone, such as metering. Organisations need to take proactive steps towards understanding the cost of operating their datacentres and how this relates to TCO. As the commoditisation of IT continues and the datacentre market follows the same path, the businesses that survive will be the ones that can make smart, high-confidence decisions and know what the impact will be on the wider operating cost and margin.
