Sabermetrics for the data centre: How to play Moneyball in IT

By TeamQuest Director of Market Development, Dave Wagner.


The Sabermetrics Story
As both the book and film "Moneyball" demonstrated, the old ways of measuring baseball performance -- relying on a set of generally accepted metrics -- didn't necessarily predict future success. "Moneyball" introduced many of us to sabermetrics, a way of measuring complex, and often misunderstood, data relationships. As those Oakland A’s teams, and more recently the 2013 World Champion Boston Red Sox, have proven, the path to success is more complicated than relying on simple statistics and measurements.


It used to be generally accepted that building a team around the players with the highest batting averages and fewest errors (and therefore the most expensive ones) was a sure path to winning, but this has proven to be faulty logic. Instead, teams today are built on holistic understanding and measurement designed to yield a result greater than the sum of the parts -- the concept of “sabermetrics.” It has proven effective, and it can be applied to IT.


Moneyball for the Data Center
In today’s software-defined data center (SDDC) environments, a new sabermetrics-like approach to IT and business performance is afoot. Here, “Moneyball” is continuous optimization: the drive to find and maintain a balance between cost controls and customer experience. Data center environments are complex ecosystems in which every layer (network, storage, CPU, applications) generates performance data that can be fed into management applications to track efficiency, utilization, and ultimately cost. But can tracking these basic metrics answer the most important questions, or help make decisions in such complex environments? After all, huge piles of data aren’t easily consumable in any meaningful way. What tools do we use to transform all that infrastructure data into actionable business intelligence -- intelligence we can use to accelerate optimization efforts and positively impact both the bottom line and the end-user experience?


Data centers and their clients, like every other enterprise, must do more with less. Budget-conscious clients demand more computing power and capacity for their data-driven products and services, while simultaneously requiring ever more flexibility and scalability. Global resource scarcity and rising electrical power costs place further pressure on data centers to drive relentlessly for ever-increasing efficiency.


The rapid adoption of virtualized server, storage and networking technologies means that the underlying physical IT resources are largely abstracted from the business work they’re supporting. Traditional metrics track how busy or under-utilized any given system, array or network is, but these data points no longer provide the answers top decision makers require. A 2012 Forrester Research survey found that the top priority of IT decision makers worldwide is to “improve the use of data analytics to improve decision making.” Where the focus was once on resource utilization, the new driver is work efficiency and cost. Where availability was once considered a valuable data point, we now need to know what work has been accomplished. Application and service workloads are emphasized over simply measuring devices.


Just as in baseball or any other sport, a strategy of securing wins by spending the most money will ultimately fail. Victory goes to those who do the best analysis, the fastest. The first to find a way to measure and analyze what really matters can use that intelligence to make better decisions and set priorities more efficiently. In the art and science of Moneyball, this means using sabermetrics to understand that to win games, you need to find and sign the player with the most game-winning runs, not necessarily the player with the most runs. You want the player with the fewest game-losing errors, not the player with the fewest errors overall, and so forth.


The Sweet Spot and the Slump
So what new winning attributes are we looking for? In the data center game, previously under-appreciated data relationships are a lot like those previously under-appreciated baseball players. The sweet spot, or desired state, is maintaining optimized operations at the intersection of business performance -- as delivered to the client -- and IT efficiency. The goals driving the deployment of advanced analytics arise from emerging challenges in virtualized, “cloudy” and SDDC environments: global cost and energy efficiency, root cause analysis, global availability of services, and agility. Workload-centric optimization addresses these challenges in ways that traditional stack monitoring/management vendors cannot. Traditional metrics tools lack an understanding of how dynamic technologies (virtualized everything) relate to the business services they support, and, at the end of the day, they are still only measuring utilization -- a poor proxy for the actual work accomplished and the time it takes to complete.
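To make this concrete, here is a minimal, hypothetical sketch in Python (the host names, throughput figures and costs are invented for illustration): two hosts can report nearly identical utilization while delivering very different amounts of business work per dollar.

    # Two hosts that look equally "busy" on a utilization dashboard but differ
    # sharply in work accomplished per unit of cost. All figures are illustrative.
    hosts = [
        # name, average CPU utilization, transactions completed per hour, hourly cost ($)
        {"name": "host-a", "cpu_util": 0.72, "transactions_per_hr": 90_000, "cost_per_hr": 4.00},
        {"name": "host-b", "cpu_util": 0.70, "transactions_per_hr": 22_000, "cost_per_hr": 4.00},
    ]

    for h in hosts:
        # Workload-centric view: work delivered per dollar, not how busy the box is.
        work_per_dollar = h["transactions_per_hr"] / h["cost_per_hr"]
        print(f"{h['name']}: utilization={h['cpu_util']:.0%}, "
              f"work per dollar={work_per_dollar:,.0f} transactions")

A utilization-only view rates these two hosts the same; a workload-centric view shows roughly a four-fold difference in the work the business actually gets for its money.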


This lack of insight has several negative consequences. Over-provisioning is incredibly costly, and there’s no guarantee of winning. Under-provisioning causes service disruptions, leading to non-compliance and market risks (losing customers). When too many disparate tools and processes are deployed in silos, optimization becomes expensive and error-prone, again impacting service delivery. Converged infrastructure vendors claim to answer interoperability issues, but being locked into these providers comes with higher costs and reduced agility. The whole idea behind virtualization and other “Web-scale” approaches (cloud technology and its growing lifecycle ecosystem) is to increase agility, so vendor dependence is an ironic case of backsliding.


Accelerate Optimization
It took a long time for sabermetrics to take hold in the tradition-bound business of baseball. The ideas behind Moneyball were conceived in the 1980s but didn’t come to fruition for almost 20 years. It’s not easy to jump into a new paradigm. How can IT overcome obstacles to accelerate optimization to keep up with the pace of business? IT departments are already stretched for time and resources, and have little expertise with business processes; many of their existing tools don’t accept business data. Where do we start and how do we prove the value that justifies the leap? How do we get some quick wins?


To begin, a bi-directional merging of business and IT processes and tools is necessary: IT systems data should be fed into business analytics, and business data must be run through IT analytics. Once this alignment is achieved, the work of continuous optimization can begin in earnest. With the data aligned, analytics make sense of it, automation ensures that ongoing analysis happens in a timely, efficient, and scalable fashion, and proactivity uses analytic tools to prevent incidents and predict needs -- before “the game is lost.”
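As a rough sketch of that alignment step (the metric names, timestamps and values are hypothetical, and pandas is just one convenient way to do the join), the idea is simply to bring IT and business measurements onto a shared time axis so a single analysis can see both:

    # Join IT resource metrics and business metrics on a common timestamp so that
    # downstream analytics can reason about both sides at once. Sample data only.
    import pandas as pd

    it_metrics = pd.DataFrame({
        "timestamp": pd.date_range("2014-01-01 09:00", periods=4, freq="15min"),
        "cpu_util": [0.41, 0.55, 0.83, 0.91],
        "avg_response_ms": [120, 140, 310, 560],
    })

    business_metrics = pd.DataFrame({
        "timestamp": pd.date_range("2014-01-01 09:00", periods=4, freq="15min"),
        "orders_completed": [480, 510, 430, 300],
        "revenue_usd": [24_000, 25_500, 21_500, 15_000],
    })

    # One aligned table: the raw material for continuous optimization.
    aligned = pd.merge(it_metrics, business_metrics, on="timestamp")
    print(aligned)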


Correlative and Predictive Analysis
Going forward, virtualization, cloud and SDDC management will place a premium on new management models that go well beyond simple resource monitoring. Real-time data collection, embedded analytics, and the ability to span multiple data-source domains intelligently are essential to managing the tremendous complexity and mutability of the modern data center. The two analytic approaches key to the next generation of management are correlative analysis and predictive analysis. Correlation links the business “metrics that matter,” on an ongoing basis, to the underlying metrics associated with the IT resources supporting the business. To accomplish this, one must be able to continuously correlate a wide variety of disparate metrics (both IT resource performance and business-related) to identify the causal relationships that underpin real-world performance, throughput, response time, and cost. This necessarily implies the existence of a logical “data mart” of all the appropriate metrics to feed the analysis.
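As a simple illustration of the correlative idea (the metrics and values are invented, and correlation only flags candidate relationships rather than proving causation), a small metrics “data mart” can be scanned for the IT metrics that move most strongly with a business metric:

    # Correlate each IT resource metric against a business metric to surface the
    # relationships worth investigating. Illustrative numbers only.
    import pandas as pd

    data_mart = pd.DataFrame({
        "cpu_util":           [0.40, 0.52, 0.61, 0.74, 0.86, 0.93],
        "storage_latency_ms": [2.1,  2.3,  2.2,  4.8,  9.5,  14.2],
        "avg_response_ms":    [110,  125,  140,  260,  480,  900],
        "orders_completed":   [500,  520,  515,  450,  360,  240],
    })

    # Pearson correlation of every IT metric against the business outcome.
    correlations = data_mart.corr()["orders_completed"].drop("orders_completed")
    print(correlations.sort_values())

Strong negative correlations (response time and storage latency versus orders completed, in this toy data) point analysts toward the IT metrics most tightly linked to the business outcome.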


Once these relationships are understood, you can begin to predict IT and business performance based on them, combined with historical and current performance. There is a wide variety of predictive analytic approaches, ranging from simple trending, through multiple types of statistical analysis, up to analytical predictive modeling. The curveball to consider: these predictions must not be based solely on simple trending or other linear analytic treatments. Rather, they must factor in the complexities associated with contention for resources -- the inevitable “traffic jam” of dynamic workloads competing for shared IT resources. This contention plays out on the underlying physical infrastructure actually processing the work. When the infrastructure is insufficient to meet a dynamic workload, performance and response time degrade rapidly and in a nonlinear (i.e., difficult to anticipate) fashion.
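Here is a rough illustration of that nonlinearity, using the textbook single-queue response-time formula R = S / (1 - U) as a stand-in for shared-resource contention (the service time and utilization figures are arbitrary, and this is not a description of any particular vendor’s analytics):

    # Compare a queuing-based prediction of response time with a naive linear trend
    # fitted through low-utilization behaviour. Illustrative model and numbers only.
    service_time_ms = 20.0   # time to process one request with no queuing

    def queuing_response_ms(utilization: float) -> float:
        # Response time including queuing delay as the shared resource saturates.
        return service_time_ms / (1.0 - utilization)

    def linear_trend_ms(utilization: float) -> float:
        # A straight line through the low-utilization points (0%, 20 ms) and (50%, 40 ms).
        return 20.0 + 40.0 * utilization

    for u in (0.50, 0.70, 0.85, 0.95):
        print(f"utilization={u:.0%}: queuing model={queuing_response_ms(u):6.0f} ms, "
              f"linear trend={linear_trend_ms(u):4.0f} ms")

At 95% utilization the queuing model predicts roughly 400 ms while the linear trend predicts under 60 ms -- the “traffic jam” effect is invisible to simple trending.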


As we’ve established, IT configurations now change dynamically in response to business needs, at a speed and frequency beyond the capacity of existing monitoring and management tools. This change affects the entire traditional management stack, creating a very big and constantly mutating data problem. It reinforces the need for better analytics with access to all the appropriate metrics: IT technical resource performance, IT configuration and asset data, financial costing, business service performance and end-user experience. These new analytics capabilities will enable businesses to balance their ruthless drive toward cost efficiency with end-user experience and customer satisfaction.


Balancing Efficiency and Customer Satisfaction
Continuous optimization driven by advanced analytics has powerful results: significant reductions in initial capital expenditure as well as ongoing operational expenditure. Make, and keep making, more money…that’s a solid win on the efficiency side. Optimizing IT resources for the business systems that engage customers is a top priority, and is increasingly seen as more important than back-office processes. The performance of the services and applications customers engage with determines the end-user experience, and thus customer satisfaction, leading to greater market share and more wins for the bottom line and the brand (be it commercial or government). Enterprises running on intelligent, optimized IT foundations can respond faster to business spikes and prevent business-impacting outages and slowdowns, again maximizing efficiency and protecting customer experience.


Furthermore, optimized data centers can deploy and refresh new applications faster. This is a major success factor for many sectors, as the rapid evolution of business models becomes the new norm.


The intersection of financial efficiency and customer service has always been the domain of analytic processes that continuously balance resource costs and performance over time. Previously, these processes, like the technology they measured, were more linear in nature. Now the discipline must adapt itself to a more holistic view of the game, applying new vectors of analysis to complex IT infrastructure and the ever-changing business systems that it serves.
It’s time to play Moneyball!
 
