Efficient monitoring: how to realise a return on investment from container use

Containerised applications are fast becoming an established fact in the IT infrastructure of global organisations. By John Rakowski, vice president of strategy at LogicMonitor.


According to a Gartner survey[1], by 2022 more than 75 percent of companies will be running containerised applications in production – a sweeping uptake from the fewer than 30 percent that do so today. While containers can help teams gain scalability, flexibility and, ultimately, delivery speed, they also create a lot of complexity as applications and their associated infrastructure become more distributed. It should also be noted that an organisation using containers will very likely use an orchestration tool, such as Kubernetes, to deploy and manage them. As such, it is important that DevOps teams have monitoring in place to increase visibility, so they can seamlessly spot performance or availability issues that originate in containers as well as in traditional infrastructure, before they become business problems. To ensure this, there are a few pivotal rules to follow to maximise container investment.


Monitor apps alongside infrastructure


For traditional infrastructure, the ideal practice is to monitor server performance metrics and the health of the application running on it, including traces of calls made to other components. Kubernetes, however, adds further layers of complexity that frustrate the work of IT teams. The IT team must undertake the daunting task of monitoring not only the server and the application, but also the health of the containers, pods, nodes and the Kubernetes control plane itself.
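
To make that wider monitoring surface concrete, the minimal sketch below uses the official Kubernetes Python client to pull basic health signals for nodes and pods. The library choice and the restart threshold are illustrative assumptions; a full monitoring platform would collect far richer metrics alongside the application-level data.

```python
# Minimal sketch: surveying node and pod health with the official Kubernetes
# Python client (pip install kubernetes). Illustrative only; a production
# monitoring tool would collect far richer metrics than this.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running in-cluster
core = client.CoreV1Api()

# Node health: report any node whose "Ready" condition is not True.
for node in core.list_node().items:
    ready = next((c for c in node.status.conditions or [] if c.type == "Ready"), None)
    if ready is None or ready.status != "True":
        print(f"Node {node.metadata.name} is not Ready")

# Pod health: flag pods that are not Running/Succeeded or are restarting heavily.
for pod in core.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"Pod {pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
    for cs in pod.status.container_statuses or []:
        if cs.restart_count > 5:   # arbitrary example threshold
            print(f"Container {cs.name} restarted {cs.restart_count} times")
```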


To maximise the return on investment (ROI) of containers, monitoring the Kubernetes control plane and master components is highly important. Unhealthy components lead to issues in the scheduling of workloads, which can directly undermine the scalability and flexibility benefits as well as the running of business applications. When these applications fail, it can put serious strain on an organisation’s service level agreements (SLAs), customer commitments and the overall brand.
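
The article does not prescribe a mechanism, but one simple way to sketch a control-plane check is via the componentstatuses API, which reports the health of the scheduler, controller manager and etcd as seen by the API server. This API is deprecated in newer Kubernetes releases, which expose /livez and /readyz endpoints on the API server instead, so treat the following purely as an illustration.

```python
# Minimal sketch: a basic control-plane health check via the componentstatuses
# API. Deprecated in newer Kubernetes releases (which expose /livez and /readyz
# on the API server instead), so this is purely illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for comp in core.list_component_status().items:   # scheduler, controller-manager, etcd
    healthy = any(c.type == "Healthy" and c.status == "True"
                  for c in comp.conditions or [])
    print(f"{comp.metadata.name}: {'healthy' if healthy else 'UNHEALTHY'}")
```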


Beyond the monitoring of individual components, a keen eye must be kept on the overall business service being delivered. This multi-level monitoring necessitates a tool that is not just capable of monitoring containerised applications and all elements of infrastructure, but can easily roll that information up into an overall service view. This ensures that DevOps and IT support teams have holistic context on how components link together and how any emerging issue impacts related processes.
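
As a rough illustration of what ‘rolling up’ can mean in practice, the hypothetical sketch below groups per-component health by a service label and reduces it to a single status per business service. The data shape, the label and the status values are assumptions made for the example rather than features of any particular product.

```python
# Hypothetical sketch: rolling component-level health up into a service view.
# The data shape, the service label and the status values are illustrative
# assumptions; a real monitoring platform would derive these from its inventory.
from collections import defaultdict

component_health = [
    # (service label, component, status)
    ("checkout", "pod/checkout-7d4f9", "ok"),
    ("checkout", "pod/checkout-9b21c", "critical"),
    ("checkout", "node/worker-3", "ok"),
    ("search",   "pod/search-5fa01", "ok"),
]

def roll_up(components):
    """Reduce component statuses to one status per business service."""
    services = defaultdict(list)
    for service, _component, status in components:
        services[service].append(status)
    # A service is only as healthy as its worst component.
    order = {"ok": 0, "warning": 1, "critical": 2}
    return {svc: max(statuses, key=lambda s: order.get(s, 2))
            for svc, statuses in services.items()}

print(roll_up(component_health))   # {'checkout': 'critical', 'search': 'ok'}
```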


Monitor at the service level


The underlying infrastructure of applications is ever-growing in complexity. This being the case, it is important that an organisation’s IT team prioritises the applications and services critical to business functionality. To maintain a clear perspective of the networking infrastructure, IT teams must not be too focused on the individual container view – after all, if one specific container has raised an alert, this does not necessarily mean multiple other containers are failing. It may be that, despite the alert, business services have not been negatively impacted.


An effective way to maximise ROI while staying focused on what is integral to business functionality is to identify and monitor key performance indicators (KPIs) across the different containers. This provides an overall service- or application-level view and a telling perspective on how your applications are performing.
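
For example, a service-level KPI such as an aggregate error rate can be derived from whatever per-container counters are already collected. The sketch below uses made-up numbers to show how one noisy container need not push the service KPI over its threshold, echoing the point above; none of the figures come from a real system.

```python
# Hypothetical sketch: deriving a service-level KPI (error rate) from
# per-container request counters. All numbers are illustrative only.
per_container_metrics = {
    "checkout-7d4f9": {"requests": 12_450, "errors": 31},
    "checkout-9b21c": {"requests": 11_980, "errors": 204},
    "checkout-1c7aa": {"requests": 12_310, "errors": 27},
}

total_requests = sum(m["requests"] for m in per_container_metrics.values())
total_errors = sum(m["errors"] for m in per_container_metrics.values())
error_rate = 100.0 * total_errors / max(total_requests, 1)

# One container is misbehaving, but the KPI shows whether the *service* is still
# within an acceptable threshold (the 1% figure is an arbitrary example).
print(f"Service error rate: {error_rate:.2f}% "
      f"({'OK' if error_rate < 1.0 else 'breaching threshold'})")
```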


Automated monitoring saves time


When using Kubernetes, containerised workloads are scheduled across the nodes in a cluster so that resources are used efficiently to meet workload requirements. Manually adding containers and pods to, and removing them from, monitoring is time consuming, inefficient and, simply put, unrealistic. The oft-recycled truism that ‘time equals money’ certainly holds when an organisation’s IT team is stuck with a monitoring solution that requires manual changes. In this scenario, teams are faced with numerous tasks such as adding monitoring agents, configuring the metrics to be collected, and even specifying when alerts should be triggered by changing thresholds so that they reflect the needs of the business service.
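
One hedged way to automate that housekeeping is to react to the cluster’s own event stream. The sketch below uses the Kubernetes Python client’s watch API (an assumed tooling choice) to add pods to, and remove them from, a notional monitoring registry as they appear and disappear; register_pod and unregister_pod are hypothetical placeholders for whatever API the monitoring platform exposes.

```python
# Minimal sketch: keeping monitoring in sync with the cluster automatically.
# Uses the Kubernetes Python client's watch API; register_pod/unregister_pod
# are hypothetical stand-ins for a monitoring platform's own API.
from kubernetes import client, config, watch

def register_pod(pod):      # placeholder: e.g. call the monitoring tool's API
    print(f"monitoring + {pod.metadata.namespace}/{pod.metadata.name}")

def unregister_pod(pod):    # placeholder: stop monitoring a deleted pod
    print(f"monitoring - {pod.metadata.namespace}/{pod.metadata.name}")

config.load_kube_config()
core = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(core.list_pod_for_all_namespaces):
    pod = event["object"]
    if event["type"] == "ADDED":
        register_pod(pod)
    elif event["type"] == "DELETED":
        unregister_pod(pod)
```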


Underlining the need for automation is the fact that the resources themselves can be short-lived to begin with. Sysdig’s 2018 Docker report[2] demonstrated the ephemeral nature of containers, finding that 95 percent of containers live for less than a week, 85 percent for less than a day and 74 percent for less than an hour, while 27 percent live for between five and 10 minutes and 11 percent for less than 10 seconds.


These fleeting lifespans are not necessarily a drawback – indeed, they are part of why companies choose to implement containers. However, to maximise ROI, it is imperative that IT teams automate container monitoring, including automatically adding and removing the cluster resources to be monitored, in order to reduce the manual effort involved.


A unified view of monitored resources is essential  


Companies that use Kubernetes often have complex infrastructure – they may operate both in the cloud and on-premises, while using containers as a unifying layer to standardise application management and deployment. To effectively manage such labyrinthine systems, it is essential to have unified visibility of business services and their underlying infrastructure, spanning both containers and traditional components. More importantly, this visibility must provide automatic context: if an issue starts to arise in a container, the impact on other pertinent infrastructure components and on the overall business service is made known.
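
What ‘automatic context’ might look like can be sketched, under assumptions, by walking from an alerting container’s pod up to its node, owning workload and service label. The helper below is hypothetical and the ‘app’ label is only an assumed convention; a real platform would build this mapping from its own inventory.

```python
# Hypothetical sketch: attaching automatic context to a container alert by
# walking from the pod up to its node and service labels. The "app" label is
# an illustrative assumption, not a fixed convention.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def alert_context(namespace, pod_name):
    pod = core.read_namespaced_pod(pod_name, namespace)
    return {
        "pod": pod.metadata.name,
        "node": pod.spec.node_name,                         # underlying infrastructure
        "service": (pod.metadata.labels or {}).get("app"),  # assumed service label
        "owner": [f"{ref.kind}/{ref.name}"
                  for ref in pod.metadata.owner_references or []],
    }

# e.g. alert_context("prod", "checkout-7d4f9") might return something like
# {"pod": "checkout-7d4f9", "node": "worker-3", "service": "checkout",
#  "owner": ["ReplicaSet/checkout-7d4f"]}
```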

It is a fact of modern IT infrastructure that diverse environments are connected, so without a unified view it can be immensely challenging to troubleshoot issues that span environments. Given this complexity, monitoring tools must go beyond simply recording what has happened to understanding why – a unified intelligence approach that helps users remain proactive in the face of these challenges.

Effective monitoring ensures ROI on container investments

Containerised applications can be a useful tool in the hands of IT teams, offering scalability, flexibility and delivery speed. However, their utility is often matched by the complexity that they, and orchestration tools such as Kubernetes, bring to the infrastructure. To maximise ROI, effective monitoring is all but essential. The best way to achieve this is by following the few pivotal rules above; when they are followed, an organisation can truly make the most of its containerised applications.
