Keep on security running - why compliance has to be continuous

By Nathan Collins, Regional Vice President EMEA, NetAlly.

The cost of security breaches continues to rise. According to IBM’s Cost of a Data Breach Report for 2024, the average cost of a breach was $4.88 million, a rise of 10 percent year on year. With so much at stake, it should not be a surprise that governments are closely scrutinising their regulations around security. The European Union has introduced the NIS2 Directive to update guidelines for all organisations deemed to provide critical infrastructure across 18 sectors, from the usual suspects like banks, retailers, healthcare and utilities through to IT and cloud services providers that underpin digital business processes.

The updated NIS2 Directive helps organisations enhance their cybersecurity capabilities, develop their risk management approach and put reliable reporting in place around those risks. At the same time, it sets rules for better cooperation and information sharing between organisations. The overall goal is to raise the quality of security and risk management across the board.

What makes compliance difficult

Meeting the needs of regulation like NIS2 should be high on the priority list for companies. However, the amount of work needed to get compliant should not be underestimated. When security is so hard to achieve in the first place, let alone to keep in place, how can teams improve their performance? We need to look at the obstacles we face and evaluate whether our current approaches are the right ones.

First of all, it is important to recognise how tough it can be to know with confidence what assets or endpoints you have across your environment. The exponential growth of edge-connected devices (headless Internet of Things, operational technology, industrial controls, and other unmanaged assets) has added to the burden that exists for IT security and operations teams.

All those devices must be connected to the corporate network to work effectively. Without the proper security processes, those environments can contain unsecured connections, which pose risks. The ubiquitous nature of that connectivity can also be a challenge, as attackers can use unsecured assets to reach locations on the network that would otherwise be secure.

Alongside this, the network architecture is becoming increasingly complex to manage over time. Any device on the network - and the network itself - can have misconfigurations due to human error that can lead to gaps in security. To stay resilient and comply with NIS2, you must find those oversights and ensure you stay ahead of any problems.

Where does your data come from?

To address these challenges and maintain network security, you must identify what is currently installed. After this, you have to keep that inventory up to date over time. The challenge is that all the tools typically used to piece together this inventory can miss endpoints during discovery.

Traditionally, security teams rely on vulnerability management (VM) tools to understand what assets they have installed and - most importantly - what issues need to be addressed. However, while VM tools can theoretically provide this insight, the discovery process can easily break down. The further away that assets are from the central starting point for VM discovery, the more likely they are to be missed, resulting in individual assets or even entire network segments being overlooked. 

This can be due to the network architecture itself: techniques such as asymmetric routing, Network Address Translation (NAT) configurations or hub-and-spoke topologies can lead to missed assets, as can firewall settings. Similarly, network media converters can create undiscovered paths. A common misconfiguration is placing a switch in the wrong VLAN, so that it has no IP address in the VLAN segment under test; the resulting VLAN mismatch means assets on that segment will not respond to a broadcast.
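The VLAN mismatch described above is straightforward to check for once you have configuration data in hand. The sketch below is illustrative only: the device records, field names and addresses are hypothetical, standing in for whatever your inventory or configuration export actually provides.

```python
import ipaddress

# Hypothetical inventory records: each switch's management IP and the
# subnet of the VLAN it has been assigned to. Field names are illustrative.
switches = [
    {"name": "sw-edge-01", "mgmt_ip": "10.10.20.7", "vlan_subnet": "10.10.20.0/24"},
    {"name": "sw-edge-02", "mgmt_ip": "10.10.30.9", "vlan_subnet": "10.10.40.0/24"},
]

def vlan_mismatches(devices):
    """Flag devices whose management IP falls outside their VLAN's subnet -
    the misconfiguration that leaves a segment invisible to broadcast-based
    discovery."""
    flagged = []
    for d in devices:
        subnet = ipaddress.ip_network(d["vlan_subnet"])
        if ipaddress.ip_address(d["mgmt_ip"]) not in subnet:
            flagged.append(d["name"])
    return flagged

print(vlan_mismatches(switches))  # -> ['sw-edge-02']
```

Running a check like this against every switch in scope turns a silent discovery gap into an actionable finding.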

Alongside VM tools, network management products can provide insight into all the infrastructure in place across the network, from switches, routers and firewalls through to Wi-Fi Access Points. Typically, these products work by periodically collecting data from all the devices on the network using SNMP, packet sniffing, flow data, syslog, APIs, or agents. These network management tools can also be configured to alert on configuration changes, so that any significant change is flagged automatically. 
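The change-alerting behaviour described above usually boils down to comparing successive snapshots of each device's configuration. A minimal sketch of that idea, with hypothetical device names and configuration text (the real collection step would use SNMP, an API or an agent, as noted):

```python
import hashlib

def config_fingerprint(config_text: str) -> str:
    """Reduce a device configuration dump to a short, comparable fingerprint."""
    return hashlib.sha256(config_text.encode()).hexdigest()[:12]

def detect_changes(previous: dict, current: dict) -> list:
    """Compare fingerprints from two polling cycles and return the names of
    devices whose configuration changed (or appeared) between them."""
    return [dev for dev, fp in current.items() if previous.get(dev) != fp]

# Snapshots keyed by a hypothetical device name; the values fingerprint
# whatever configuration text the collector retrieved on each cycle.
old = {"core-rtr": config_fingerprint("hostname core-rtr\nvlan 20")}
new = {"core-rtr": config_fingerprint("hostname core-rtr\nvlan 30")}
print(detect_changes(old, new))  # -> ['core-rtr']
```

Comparing fingerprints rather than full configuration dumps keeps the stored state small, while still flagging every cycle in which something changed.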

While network management tools can communicate with network infrastructure elements, they frequently cannot discover all the endpoint devices that are on the network. In addition, they can be difficult to set up and configure, requiring specialized knowledge and training to use effectively, and frequently generate false alarms. This makes compliance harder over time.

Alongside VM and network management products, security teams frequently rely on their endpoint management tools to provide that level of visibility. Typical products used include network access control (NAC) tools, managed detection and response (MDR) solutions and endpoint profiling tools. These tools vary - some deploy agents to the endpoints to report information back, while others use network traffic analysis to passively observe endpoint traffic or flows.

Looking at endpoint management alone can be a challenge, especially when it comes to defining what an endpoint actually is. Common devices like PCs, tablets and servers can have agents installed on them, but what about other devices installed in edge environments? Operational technology systems like industrial control systems and headless IoT devices can’t support agents, and other items like IP cameras, building control systems and sensors can also exist at the edge. Those systems should be tracked and kept secure just like a traditional endpoint asset. Any device without that agent can create a blind spot that malicious actors could exploit.
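One way to surface those blind spots is to compare what the agents report against what is actually seen on the wire. The sketch below assumes two hypothetical data sets, a list of MAC addresses from agent-managed devices and a list observed passively on the network; every address in the second set but not the first is an unmanaged device that needs another form of tracking.

```python
# Hypothetical data: MAC addresses of agent-managed devices versus
# addresses seen passively on the network segment.
agent_reported = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}
observed_on_network = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                       "de:ad:be:ef:00:99"}  # e.g. a headless IP camera

def blind_spots(observed, managed):
    """Return devices visible on the network but covered by no agent -
    the candidates for agentless tracking and profiling."""
    return sorted(observed - managed)

print(blind_spots(observed_on_network, agent_reported))
# -> ['de:ad:be:ef:00:99']
```

The same set difference works regardless of which identifier you key on (MAC, IP, serial number), as long as both sources report it consistently.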

There is also a cost and complexity element. For instance, endpoint management tools can be considerably more expensive to procure and cumbersome to implement because of the requirement to span or tap ports in order to work. Additionally, the amount of data generated by these solutions must be saved, stored and analysed over time, which adds another layer of overhead.

How to plan ahead

If you can’t get a full picture of what you have, you can’t ensure it is secure and resilient. To build an effective approach to security and compliance, you have to start with a complete asset inventory. Using a combination of tools, you can get that accurate inventory in place that will be the basis for your long-term planning. 

One consideration is that, while a centralised inventory can be carried out successfully, there is no substitute for getting out to the edge and carrying out testing within each location. Local network testing can corroborate your approach, but also find additional networks or connected devices that have to be brought into scope. This local testing should be a regular part of your strategy so that you can keep your security and compliance models up to date with your real-world environment.
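The reconciliation step described above can be sketched simply: compare the central inventory with what a local, on-site test actually found, and split the result into assets that are corroborated, assets newly discovered at the edge, and entries the local test could not see. The asset names below are hypothetical placeholders.

```python
def reconcile(central: set, local: set) -> dict:
    """Compare the central asset inventory against a local edge test,
    returning corroborated assets, new discoveries, and entries the
    local test failed to observe."""
    return {
        "corroborated": sorted(central & local),
        "new_at_edge":  sorted(local - central),
        "unseen":       sorted(central - local),
    }

# Hypothetical asset names from the central inventory and a local scan.
central_inventory = {"printer-01", "plc-03", "ap-07"}
local_scan        = {"printer-01", "plc-03", "camera-12"}
result = reconcile(central_inventory, local_scan)
print(result["new_at_edge"])  # -> ['camera-12']
```

Each "new_at_edge" entry is a device to bring into scope; each "unseen" entry warrants a check on whether the asset was retired or the local test simply missed it.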

Compliance frameworks like NIS2 provide effective guides for security and resilience. At the same time, they generate additional work for security and networking teams to manage. By understanding where asset programmes can fall short, you can reduce the potential for gaps in your planning and prevent issues before they arise. More importantly, you can make the compliance process easier and prove that you are following those best practices.

Client: NetAlly

Theme: Security and compliance

Publication: Digitalisation World

Editor: Philip Alsop

Speaker: Nathan Collins, Regional Vice President EMEA, NetAlly

Deadline: 30th January 2025

Suggested title: Keep on security running - why compliance has to be continuous

