
Having Zero Trust in the cloud

The Zero Trust framework provides high-level principles to be applied to an enterprise architecture in order to protect its workloads and users. Leveraging cloud computing resources from public Cloud Service Providers (CSPs) is increasingly becoming a standard part of an organization’s IT architecture. With the advent of cloud computing and the current pace of adoption of public cloud services within organizations, traditional security practices need to be tailored to this new way of developing and deploying business applications. This article sets out to provide cloud professionals and organizations who are growing their (public) cloud footprint with tangible approaches to boost their organization’s security practice through cloud-native solutions.


Zero Trust is a security concept that assumes that any user, device, or system attempting to access a network or (cloud) resources should not be trusted by default and must be verified before being granted access ([Rose20]). While trust in everyday life has been studied extensively for centuries, the formalization and application of what trust and zero trust entail from a computational perspective dates back to work by S.P. Marsh in 1994. In the early 2000s, security forums and researchers understood the challenges and limitations of perimeter-only defense and generally advocated multi-layer perimeter defense to overcome them. Around 2009, the actual concept of Zero Trust architecture, advocating a stricter and, seemingly contradictory, more open security and access approach within an organization’s network, began to take shape, especially through research conducted by Forrester analysts. In 2014, Google implemented Zero Trust architecture as part of its BeyondCorp platform ([Goog]), enabling employees to access applications hosted in the cloud or on-premises from anywhere without relying on traditional VPN services. Around the same timeframe, Microsoft increased its focus on the “Assume Breach” element of the Zero Trust framework.

The Zero Trust framework is especially suitable in the context of (public) cloud computing and the ongoing pivot from privately owned datacenters and infrastructure to rapidly deployable cloud services that abstract application code away from the underlying infrastructure. While traditional (typically on-premises) security measures certainly apply to cloud workloads, a slight shift in the paradigm is required. In order to maximize the business value of cloud workloads and keep time-to-production low, application teams prefer to leverage the latest services that Cloud Service Providers (CSPs) have to offer. Depending on the maturity of an organization’s cloud practice, and given the high rate at which (new) cloud services are developed, misconfiguration of cloud services and the level of cloud operator access to tenant data remain a threat. Principles introduced by the Zero Trust framework, like isolation and inherent distrust between workloads, can help to maintain a security baseline without significantly impacting innovation and further adoption of (new) cloud services.

Where securing the cloud platform is one of the key aspects, it is also vital that end users in a company have a good understanding of security-related risks. Security awareness, built through initiatives like foxhunts and gamedays, is an important component of a Zero Trust strategy for cloud environments. These types of initiatives help to educate employees about the risks and threats they may encounter in the cloud, as well as the actions they can take to protect the organization’s assets.

The remainder of this article dives into several aspects of the Zero Trust framework that are often adopted and how these can be applied in the context of (public) cloud.

Cloud Security First

A Cloud Security First (CSF) strategy is a holistic paradigm for securing cloud environments, which prioritizes security measures above all else. The goal of a CSF strategy is to establish a comprehensive security mentality that covers all aspects of the cloud environment and is integrated into all software development practices. Key components of the CSF strategy are understanding the shared security responsibility model with CSPs, assessing your current security posture, understanding your cloud architecture, implementing security controls, and encouraging a culture of security first. Especially the latter cannot be overlooked in the context of a fast-growing set of managed services from CSPs and features provided by new Infrastructure as Code (IaC) frameworks.

Where CSF focuses on adopting and cultivating a security-first mindset, Zero Trust provides a more practical application of enforcing security through trust mechanisms. CSF touches every aspect of security awareness in your organization: everything from detection and response to security awareness in business operations.

Security awareness tends to impact almost every aspect of organizations today: organizational structure, roles, resources, policies, strategy and operations. The Zero Trust framework provides a more tangible approach to tackling security in modern (cloud) architectures. Therefore, in the remainder of this article it will be used as a guide to highlight several different security practices, which are essential for deploying Zero Trust in your cloud environment. The most important aspect is operating from the assumption that your cloud environment is already breached.

Assume you are breached

One of the foundational principles of Zero Trust is to assume that any system or network, no matter how secure it is, can be breached. It is based on the idea that organizations should not trust any user, device, or system on the network until they have been properly authenticated and authorized. In such an environment, all network traffic is treated as untrusted and is subject to strict security controls, such as multi-factor authentication, network segmentation, and micro-segmentation. This helps to prevent attackers from moving laterally across the network once they have breached a single system.

The model also emphasizes the need for continuous monitoring and real-time threat detection to quickly identify and respond to security incidents. Security automation is the essential keyword in this area. While traditional Security Incident and Event Management (SIEM) solutions are well versed in collecting and consolidating information for security specialists, traditional human incident detection and resolution does not scale in an ever-growing and complex enterprise cloud ecosystem. Therefore, to be prepared for breaches and respond within seconds, the development of security automation solutions should be a priority, either commercial off-the-shelf or self-developed based on cloud-native services. To allow automation of security responses, many (native) cloud security services already provide a level of integration with cloud application and data services. It is therefore important that, during design time, engineers and architects (from different disciplines) collaborate to consider safeguards and security controls upfront, i.e. shift security concerns left and reuse central security solutions. A starting point could be to define your infrastructure security posture in the context of an “outside-in” and “inside-out” defense approach.
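The detect-evaluate-respond loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration of an automated response policy; the finding categories, severities, and actions are illustrative assumptions, not a specific CSP’s API:

```python
# Minimal sketch of an automated "assume breach" response loop.
# All finding categories and actions below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Finding:
    resource_id: str
    severity: str      # "low" | "medium" | "high"
    category: str      # e.g. "public_exposure", "anomalous_login"

def respond(finding: Finding) -> str:
    """Map a security finding to an automated action within seconds,
    instead of waiting for a human analyst."""
    if finding.severity == "high":
        # e.g. detach the resource from the network / revoke credentials
        return f"quarantine:{finding.resource_id}"
    if finding.category == "public_exposure":
        # e.g. re-apply the intended (private) configuration
        return f"remediate:{finding.resource_id}"
    # low-risk findings are only logged for the SOC to review
    return f"log:{finding.resource_id}"

print(respond(Finding("vm-42", "high", "anomalous_login")))
# quarantine:vm-42
```

In a real environment, the response functions would call CSP APIs (network isolation, credential revocation), but the principle is the same: codify the playbook so the first response does not depend on a human being awake.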

Outside-in vs inside-out

An outside-in approach, also known as perimeter-based security, focuses on securing the boundaries of the cloud environment. This includes implementing security controls, such as firewalls and intrusion detection/prevention systems (IDS/IPS), which will be discussed further on, to protect against external threats. The goal of an outside-in approach is to prevent unauthorized access to the cloud environment.

An inside-out approach, on the other hand, focuses on securing the resources within the cloud environment. This includes implementing security controls such as access controls (IAM), data control access and data encryption to protect against internal threats. The goal of an inside-out approach is to prevent data breaches (e.g. exfiltration) and unauthorized access to cloud resources.

A Zero Trust strategy typically employs both outside-in and inside-out approaches, as they complement each other. Namely, implementing security controls, such as firewalls and IDS/IPS, ensures perimeter protection of the cloud environment, while implementing access controls and data encryption security controls aim to protect data and resources within the cloud environment.

Before looking at the IDS/IPS aspect, a suitable next step is to understand data flows and network connectivity requirements for your cloud environment, which are commonly categorized into two types of traffic: ingress and egress.

The flow of network traffic

Ingress and egress refer to the flow of (network) traffic into and out of a network or system. The Zero Trust model assumes that any system or network, no matter how secure it is, can be breached. As such, it emphasizes the need for strict security controls to protect against unauthorized access, regardless of whether the traffic is incoming or outgoing.

For ingress traffic, a Zero Trust model involves verifying the identity of the source of network traffic and ensuring that it is authorized to access the network or system. This can be achieved by implementing multi-factor authentication (MFA) and by using security protocols such as Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL), to encrypt the traffic.
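As a small illustration of enforcing TLS on a connection, the sketch below uses Python’s standard `ssl` module to build a client-side policy that verifies certificates, checks hostnames, and refuses legacy protocol versions. This is one possible way to express such a policy, not a complete ingress solution:

```python
import ssl

# Sketch: a TLS policy for connections in a Zero Trust environment.
context = ssl.create_default_context()            # verifies certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
context.check_hostname = True                     # reject certificate/hostname mismatches

# A socket wrapped with this context will fail the handshake unless the
# peer presents a valid certificate chain for the expected hostname.
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
```

The same idea applies regardless of language or platform: make certificate verification and modern protocol versions the default, so untrusted traffic is rejected before any application logic runs.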

Additionally, traffic coming in could be inspected and analyzed by security devices such as firewalls, Intrusion Prevention Systems (IPS) or Next-generation firewalls (NGFWs), which would be able to identify and block malicious traffic. Also, the use of a demilitarized zone (DMZ) could strengthen the security posture for incoming traffic. DMZs will be further discussed in the next paragraph.

For egress traffic, a Zero Trust model involves ensuring that traffic is only sent to authorized destinations, and that the data is protected from exfiltration. This can be achieved by implementing encryption, monitoring for data leakage, and using security protocols such as TLS to encrypt the traffic.
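A common building block for egress control is a default-deny allow-list of destinations. The sketch below illustrates the idea; the destination names are hypothetical placeholders:

```python
# Sketch of an egress control: only allow outbound traffic to
# pre-approved destinations. The allow-list entries are illustrative.

ALLOWED_EGRESS = {
    ("updates.example.com", 443),   # hypothetical package mirror
    ("logs.example.com", 443),      # hypothetical central log sink
}

def egress_permitted(host: str, port: int) -> bool:
    """Default-deny: traffic to any destination not explicitly on the
    allow-list is blocked (and should typically raise an alert)."""
    return (host, port) in ALLOWED_EGRESS

print(egress_permitted("logs.example.com", 443))      # True
print(egress_permitted("attacker.example.net", 443))  # False
```

In practice this policy would live in a cloud firewall or outbound proxy rather than application code, but the default-deny principle is the essence of Zero Trust egress control.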

Network segmentation is a powerful tool for ensuring that both ingress and egress traffic is properly controlled in a Zero Trust environment. This involves dividing the network into smaller, more manageable segments and implementing security controls on each segment. This can help to prevent attackers from moving laterally across your cloud network once they have identified a vulnerable entry point into your cloud environment.

DMZ, an extra layer of security?

A demilitarized zone (DMZ) is a security architecture commonly used to provide an additional layer of protection in cloud environments. This pattern originates from on-premises solutions and is still commonly used to add an extra layer of security to cloud environments. However, from a pure Zero Trust perspective, a DMZ is not required, as Zero Trust does not solely rely on network-level security measures and perimeter defense.

A DMZ is typically implemented as a separate network segment that sits between an organization’s internal network and the Internet. The DMZ serves as a buffer zone, where incoming traffic is screened and filtered before it is allowed to access the internal network.

In a cloud environment, a DMZ can be implemented to protect cloud-based resources such as servers, databases and applications. For example, public-facing (web) servers can be placed in the DMZ, while sensitive data is stored on servers located in the internal network.

The DMZ can also include security controls such as firewalls, intrusion detection/prevention systems (IDS/IPS) and load balancers to provide protection against various types of cyber-attacks.

By implementing a DMZ in a cloud environment, organizations can:

  • reduce the attack surface of their cloud infrastructure by limiting access to only necessary resources;
  • improve security by placing security devices in the DMZ;
  • segment the internal and external network;
  • isolate public-facing resources to prevent unauthorized access of sensitive data;
  • enable better incident response and forensic analysis.

Implementing a DMZ in a cloud environment can help organizations to better protect their cloud resources and ensure that only authorized access is allowed. CSPs often provide several cloud-native services to help you provision security solutions in your DMZ.

IDS/IPS: essential security controls

An Intrusion Detection System (IDS) and an Intrusion Prevention System (IPS) are key components of a Zero Trust strategy for cloud environments. While newer concepts, such as Endpoint Detection and Response (EDR), Extended Detection and Response (XDR) and Network Detection and Response (NDR), are being introduced by third-party solutions, cloud-native services for IDS/IPS are still widely relied on.

An IDS is a security solution that monitors network traffic and identifies potential security threats, for example, unauthorized access attempts or malicious activity. An IDS can be configured to alert security teams of potential threats and may also be configured to take automated actions such as blocking the traffic.

An IPS, on the other hand, is a security solution that monitors network traffic and actively blocks or mitigates security threats. It can be considered an advanced version of an IDS: it examines network traffic in real time and takes action to prevent malicious packets from reaching their intended targets.
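The difference between the two can be made concrete with a toy signature-matching routine: the same inspection logic either alerts (IDS behavior) or drops the traffic (IPS behavior). The indicators below are illustrative only, using documentation IP ranges and a naive injection signature:

```python
# Minimal signature-based inspection sketch. The rules are illustrative;
# real IDS/IPS engines use large, continuously updated rule sets.

BLOCKED_SOURCES = {"198.51.100.7"}          # known-bad IP (documentation range)
SUSPICIOUS_PATTERNS = (b"' OR 1=1 --",)     # naive SQL-injection signature

def inspect(src_ip: str, payload: bytes) -> str:
    """IPS behavior: 'drop' known-bad sources before delivery.
    IDS behavior: 'alert' the SOC on suspicious content."""
    if src_ip in BLOCKED_SOURCES:
        return "drop"                        # prevention
    if any(p in payload for p in SUSPICIOUS_PATTERNS):
        return "alert"                       # detection
    return "allow"

print(inspect("203.0.113.9", b"GET /login?user=' OR 1=1 --"))  # alert
```

Cloud-native IDS/IPS services apply this same pattern at the platform level, with the advantage that matched events can directly trigger the automated responses discussed earlier.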

When implemented in a cloud environment, an IDS/IPS can provide several key benefits, such as:

  • monitoring and protecting cloud resources against various types of cyberattacks;
  • real-time threat detection, alerting and blocking;
  • analyzing the traffic and identifying malicious traffic;
  • providing detailed information on security incidents to the Security Operations Centre (SOC) team;
  • helping to prevent data breaches and unauthorized access to cloud resources.

These solutions are a vital part of a secure cloud environment. However, large segments of traditional security solutions rely on human operation and control. In some cases, extra layers of protection can be implemented through automation. The Assume Breach mindset already touched on the need to consider security controls from the outset and throughout the design and development of your cloud environment. In practice this means that automated security controls are essential for a secure cloud environment.

Automation plays a critical role

In a cloud-based environment, automation plays a vital role in the deployment of solutions, auto-healing and other tasks where human intervention was traditionally required. People are fallible, and that fallibility can cause data breaches: misconfiguration is responsible for 10% of security breaches ([Veri22]).

From a Zero Trust perspective, automation will play an even more critical role in enforcing security policies and maintaining the integrity of a cloud network. Automation can be applied in virtually all security control scenarios in a cloud environment:

  • Verify the identity of users and devices. Automated systems can verify the identity of users and devices through multi-factor authentication (MFA). MFA tests multiple aspects of a user to verify their identity, combining something the user knows (e.g., a password), something the user has (e.g., a hardware token), and something the user is (e.g., a fingerprint).
  • Monitor network activity. Automated systems can continuously monitor network activity for unusual or suspicious behavior. For example, machine learning algorithms can be trained to detect anomalies in network traffic, such as excessive data exfiltration or unauthorized access attempts.
  • Enforce security policies. Automated systems can be used to enforce security policies, such as firewalls and access controls, in real-time. For example, an automated system could automatically block traffic from a known malicious IP address.
  • Continuously discover and monitor infrastructure. When using Infrastructure as a Service, automation can discover resources and continuously monitor them to ensure only authorized access is granted and to detect any misconfigurations in the infrastructure.
  • Scan for compliance gaps. Scanning solutions minimize your security risks and accelerate the remediation process by comparing cloud application configurations to compliance policies to identify gaps quickly.
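To make the “something the user has” factor from the list above concrete, the sketch below implements the HMAC-based one-time password (HOTP) algorithm from RFC 4226, which underlies hardware tokens and authenticator apps. This is the standard algorithm, verified here against the RFC’s published test vector:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): the building block behind
    hardware tokens and authenticator apps ("something you have")."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> 755224
print(hotp(b"12345678901234567890", 0))  # 755224
```

Time-based one-time passwords (TOTP, RFC 6238) apply the same function with the counter derived from the current time, which is what most authenticator apps display.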

Cloud scanning solutions

There are different types of security scanning solutions available, but when it comes to cloud-native solutions, the two main types focus on the Network (Layer 3) and Application (Layer 7) layers, as defined in the Open Systems Interconnection (OSI) model. Layer 3 scans focus on the underlying infrastructure of a cloud environment, such as the network, servers, and other devices. These scans typically include checks for vulnerabilities in the operating system, software, and network configuration. They can also include checks for misconfigurations, such as open ports and weak passwords. The major cloud platforms provide built-in security at this level, such as (web application) firewalls, DDoS protection and vulnerability scanners.

Layer 7 scans focus on the upper layers of the cloud environment, such as web applications and APIs. These scans typically include checks for vulnerabilities in the web application code, such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). They can also include checks for misconfigurations and weak access controls.

Both Layer 3 and Layer 7 scans are important for identifying vulnerabilities and misconfigurations in a cloud environment, as they provide different perspectives and comprehensive coverage.
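A simple Layer 3-style check can be expressed as comparing declared network rules against a compliance policy. The rule format and policy below are illustrative assumptions, not any platform’s real API:

```python
# Illustrative misconfiguration check in the spirit of a Layer 3 scan:
# compare declared firewall rules against a compliance policy.
# The rule format and the policy are hypothetical.

FORBIDDEN_OPEN_PORTS = {22, 3389}   # SSH / RDP should not face the internet

def scan_firewall_rules(rules: list[dict]) -> list[str]:
    """Return a finding for every rule exposing a risky port to 0.0.0.0/0."""
    findings = []
    for rule in rules:
        if rule["source"] == "0.0.0.0/0" and rule["port"] in FORBIDDEN_OPEN_PORTS:
            findings.append(f"port {rule['port']} open to the internet ({rule['name']})")
    return findings

rules = [
    {"name": "web", "port": 443, "source": "0.0.0.0/0"},
    {"name": "admin-ssh", "port": 22, "source": "0.0.0.0/0"},  # misconfiguration
]
print(scan_firewall_rules(rules))  # ['port 22 open to the internet (admin-ssh)']
```

Cloud-native posture-management services run essentially this kind of comparison continuously across all subscriptions or accounts, which is what makes them suitable for the automation scenarios described earlier.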

When implemented as part of the cloud environments, these scans can help organizations to:

  • identify vulnerabilities and misconfigurations in the cloud environment;
  • prioritize vulnerabilities and misconfigurations based on their level of risk;
  • provide detailed information on vulnerabilities and misconfigurations that need to be fixed;
  • help prevent data breaches and unauthorized access to cloud resources;
  • enable incident response and forensic analysis.

These scanning solutions aim to detect common vulnerabilities and known breaching tactics. One of the most widely used awareness lists is published by OWASP ([OWAS22]), which periodically compiles a list of the top 10 biggest security risks. OWASP’s publications can be a useful trigger for organizations to actively test their cloud environment’s security. One common approach for this is called Red Teaming.

Red Teaming

Red Teaming is a security testing method that simulates real-world attacks to assess an organization’s security posture and identify vulnerabilities. In the context of a Zero Trust strategy for cloud environments, Red Teaming can be used to test the effectiveness of security controls and identify any gaps in protection.

The red team simulates a variety of attacks, such as phishing, social engineering, and advanced persistent threats, to test an organization’s ability to detect and respond to cyber-attacks. The red team also evaluates the security of cloud infrastructure, including network segmentation, access controls, and incident response processes.

The goal of Red Teaming is to identify and exploit vulnerabilities before they can be used by malicious actors. By simulating real-world attacks, the red team can provide valuable insight into an organization’s security posture and help to identify areas that need improvement.

When implemented as part of a Zero Trust strategy for cloud environments, Red Teaming can help organizations to:

  • identify and prioritize vulnerabilities;
  • improve incident response capabilities;
  • validate the effectiveness of security controls;
  • enhance security awareness and employee training;
  • provide a comprehensive security testing and validation.

By conducting regular Red Teaming exercises, organizations can ensure that their cloud environments are as secure as possible, and that they are prepared to detect and respond to cyber-attacks.

While it is key (and fun!) to regularly apply Red Teaming and penetration-test your cloud environment, a security strategy should never overlook human-system interactions. Human actors are part and parcel of every application, workload and cloud resource. Therefore, continuous focus on security awareness is required to maintain knowledge and create “muscle memory” to recognize and respond to possible security threats. The education of stakeholders in your organization can be done in many ways. Two common and proven approaches are covered in the next section.

Foxhunts and gamedays to increase security awareness

Security awareness training is an important component of a Zero Trust strategy, as it helps to educate employees about the risks and threats they may encounter in the cloud, as well as the actions they can take to protect the organization’s assets. 82% of breaches involve a human element, including social engineering, plain human errors and misuse ([Veri22]).

One way to increase security awareness is to use gamified training methods such as foxhunt and gamedays. A foxhunt is a simulated security incident where a team of security experts, known as the red team, simulates an attack on the organization’s network. The goal of the foxhunt is to identify and exploit vulnerabilities before they can be used by malicious actors. Employees are trained to detect and respond to a simulated attack, which helps to improve their ability to detect and respond to real-world attacks.

Gamedays are similar to foxhunts, but instead of simulating a security incident, the red team simulates a business continuity event, such as a natural disaster or power outage. The goal of the gameday is to test the organization’s ability to respond to and recover from an unexpected event.

Foxhunts and gamedays can be adapted to the cloud environment and can include simulations of different types of cloud-specific attacks, such as a cloud account compromise or a misconfiguration in a cloud infrastructure.

Security awareness training with foxhunts and gamedays can help organizations to:

  • improve employee knowledge and understanding of cyber threats and risks;
  • improve incident response capabilities;
  • validate the effectiveness of security controls;
  • enhance security awareness and employee training;
  • provide a comprehensive security testing and validation.

By conducting regular foxhunt and gameday exercises, organizations can ensure that their employees are prepared to detect and respond to cyber-attacks, and that they are aware of the best practices to protect the organization’s assets in the cloud.


In conclusion, a cloud-based Zero Trust security model is a proactive approach to securing access, data and network resources that can help organizations prevent data breaches and unauthorized access. It works by continuously verifying the identity and trustworthiness of users and devices, and by enforcing security policies through automation. Zero Trust in the cloud provides a framework to protect against threats, both inside and outside the organization, by implementing multiple layers of security and validation, and by continuously monitoring for suspicious activity. This helps organizations to reduce their attack surface, making it more difficult for malicious actors to gain access to sensitive information.


[Goog] Google (n.d.). BeyondCorp.

[OWAS22] OWASP (2022). OWASP Top Ten.

[Rose20] Rose, S., Borchert, O., Mitchell, S., & Connelly, S. (2020). Zero Trust Architecture (NIST Special Publication 800-207).

[Veri22] Verizon (2022). Data Breach Investigations Report.

Quantum computing risks and opportunities: how to become post-quantum ready

The advance of quantum computing brings new risks and opportunities that decision-makers need to consider to make their organizations “quantum ready”. Traditional cybersecurity measures such as cryptographic keys or encryption of sensitive information need to be re-evaluated to identify potential weaknesses. In addition to taking practical steps today, it is important to build the right strategy for the long term, bringing together today’s actions with tomorrow’s risk landscape.


Quantum computing is an upcoming technology that will have major implications for society and organizations of any sector and size. It brings new opportunities that can be leveraged as well as new risks that need to be managed. One area that is expected to be especially affected by quantum computers is cryptography, i.e. the encryption of information through algorithms. The recent steep development of quantum computing capabilities in lab environments is only a “warming-up” phase before the technology will hit the market at a larger scale. This means that decision makers should start taking the right steps now to prepare their organization for the future.

Organizations should future-proof their cryptographic data protection controls ([Baum22]) and strengthen their security of access control measures in both the IT and Operational Technology (OT) domain. They should also re-think how they can leverage quantum technology to improve their service offering. To provide practical advice, this article elaborates on the key trends and implications of quantum computing in the cybersecurity area of cryptography, both on the risk and opportunity side, equipping decision makers with the right context and next steps to take.

Quantum Computing significantly improves calculation speed

Quantum computers are computers based on quantum technology. They make use of the mechanics of particles at a sub-atomic level and have the potential to outperform traditional computers by miles. Although a quantum computer outperformed a traditional (albeit very powerful) computer for the first time only recently, in 2019 ([Oliv19]), the race for ever better quantum computers is picking up fast. Tech giants such as IBM ([IBM23]), Microsoft ([Micr23]), Honeywell ([Hone23]), Google ([Goog23]) and Intel ([Inte23]) are heavily investing in quantum technology, working on ever more efficient and scalable solutions. For example, IBM’s first quantum computing system, produced in 1998, had 2 qubits (quantum bits, [IBMQ23]). In 2022, IBM introduced their 433-qubit quantum computer Osprey. For reference: the breakthrough in 2019 was achieved by a quantum computer with only 53 qubits ([Arut19]).

While the number of qubits in itself is not sufficient as a performance indicator ([Smit22]), the exponential growth in complexity demonstrates the potential computing power that quantum computers may hold in the future ([Feld19]). Proof of concepts from the lab give a promising outlook towards the potential capabilities of mature quantum technology. Once mature, quantum technology will significantly speed up calculations, yielding advantages in e.g. data lake analysis, modelling of industrial processes or optimization of network traffic flows. Additionally, its computing power will significantly reduce the time required to break a cryptographic key based on large number factorization – a tough problem today that will become relatively easy to crack in the future. With their advanced computing power, quantum computers will pose a threat to widely used cryptographic solutions like RSA ([MIT19]).
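The threat to RSA can be illustrated with a deliberately tiny key. The toy sketch below factors a small modulus by classical trial division and then reconstructs the private key from the factors; for real 2048-bit moduli this factoring step is infeasible classically, which is precisely what Shor’s algorithm on a mature quantum computer changes:

```python
# Toy illustration of why factoring breaks RSA. The modulus here is
# deliberately tiny; real RSA moduli (2048+ bits) cannot be factored
# classically in any practical timeframe.

def factor(n: int) -> tuple[int, int]:
    """Classical trial division: exponential in the bit length of n."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no factor found")

n, e = 3233, 17                  # tiny public key (n = 53 * 61)
p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # private exponent recovered from the factors

ciphertext = pow(42, e, n)       # "encrypt" the message 42 with the public key
print(pow(ciphertext, d, n))     # 42 -- the attacker reads the plaintext
```

Once the modulus is factored, deriving the private exponent is trivial modular arithmetic; all the security rests on the factoring step that quantum computers are expected to make tractable.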

Addressing the risks and opportunities of Quantum Computing

Understanding how to work today to be prepared for tomorrow

With the advancement of the Internet of Things (IoT), i.e. internet-connected devices in both domestic and industrial settings, the world is increasingly interconnected. This implies two things: an exponentially growing pool of data, and increasing dependencies between digital and traditional/non-digital technology. While an exponentially growing pool of data can add true value to organizations and individuals, it also increases the exposure to security risks. Sensitive personal data, business critical knowledge stored in a digital format and digital systems in general, contain a pool of information that needs to be protected. Similarly, access to digital systems that are used to manage non-digital technology (e.g. an electricity grid) needs to be controlled in a way that prevents malicious actors from abusing those systems, with potentially drastic consequences.

Traditional IT security measures have matured to a level where they can reliably act against traditional cybersecurity threats, for example by encrypting data with long and complex digital keys that would take a traditional computer many years to decrypt. A prominent capability of quantum computing is to eventually be able to decrypt information that is securely encrypted by today’s standards. Hence, a sufficiently advanced quantum computer could easily break traditional security measures, within seconds to hours, compared to the millions or even billions of years it would take a traditional computer.

Although a quantum computer may be necessary for breaking an encryption in the future, that same computer is not needed for acquiring the encrypted data today. Malicious actors can intercept a data flow (data harvesting), store it, and keep the data until quantum technology is ready to break the encryption (a problem also known as “harvest now – decrypt later”).

Data loss can happen for a variety of reasons, such as human mistakes and social engineering. Therefore, implementing security measures to prevent data loss today is an important step in becoming post-quantum secure in the future.

Identifying key risk areas

What we should consider today is what kind of data is valuable to an adversary. The data must have long-term value in order to remain useful to an attacker in the future, when quantum computers are available. Think of data whose confidentiality, availability or integrity must be protected for a period of time, where significant consequences would follow if it were decrypted or otherwise compromised down the line.

For a post-quantum risk assessment, it is therefore necessary to identify types of electronic data that contain sensitive or critical information for an organization and require special protection. As a next step, security measures should be identified and implemented to prevent the data in question from being stolen or leaked, through raising awareness within the organization and ensuring that data is shared only on a need-to-know basis.
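A common rule of thumb for framing this assessment is Mosca’s inequality: if the required protection lifetime of the data plus the time needed to migrate to quantum-resistant cryptography exceeds the estimated years until a cryptographically relevant quantum computer, data harvested today is at risk. The sketch below expresses this as a simple check; the example numbers are purely illustrative:

```python
# Sketch of Mosca's inequality as a post-quantum risk check:
# shelf_life + migration_time > years_to_quantum  =>  at risk.

def quantum_risk(shelf_life: int, migration_time: int, years_to_quantum: int) -> bool:
    """True if data encrypted today could still be sensitive by the time
    a quantum computer can decrypt it ("harvest now - decrypt later")."""
    return shelf_life + migration_time > years_to_quantum

# Illustrative numbers: records must stay confidential for 25 years,
# migration takes ~5 years, quantum decryption estimated in ~15 years.
print(quantum_risk(shelf_life=25, migration_time=5, years_to_quantum=15))  # True
```

Running this check per data category makes the prioritization concrete: long-lived sensitive data should drive the migration planning, while short-lived data may tolerate a later transition.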

Additionally, organizations should consider the risks quantum computers pose to security, not only of traditional IT, but also internet-connected Operational Technology (OT) or the so-called Internet of Things (IoT). In case access to the control of OT is compromised, attackers may take over devices and control them for their purposes. At a small scale, this can mean intrusion of privacy through e.g. home cameras connected to the internet. On a larger scale, it can mean the disruption of critical infrastructure networks (electricity grids, nuclear facilities, water management and many more). Individuals as well as governments and businesses need to prepare for these eventualities.

Building the right strategy and mitigating (future) risks

Implementing new security measures – e.g. security awareness campaigns on how to handle sensitive data – takes effort and time. So in addition to taking practical steps today, it is important to build the right strategy for the long term, bringing together today’s actions with tomorrow’s risk landscape. Any good security strategy does not wait until an issue arises, but prepares for the future to provide long-term value. It is important to understand today’s risks (e.g. data leakage), how they relate to future risks (e.g. the decryption of leaked data through quantum computers), and what can be done today to reduce the overall risk profile in the future. The Dutch Ministry of Defense has included quantum computing in their 5-year research and technology agenda (2021-2025) to start understanding its impact on society ([MinD20]). Organizations should do the same to stay ahead of malicious actors.

Apart from traditional security actions, such as awareness measures or limiting data access on the need-to-know principle, it is equally important to identify and consider embedding future-proof security measures. For example, there are cryptographic solutions available today that are likely to withstand even the most powerful quantum computers ([NIST22]). Migrating from current encryption standards to quantum-resistant schemes should therefore be on the priority list of a post-quantum security strategy. With this in mind, it is not surprising that the US government, for example, is trying to pass laws that will mandate government agencies to use Post-Quantum Cryptography (PQC) algorithms for public-key cryptography. The execution of these plans can differ in implementation strategy (e.g. hybrid use of PQC and standard encryption), and, as with every cyber implementation strategy, these choices will depend on a trade-off between security, performance (especially reach) and costs.
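A hybrid scheme of this kind can be illustrated with a minimal sketch: two independently obtained shared secrets, one classical and one post-quantum, are combined into a single session key, so the key stays safe as long as at least one of the two schemes holds. The function name and the placeholder secrets below are illustrative assumptions, not a production design.

```python
import hashlib
import hmac
import os

def hybrid_derive_key(classical_secret: bytes, pqc_secret: bytes,
                      context: bytes = b"hybrid-kem-v1") -> bytes:
    """Derive one session key from two independent shared secrets.

    The result stays secure as long as EITHER input secret is
    unbroken: classical security today, PQC security against
    future quantum attacks.
    """
    # Concatenate both secrets, bind them to a context label, and
    # compress with HMAC-SHA256 (an HKDF-style extract step).
    ikm = classical_secret + pqc_secret
    return hmac.new(context, ikm, hashlib.sha256).digest()

# Illustrative only: real deployments would obtain these secrets
# from e.g. an ECDH exchange and a PQC KEM such as ML-KEM (Kyber).
classical = os.urandom(32)  # stand-in for a classical shared secret
pqc = os.urandom(32)        # stand-in for a PQC shared secret
key = hybrid_derive_key(classical, pqc)
```

The design choice to hash both secrets together, rather than use either one directly, is what makes the migration robust: if a PQC algorithm is later broken, the classical component still protects the key, and vice versa.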

The Dutch General Intelligence and Security Service (AIVD) has recently published the Post-Quantum Crypto Migration Handbook, established in collaboration with TNO and CWI and edited by various representatives from industry, including KPMG ([MBZK23]), which provides guidance on such a strategy. Building on the directions of the Handbook, we recommend a four-step approach (see Figure 1): (1) identify which security measures are currently implemented at your organization, (2) assess to what extent those are quantum ready, (3) perform a risk assessment by identifying whether today’s measures are sufficient to protect what is important for your organization (crown jewels, critical data, etc.) against quantum computing threats (incl. the “harvesting” of data by malicious actors today), and (4) plan and execute the migration of security measures to post-quantum-proof solutions.


Figure 1. A four-step approach to becoming post-quantum ready.
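The first three steps of this approach can be sketched in a few lines: take an inventory of systems and the cryptographic algorithms they use (step 1), then classify each entry by quantum readiness (steps 2 and 3). The algorithm lists and system names below are simplified assumptions for illustration; a real assessment would follow the guidance of the Handbook.

```python
# Step 1 input (hypothetical): systems and the algorithms they use.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}  # broken by Shor's algorithm
NEEDS_STRONGER_PARAMS = {"AES-128", "SHA-1", "3DES"}        # weakened or deprecated

def assess(inventory):
    """Steps 2-3: classify each measure by quantum readiness."""
    report = []
    for system, algorithm in inventory:
        if algorithm in QUANTUM_VULNERABLE:
            verdict = "migrate to PQC"
        elif algorithm in NEEDS_STRONGER_PARAMS:
            verdict = "upgrade key/hash size"
        else:
            verdict = "likely quantum ready"
        report.append((system, algorithm, verdict))
    return report

inventory = [("vpn-gateway", "RSA"), ("data-at-rest", "AES-256"),
             ("code-signing", "ECDSA"), ("backups", "AES-128")]
for row in assess(inventory):
    print(row)
```

Step 4 would then turn every “migrate to PQC” entry into a concrete migration plan, prioritized by the sensitivity and retention period of the data involved.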


While organizations should definitely think about the risks that quantum computing poses, it is important to note that accelerated technological advancement in quantum computing also offers a lot of opportunities for business and research. For example, quantum computing as a service is already gaining traction in the form of cloud services ([Paut21]). A traditional computer sends a command to a quantum computer hosted in the cloud, which performs the necessary computations at high speed and sends the processed data back in “classical” binary form. These advances make quantum computing power accessible to a broader range of organizations and can speed up processes, without the need to own a quantum computer.

These opportunities might be very valuable to organizations that need high computational capabilities. It is important, however, to take the envisioned commercial applicability with a grain of salt. Even though the promise is that quantum computers will help businesses solve computational problems that traditional computers cannot handle, most use cases are highly hypothetical and experimental ([McKi21]). On top of that, sizable margins of error still have to be applied to calculations made by quantum computers. Fluctuations in temperature, electromagnetic fields or mechanical vibrations can alter the processes within quantum computers and impact the reliability of their calculations ([Broo19]).

Conclusion: Provide long-term security to your organization and clients

Naturally, how you decide to adopt practical and viable countermeasures to the threats posed by quantum computing will depend on the resources, nature and needs of your organization. A risk-based, tailor-made approach to an organization’s specific use case seems the most viable option at this point. Large-scale deployment of solutions such as Quantum Key Distribution (QKD) might be cost effective in the short term, but will not prove effective if it does not offer long-term, high-level protection. Short-term risk reduction combined with long-term post-quantum security is the preferred two-tiered strategy to provide proper, long-lasting security to your organization and its clients.


[Arut19] Arute, F. et al. (2019). Quantum supremacy using a programmable superconducting processor. Nature, 2019(574), 505–510. Retrieved from:

[Baum22] Baumgärtner, L. et al. (2022). When—and how—to prepare for post-quantum cryptography. McKinsey Digital. Retrieved from:

[Broo19] Brooks, M. (2019). Beyond quantum supremacy: the hunt for useful quantum computers. Nature. Retrieved from:

[Feld19] Feldman, S. (2019). 20 Years of Quantum Computing Growth. Statista. Retrieved from:

[Goog23] Google (2023). Explore the possibilities of quantum. Retrieved from:

[Hone23] Honeywell (2023). Honeywell Quantum Solutions. Retrieved from:

[IBM23] IBM (2023). Highlights of the IBM Quantum Summit 2022. Retrieved from:

[IBMQ23] IBM Quantum (2023). The qubit. Retrieved from:

[Inte23] Intel (2023). Quantum Computing. Retrieved from:

[MBZK23] Ministerie van Binnenlandse Zaken en Koninkrijksrelaties (2023). Het PQC-migratie handboek. Retrieved from:

[McKi21] McKinsey & Company (2021). Quantum computing: An emerging ecosystem and industry use cases. Retrieved from:

[Micr23] Microsoft (2023). Azure Quantum. Retrieved from:

[MinD20] Ministerie van Defensie (2020). Strategische Kennis- en Innovatieagenda (SKIA) 2021-2025. Retrieved from:

[MIT19] MIT (2019). How a quantum computer could break 2048-bit RSA encryption in 8 hours. MIT Technology Review. Retrieved from:

[NIST22] NIST (2022). NIST Announces First Four Quantum-Resistant Cryptographic Algorithms. Retrieved from:

[Oliv19] Oliver, W. D. (2019). Quantum computing takes flight. Nature. Retrieved from:

[Paut21] Pautasso, L. et al. (2021). The current state of quantum computing: Between hype and revolution. McKinsey Digital. Retrieved from:

[Smit22] Smith-Goodson, P. et al. (2022). IBM Announces New 400+ Qubit Quantum Processor Plus Plans For A Quantum-Centric Supercomputer. Forbes. Retrieved from:

Security of Smart Grids: a neglected issue

For a green future, energy grids need smart solutions to match demand and response. These Smart Grids offer fantastic opportunities, but their cyber security risks need adequate attention. This is not always the case. Governments, grid operators, energy producers and everyone else involved have to work even harder to make this happen. Technical security is already receiving due attention, but that’s not enough. In this article we will look at the possible solutions to reduce cyber risks and prevent damage if an incident occurs.


The energy transition is particularly demanding on the Dutch power grid. Increasing electrification requires more and more capacity and (many) more connections for solar panels, wind turbines and all kinds of other local energy initiatives. A traditional approach to this challenge – for example, by continuing to expand the grid – is becoming unaffordable, which is why Smart Grids are a good idea. These are technological solutions that help the demand for energy to evolve along with supply, limiting the additional pressure on the grid. Smart Grids make it possible to green our society at the pace we envision.

One of the important prerequisites for the successful implementation of Smart Grids is that they are also properly secured. The good news in this regard is that grid operators are often (partly) still in the design phase of these Smart Grids. This means that there will be plenty of room to create the context necessary for Smart Grids to also function securely.

Unfortunately, there is also less good news: the opportunities to build a sound foundation for (cyber)secure Smart Grids are currently barely used. That foundation requires identifying risks and taking effective measures, accepting that things will sometimes go wrong and considering how to minimize the impact when they do. And it can only work if we know who the parties connected to Smart Grids are, and if the joint securing of Smart Grids can be managed.

The fact that this foundation is only laid to a limited extent is partly because cyber security is not solely the responsibility of grid operators. On the user side, they only have limited control over the degree of cyber security. Think of electric cars and their charging stations. Or industrial sites with their own energy generating capacity using solar panels and wind turbines. In addition, concerns about growing interconnectivity and the potential for malicious actors to take advantage of it loom large. What needs to be done to prevent these Smart Grids from turning out to be not so smart after all?

Solutions are available but, even though these solutions are increasingly on the agenda in the energy transition, it is hugely complex to digitize an existing (physical) network in an existing ecosystem. In that complexity, cyber risks need to be taken seriously and structurally addressed. This requires political will and direction combined with collaboration in the chain between private and public parties.

In this article, we discuss the solutions for identifying cyber risks, preventing them and mitigating the impact should a cyber incident occur.

Cyber risks are a real threat

Cyber risks are not imaginary, as has become all too evident. The Russian attacks on Ukrainian energy networks before the war in Ukraine broke out, the cyber attacks during the war in 2022 and 2023, and the ongoing ransomware aggression toward (large) companies are just a few examples. Hackers, whether or not employed by hostile states or criminal organizations, are increasingly making the digital world unsafe. When it comes to energy networks, this can have nasty consequences on several fronts. Five are described below.

  1. Blackouts. Parts of society are left without power, heat or other forms of energy. The effect can be disruptive.
  2. Physical consequential damage. Parts of the infrastructure can be destroyed or people purposefully disrupt energy supplies at vital locations, such as hospitals. This can result in deaths or injuries.
  3. Technical disruptions. More technology-oriented disruptions such as shifting grid frequencies that cause digital clocks to stop synchronizing. Older devices in particular break down as a result, which can lead to (high) costs, inefficiencies, delays and other inconveniences.
  4. Financial damage. Reduced energy availability, such as from blackouts and disruptions, can lead to higher energy prices. Households therefore have less money left over to spend on other things and may run into financial difficulties. This is not inconceivable, considering the current geopolitical situation.
  5. Privacy violations. Because Smart Grids are very fine-grained, with multiple connections into consumers’ homes, the information exchanged over those networks can be privacy sensitive. Information about whether residents are home or not, for example, can facilitate burglary.

Some characteristics of Smart Grids also make this type of network even more susceptible to cyber risks than other networks. The number of contact points is increasing, as is the number of connections and the extent of interaction between all these components. All in all, the so-called attack surface is becoming a lot larger. Identifying relevant threats, and certainly the vulnerabilities of one’s own organization and infrastructure, should therefore be part of the implementation of Smart Grids.

Managing, mitigating or eliminating those vulnerabilities should be a focus point as early as the design of Smart Grids. Preventive measures will be central, complemented by the right measures to detect threats and actions that can limit the impact from attacks.

Preventive measures to reduce the likelihood of a cyber incident

In terms of prevention, the most important solution is to provide secure standards for setting up Smart Grids. Such standards are currently lacking. Almost every party takes a different approach to cyber security, and this results in a series of vulnerabilities. We advocate cooperation among the major market players – encouraged by the government – to develop such a secure standard, which should then be mandatorily adopted by the entire industry.

Importantly, this involves an “ecosystem” of stakeholders in the energy chain. The standards should not only apply to the grid operators; the complexity in this playing field is too great and the security issue will not be solved by simply demanding more from the grid operators. In addition, it has become clear in recent years that the energy transition has required and will continue to require substantial financial investments from grid operators. It is time for the security burden to be borne by more shoulders.

In line with this, a second important solution to the cyber risks of Smart Grids is a greater guiding role of the government. The current European directive (the NIS Directive) for the security of critical infrastructure is implemented in the Netherlands with the Wbni (Wet beveiliging netwerk- en informatiesystemen, Security of Network and Information Systems Act). However, compliance is lacking and not adequately monitored. Many organizations view such laws and regulations from a compliance perspective: doing it because they have to and not from an intrinsic motivation. As long as the “stick” is insufficiently used, the need to do something is moderate. To date, supervision is still limited, partly because it is organized by sector and because supervisors are dealing with scarce capacity to conduct audits and supervision.

Given the importance of Smart Grids for our green future, this is an undesirable situation: tighter government direction is needed. The recently issued successor to the NIS Directive, NIS2, provides an additional lever for this.

The NIS Directive (NIS stands for: Network & Information Systems) is European legislation aimed at increasing the cyber security and resilience of critical systems in Europe. It is up to the European member states to translate the Directive into national legislation. In the Netherlands, the Wbni was established, with supervision assigned to the Dutch Authority for Digital Infrastructure (Rijksinspectie Digitale Infrastructuur, RDI), formerly known as the Agentschap Telecom. The NIS2 Directive is the successor to the NIS Directive and came into force in early 2023. The elaboration of NIS2 for the Netherlands will take effect later in 2023; the deadline for member states is October 2024. Organizations covered by these regulations must comply with the requirements by January 2025 at the latest. The energy sector in a broad sense has already been designated as a “critical infrastructure” under the NIS Directive. This will be further extended with NIS2 towards the underlying ecosystem.

Although the content and approach are largely similar to those of the current NIS Directive, it is to be expected that NIS2 will be less non-committal and more “prescriptive” on a technical level. The fines associated with increased enforcement, at least on paper, are not negligible: they can be as much as 10 million euros or 2 percent of global turnover. The fact that the energy sector is considered “critical” from a regulatory perspective should come as no surprise. After all, without energy supply, a lot of essential services in a country will come to an immediate halt.

It is expected that the supporting and supplying companies of the energy sector will also have to deal with the NIS2 requirements to a greater extent. It will take some time to establish enforcement in the Netherlands and to gather the necessary experience and capacity.

A third preventive measure against attacks on (future) Smart Grids is to certify the network devices consumers use. A sound and uniform certification system will assure consumers that there is no malware in the equipment and that it cannot be exploited from the outside.

Damage control after an incident

In addition to preventive measures, the application of Smart Grids will also be more secure if governments, grid operators, energy producers, businesses and consumers take action to mitigate the impact should unexpected incidents occur. This involves, first of all, building more buffer capacity, for example with industrial batteries of significant capacity (think of a supply capacity of 100 megawatts). While a main advantage of Smart Grids is precisely that they remove the need to massively expand network capacity, from a security point of view it is not advisable to remove all leeway in terms of fallback options. The presence of buffer capacity reduces the impact of a cyber incident by allowing the energy supply to continue for longer.

Broader cyber risk management – which may include building buffers – should additionally assure organizations and consumers that the consequences of incidents will be contained. Cyber risk management provides structured detailing of threats and vulnerabilities, leading to a picture of relevant risks. For each risk, it is necessary to consider the measures (preventive, detective, repressive or corrective) that are effectively needed to mitigate the risk.

Knowing what can happen is a prerequisite for this, and scenario development will be used for this purpose. Next, we will need to describe the objectives. How long should a blackout last before the damage is insurmountable, which systems should be given priority for recovery, which parties will be given priority? Based on those objectives, roadmaps will be written. Exercises, preferably large-scale and based on attack simulations, should then reveal whether risk management is properly implemented.
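The structured detailing of threats described above can be captured in a simple risk register that ranks each risk by likelihood times impact, so that measures are considered for the highest-scoring risks first. The risks and scores below are purely illustrative assumptions.

```python
# Hypothetical risk register for a Smart Grid operator; scores on a 1-5 scale.
risks = [
    {"risk": "blackout via compromised substation", "likelihood": 2, "impact": 5},
    {"risk": "privacy leak from smart meters",      "likelihood": 4, "impact": 3},
    {"risk": "grid-frequency manipulation",         "likelihood": 1, "impact": 4},
]

def prioritize(register):
    """Rank risks by likelihood x impact, highest first."""
    return sorted(register,
                  key=lambda r: r["likelihood"] * r["impact"],
                  reverse=True)

for r in prioritize(risks):
    print(f'{r["risk"]}: {r["likelihood"] * r["impact"]}')
```

In practice each entry would also record the preventive, detective, repressive or corrective measures chosen for it, and the scores would be revisited after every exercise or incident.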

Gaining an overview of the playing field

One of the biggest challenges in this regard is national coordination. It is already difficult to get a clear overview of all points connected to the Smart Grid: high-volume consumers have a notification requirement, but private initiatives can connect without one. The combination of grid operators, high-volume consumers and private individuals with modern Internet of Things and Smart Grid functionality on the same grid makes the security issue complex. And for each point (organizations, companies, charging points, solar parks, et cetera), it must be clear which security measures have been taken. An additional battery at neighborhood level or additional energy production at a hospital can all affect the risk management to be put in place at national level.

The problem is that the responsibility for those local and regional measures always lies at the decentralized level. In our view, grid operators should play a greater role in this. In the digitalization of grid operators’ own infrastructure, for example where substations are concerned, security has been receiving the necessary attention in recent years. The complexity and lack of clarity in governance are greater in the area of private Smart Grids (for example, in a business park) and within developments in “smart cities,” where citizens and government must work together, than in Smart Grids solely utilized by the grid operators.

Cooperation between all parties is indispensable

Similarly, the cyber risks of Smart Grids are another reason to prevent the fragmentation of the Dutch energy supply. This is not an impossible task, but it is imperative that all stakeholders quickly choose the path of cooperation. This has to be a far-reaching and intensive collaboration, with security high on the agenda. In IT, projects all too often revolve solely around functionality; when it comes to the construction and implementation of Smart Grids, the Netherlands cannot afford that. Smart Grids are potentially a wonderful solution to one of the biggest challenges of the energy transition. However, it will prove impossible to deliver on that promise if crucial preconditions such as compliance, risk and cyber security are insufficiently addressed.

That is why now is the time, in the plan and design phase of Smart Grids, to incorporate the cyber risks of these smart grids integrally into those plans and designs. As we described above, standardization, certification and a greater guiding role for the government should make this possible, along with insight into who the connected parties to the Smart Grids are, and with structured cyber risk management. The grid operators are taking the necessary steps in this regard. The challenge now is to also take these steps in Smart Grid developments outside the grid operator’s infrastructure.

How smart is the use of smart devices in the office?

Firewalls, end-to-end encryption, private cloud, no smartphones during meetings. This is a small selection of the measures that organizations take to secure data traffic on their digital networks and to counter espionage. Many of these organizations also deploy seemingly harmless technologies that make life easier, from smart TVs and smart lighting to cleaning robots. However, these internet-connected devices are full of sensors that collect data.

In this article, we use the case of industrial cleaning robots to show what the possible security risks are of using smart devices, and what organizations can do to safeguard their security. In doing so, we will dive deeper into the possible security risks of free-market forces in high-tech sectors with strong competition from non-European companies.


Smart devices are full of sensors that collect data. This data is often stored and/or processed in a cloud environment outside the EU and is then used to optimize the tasks of those devices. What data do these devices collect, where is this data stored and who owns this data?

In Europe, these questions are often only asked when Chinese products or companies enter the market. It is usually assumed that there are only innocent intentions behind the collection of data by Western companies, such as improving quality, as every set of terms and conditions tells us. That is a false assumption, as was demonstrated by the information that whistleblowers such as Edward Snowden and Julian Assange (WikiLeaks) made public. However, this has hardly affected the level of trust that European consumers have in Western brands. Consumers still use equipment that uses and abuses personal data. Offices are full of smart TVs, smart lighting and all kinds of other smart devices that record audio or video and possibly store it in the cloud. Data leaves the smart equipment, as well as the organizational location. It is therefore time to discuss the security risks associated with using these types of devices.

The dominant brands of smart devices are often non-European (in particular American and Asian), which limits European control over how data is collected and used. However, the EU has a fundamentally different attitude to data ownership than China and the US, for example. The EU prioritizes the protection of individual human rights over informing the central government or the economic profit of a small elite. How can the EU protect the personal data of its citizens when these citizens use Chinese and American technology?

The European economic security policy is currently focused primarily on China, where European high-tech companies with strategic technologies must be protected against takeovers or acquisitions of majority interests particularly by Chinese state-owned companies, using an investment screening mechanism. At the heart of this philosophy is the assumption that Europe has a technological lead over China and that this lead should be defended against attempts by the Chinese government to copy these technologies. The fact is, however, that European companies are no longer automatically in the top tier, so we will also have to learn how to deal with China as a supplier of high-quality technology, and in particular, how we can manage and limit the associated security risks.

The key recommendations from this article are:

  1. It is in the interest of Dutch security to support European high-tech companies with the biggest potential so that they can continue to compete with non-European brands. In this way, customers will still have a European option and more control over their data.
  2. Labelling and “benchmarking” the level of digital security of smart devices enables consumers and organizations to identify devices with higher cybersecurity prerequisites and make informed decisions.
  3. Organizations are advised to develop policies for the use of smart devices within their organization, especially when it comes to locations where valuable and vulnerable data is used.

The rise of the industrial cleaning robot in Europe

COVID-19 has accelerated our need for robots (and specifically their cleaning function) ([Lerm20]). On an increasing number of factory floors and in offices, robots are taking over the physically intensive tasks of cleaners so that these cleaners can focus on the more specialized cleaning tasks. All major cleaning machine suppliers now offer a robotic vehicle, or are at least developing one. Nine suppliers of industrial cleaning robots are currently active in the Netherlands. Table 1 shows that most cleaning robots on the Dutch market are non-European brands.


Table 1. Supply of industrial cleaning robots on the Dutch market.
Source: Compiled from interviews with suppliers of industrial cleaning devices in the Netherlands [Lugt21].

It is noticeable that half of the cleaning robots on the Dutch market use BrainOS software from the American company Brain Corp. Brain Corp develops software for robots and manages the data that the robots collect. The vice-president of Brain Corp recently announced that his company would like to do more with this data in the future:

“Multifunction robots that can clean and scan at the same time will come eventually as an IoT source of information that’s considered valuable. […] Right now, the industry records everything but doesn’t do anything with the data. We’re very judicious about data.” – Phil Duffy, vice-president of innovation at Brain Corp ([Dema20]).

The fact that an American company is about to collect, store and analyze data from cleaning robots in the Netherlands on such a large scale should make us think about where we would like to use those cleaning robots. If a company has a “no-smartphones policy” during meetings, it would be contrary to that policy if that same company had a cleaning robot driving through the office and/or factory halls while data is collected and stored by a non-European company. What data does an autonomous cleaning robot collect and what possible security risks are associated with the use of such a robot?

Cleaning robots and security

Historically, technology has always had two faces. On the one hand, there is the perspective of progress and innovation. On the other hand, new technologies can introduce vulnerabilities that disrupt the stable, comfortable “normal”.

In recent years, there has been an increasing focus on the cybersecurity aspects of industrial and consumer robots, but cleaning robots have remained below the radar. However, these machines also collect data while performing their work at airports, universities, companies and government buildings. To function optimally, they are equipped with cameras and/or other sensors that collect data.

We generally distinguish two basic types of robotic sensors:

  • Proprioceptive sensors, which collect data about the robot itself: battery status, maintenance status, etc.
  • Exteroceptive sensors, which collect data about the workspace of robots: lasers, distance sensors, cameras, etc.

In this article, we focus on the exteroceptive sensors.


Figure 1. Examples of exteroceptive sensors.
Source: Fybots and Gaussian Robotics sales brochures.

With the help of these sensors, the robot can create a floor plan, locate itself, avoid objects and stairwells, recognize glass walls and communicate with elevators.1 Depending on the type of camera in the robot, it could make detailed recordings of its surroundings and of the people walking around, which could include sensitive, personal and/or secret information. Some manufacturers therefore consciously choose not to place cameras on the robot and instead only use lasers (LiDAR sensors) and ultrasonic sensors. Brain Corp combines 2D LiDAR with cameras.2 It claims that its cameras blur faces and text during recording and that, as a result, those images are only stored and transmitted to Brain Corp in blurred form (interview with a representative of Brain Corp). Brain Corp then converts this data into relevant data for its customers, which they can access through a portal. For example, the customer will receive a photo of whatever is in the way of the robot when it gets stuck; if this is a person, his or her face will be blurred. The customer trusts the manufacturer not to store potentially sensitive information.

As Gaussian formulates it on its website, “The best hardware is only as good as its brains” ([Gaus22]). The data that the cleaning robot collects only becomes meaningful when it is converted into information by drawing relational connections ([Rowl07]). To convert data into information, an object must be detected, identified and/or classified. The algorithms that run these processes are usually demanding in terms of computing power. For reasons of computing power, battery conservation or even cost reduction, they are therefore often run not in the robot but in the cloud. In the case of cleaning robots, the robots communicate generic “cleaning data” (such as images and floor plans) with the “parent company”/cloud.

Data leaves the robot, as well as the cleaning location, to be converted into information. Moving and storing this information can entail security risks if the data contains privacy-sensitive or classified information. The manufacturers use different technologies to transmit the data collected by the robots, such as mobile connections (3G/4G/5G) and WiFi point-to-point.

Connectivity subjects the cleaning robots (and similar IoT devices) to Beckstrom’s Law of Cybersecurity:

  1. Anything connected to a network can be hacked.
  2. Everything is connected to a network.
  3. Because of this, everything can (potentially) be hacked.

A cleaning robot (with cameras and WiFi) that moves freely through the building is therefore potentially an ideal target for hackers. There are roughly three types of threat actors that we could distinguish ([Dams19]):

  • Script kiddies: Most hackers are referred to as “script kiddies”: inexperienced, usually young individuals (or journalists looking for a juicy story) who execute attacks (scripts) they found online without deep technical knowledge. The impact of their cyberattacks is, however, not to be underestimated. The success of the Mirai DDoS botnet attack shows the damage that this group can cause.
  • Cybercriminal gangs: These gangs are mostly after money and have made a lucrative business model out of their activities.
  • Nation-state actors: The revelations by whistleblower Edward Snowden showed that nation-state actors also hack digital devices to spy3 (even on allies).

IoT security and risks

The rise of the Internet of Things (IoT) leads to discussions about the security of devices connected to the internet. When an organization’s devices, from production equipment to the air conditioning system and printing machine, send data over the internet, this creates new access points (and risks) for the corporate network ([Hods19]). However, many of these devices are designed and developed with limited security controls. In product development, higher security requirements often go hand in hand with higher costs and power consumption. This vulnerability is exploited by hackers who develop special malware for IoT equipment. For example, the Mirai botnet took down large and popular websites through massive Distributed Denial-of-Service (DDoS) attacks using hundreds of thousands of compromised IoT devices ([Burs17]). The compromised IoT devices ranged from printers and (security) cameras to baby monitors.

It is therefore important to think carefully about who (and which devices) can have access to internal facilities in order to limit and mitigate the security risks. This risk applies to European as well as non-European robots. However, when a robot runs on non-European software, assessing the security of the data becomes much more complex.

In this digital age, data is the new gold, but it is not always treated that way. After all, there has been no commotion about the fact that most cleaning robots on the Dutch market run on American software. Who owns the data that these machines collect (and does it include sensitive personal data)? The raw data that industrial cleaning robots collect is often sent directly to the manufacturers. Does this automatically make them the owner of the data? In light of the General Data Protection Regulation (GDPR), how is this potentially personal data that the cleaning robots collect treated? The findings from our survey among manufacturers and suppliers of industrial cleaning robots in the Netherlands show that people hardly ever associate privacy (and GDPR requirements) with robots. Customers sometimes ask for it, but suppliers do not yet have adequate answers.

European brands will soon be joined by a strong competitor from China: Gaussian

There are now at least two European initiatives that run on their own software: Adlatus and Cleanfix. How advanced are these European-produced robots compared to their American and Asian competitors? How advanced is the new Chinese robot that is about to compete for European market share? Will we still have a European competitive alternative in the future?

Table 2 shows how these European providers compare to the other providers of cleaning robots on the Dutch market.


Table 2. Specifications and functionalities of cleaning robots.
Source: Interviews with suppliers of the various cleaning robots in the Netherlands [Lugt21]. [Click on the image for a larger image]

Gaussian has announced that it will market four more models of autonomous cleaning robots in 2021. With six different models, Gaussian will soon have the largest range of industrial cleaning robots (see Table 1). Moreover, Gaussian has a competitive advantage over almost all of its American and European competitors in the technological field4. For example, Gaussian and Fybots robots are the only industrial cleaning robots on the Dutch market that can start at any point in the room (without reprogramming a new starting point) and communicate with elevators. In addition, like only three other robots, Gaussian robots can re-route automatically if an obstacle blocks their passage.

As we mentioned above, half of the industrial cleaning robots on the Dutch market use the Brain Corp software BrainOS. BrainOS software is based on a “teach-and-repeat” technology. This technology is less advanced than the fill-in function or the dynamic path planner technology (see textbox). It is intended that the robots with BrainOS software will also start using the fill-in function during the first half of 2021. The European brands Adlatus and Cleanfix already make use of this fill-in technology. The Gaussian robots, on the other hand, make their own map and determine their route automatically with the aid of Simultaneous Localisation And Mapping (SLAM) ([USP10]).

The teach-and-repeat technology means that you will need to guide the robot throughout the whole area that needs to be cleaned, after which the robot will follow this pre-programmed route. The fill-in function means that an operator needs to guide the robot along the outer lines of the area that needs to be cleaned, after which the robot will clean the area within these lines automatically. In this case, the robot will have to start at a pre-programmed starting point. Both technologies use SLAM, but to a lesser extent than with the dynamic path planner technology. SLAM is based on the multidimensional normal distribution (a derivative of the Gaussian distribution discovered by Carl Friedrich Gauss), hence the name Gaussian. The other brands also use SLAM, albeit to a lesser extent than Gaussian does, and that is the reason why these robots cannot determine their routes as independently.
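The “multidimensional normal distribution” mentioned above can be made concrete. In Gaussian-based scan matching (for example the Normal Distributions Transform variant of SLAM, used here as an illustrative assumption, since the vendors do not disclose which SLAM variant they implement), the points a robot’s sensors observe in a map cell are summarized by a mean vector and covariance matrix, and the likelihood of observing a point is modeled with the multivariate normal density:

```latex
p(\mathbf{x}) = \frac{1}{(2\pi)^{k/2}\,|\boldsymbol{\Sigma}|^{1/2}}
\exp\!\left( -\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x}-\boldsymbol{\mu}) \right)
```

Here \(k\) is the dimension of the measurement (e.g. \(k=2\) for a 2D LiDAR scan), \(\boldsymbol{\mu}\) the mean of the observed points and \(\boldsymbol{\Sigma}\) their covariance. The robot then localizes itself by searching for the pose that maximizes this likelihood over all observed points.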


Figure 2. Example of mobile robot position technology: visualization of SLAM.
Source: [AGVb20] [Click on the image for a larger image]

Adlatus and Cleanfix are more advanced than the robots using BrainOS software, but they are no match for the Gaussian robots5. Adlatus Robotics was founded in 2015 as a start-up in Ulm, Germany ([Ruts18]). Still a relatively small company with about 40 employees, Adlatus currently has one model of industrial cleaning robot on the market that won the PURUS Innovation Award in 2017 ([Adla17]) and 2019 ([Adla19]). Cleanfix is a Swiss company with approximately 180 employees that has been developing and manufacturing cleaning machines since 1977.

European models can expect fierce competition from China’s Gaussian Robotics. Founded in 2013, Gaussian produces high-quality industrial cleaning robots. The founders are Cheng Haotian (University of Cambridge electrical engineering alumnus) and Qin Baoxing (the founder of a Singaporean autonomous driving company). Gaussian Robotics employs approximately 450 people, of whom approximately 250 are engineers. Gaussian Robotics is the market leader in intelligent cleaning robots in China with a market share of more than 90% ([Zhao20]). Gaussian exports about 40% of its products and is also the market leader in the rest of Asia. Gaussian (or Gao Xian in Chinese, which loosely translates as “height” or “high God”) has recently also started focusing on the European market.

In September 2020, Gaussian Robotics raised $22 million in investment, with the largest investors being the Chinese Broad Vision Funds and China Capital Management ([CMAI20]). The interest of these two major Chinese funds shows confidence in the company and its growth potential. Last October the Dutchman Peter Kwestro (former Global Sales Leader of Adlatus) was appointed as Global Business Development Director of Gaussian ([Scho20]).

In short, there are currently many developments going on in this sector. The developments indicate that European manufacturers of industrial cleaning robots will face stiff competition, at least on paper, from China’s Gaussian.


Will Chinese cleaning robots come to us to absorb our knowledge and secret information? We know one thing for sure: they are not coming for our knowledge about cleaning robots.

If customers and organizations choose advanced non-European cleaning robots, the market share of European organizations will decline and with it the appetite to seriously invest in R&D. That is particularly true if Gaussian also starts a price war. This could lead to a situation in which Gaussian’s competitors throw in the towel one by one and buyers of industrial cleaning robots might be left with hardly any choice6. These are market forces and will not necessarily be a problem for the use of industrial cleaning robots in generic locations such as shopping centers and distribution centers. However, there must be a (high-quality) European alternative for locations where (secret) valuable and vulnerable data is used and stored, such as universities, high-tech organizations and government buildings (including defense buildings). In these locations in particular, careful consideration must be given to the use of smart devices such as cleaning robots that store floor plans and possibly also record, store and/or process detailed images of people and texts in a (non-European) cloud.

Therefore, action must be taken now, while European alternatives still exist. It is in the interest of European security to support the European cleaning robot manufacturers with the greatest potential, so that they can continue to compete with non-European brands. We need to ensure that European manufacturers of cleaning robots (which have far fewer robotics engineers than Gaussian) can keep up with non-European brands, so that we will always have an attractive and competitive European alternative.

Organizations also need to be smarter about the use of smart equipment, including cleaning robots. Smart devices offer many benefits and can make work significantly easier, but they must be used sensibly, especially when it comes to locations where valuable and vulnerable data is used. Awareness of the security risks associated with the use of smart devices is fundamental to increasing digital resilience.

A possible solution is labelling and “benchmarking” the level of digital security of smart devices. For example, the Cyber Security Agency of Singapore (CSA) has launched the Cybersecurity Labeling Scheme (CLS) for smart consumer devices as part of its efforts to improve the security of smart devices ([CSAS22])7. This enables consumers and organizations to identify devices with higher cybersecurity prerequisites and make informed decisions. Manufacturers can differentiate themselves from their competitors and be encouraged to develop safer products. Organizations are also advised to develop policies for the use of smart devices within their organization. Since it is a standard procedure for organizations to perform background research and screening of (support) staff, it is only logical to maintain the same policy for smart devices used within the organization.

The report “How smart is the use of smart devices in the office?” ([Lugt21]) was published in the Clingendael Spectator magazine. The Clingendael Spectator (since 1947) is the magazine of Dutch think tank Clingendael, freely accessible for all with an interest in current developments concerning world politics. The report was written in collaboration with Dr. Sanne van der Lugt, who at the time of writing the report was a China researcher, intercultural trainer and senior policy advisor on foreign policy in the Dutch parliament. Her current research focus is on AI developments in China.


  1. Other examples of devices with exteroceptive sensors are cameras in smart TVs and microphones in their remote controls. In this article, we focus on sensors in industrial cleaning robots to go deeper into the possible impact of the use of smart devices in the office and other places in the organization.
  2. In the literature this is referred to as a cost-effective alternative to 3D LiDAR. A 3D LiDAR can easily cost between $4,000 and $15,000, while a 2D LiDAR only costs $800 to $2,500.
  3. Secret services sometimes intercept devices while they are on their way to their customer and then add eavesdropping equipment or disable the product. For example, the NSA has routinely intercepted shiploads of Cisco routers to install backdoors.
  4. See Table 2: Specifications and functionalities of cleaning robots.
  5. We received the information about the Fybots Sweep XL late in our research and were not able to include it in our analysis. For reasons of completeness, we have included the information in the table to show that there is another promising European brand.
  6. In a similar way, Chinese telecom vendors Huawei and ZTE have pushed Western vendors out of the market with a price war and that is the reason why there are globally only four telecom vendors left: two Chinese (Huawei and ZTE) and two European (Ericsson and Nokia Alcatel-Lucent). Ericsson and Nokia have barely survived the price war with Huawei and are currently benefiting from the trade war between the US and China.
  7. Various countries (e.g., the USA, Finland, and Germany) are designing and adopting smart-device cybersecurity labelling schemes that improve safety for consumers. The labels are expected to feature ratings that reflect the quantity of data collected, how easily the device can be patched or upgraded to mitigate vulnerabilities, data encryption, and interoperability.


[Adla17] Adlatus (2017). CMS Purus Innovation Award 2017 – Pressebericht CMS Messe Berlin. Retrieved from:

[Adla19] Adlatus (2019). Adlatus Wins The Purus Innovation Award. Retrieved from:

[AGVb20] AGVblog (2020). Mobile robot positioning technology – laser SLAM. Retrieved from:

[Burs17] Bursztein, E. (2017). Inside the infamous Mirai IoT Botnet: A Retrospective Analysis. Retrieved from:

[CMAI20] China Money AI (2020, September 2). Chinese Cleaning Robot Developer Gaussian Robotics Raises $22M in Series B Round. Retrieved from:

[CSAS22] CSA Singapore (2022). Cybersecurity Labelling Scheme (CLS). Retrieved from:

[Dams19] Dams, T. & Verbij, R. (2019). Gaming the new security nexus. Netherlands: KPMG, Clingendael.

[Dema20] Demaitre, E. (2020, June 8). Cleaning robots, ease of use, and data key to reopening retail, say Brain Corp execs. Retrieved from The Robot Report:

[Gaus22] Gaussian Robotics (2022). ECOBOT Scrubber 75. Retrieved from

[Hods18] Hodson, C. (2018, August 7). De security-risico’s van IoT. Retrieved from Computable:

[Lerm20] Lerman, R. (2020, September 8). Robot cleaners are coming, this time to wipe up your coronavirus germs. Retrieved from The Washington Post:

[Lugt21] Van der Lugt, S. & Bel, M. (2021, April 12). How smart is the use of smart devices in the office? Retrieved from:

[Rowl07] Rowley, J. (2007). The wisdom hierarchy: representations of the DIKW hierarchy. Journal of Information Science, 33(2).

[Ruts18] Rutsch, S. (2018, April 11). Adlatus Robotics wins Germany’s first cleaning robot contest. Retrieved from

[Scho20] Schoonmaak Journaal (2020, October 2). Aziatische Gaussian Robotics zet met next level robots stevige voet aan Europese wal. Retrieved from:

[USP10] United States Patent (2010, November 9). Retrieved from:

[Zhao20] Zhao, L. (2020, September 3). Chinese Cleaning Robot Maker Gaussian Robotics Raises $22M In Series B+ Round. Retrieved from Pandaily:

How disinformation might hurt your business

Disinformation is a cybersecurity threat and should therefore be treated as such. Fortunately, a lot of cybersecurity measures can be adapted to protect your organization from disinformation. This means that we do not need to completely overthrow our already existing cybersecurity strategies. They just need a little tweaking.


Malware, ransomware, Distributed Denial of Service (DDoS), and social engineering are all familiar threats that we associate with cybersecurity. To counter these threats, the cybersecurity industry has come up with many different risk mitigation strategies, frameworks, and tools to protect organizations and businesses.

The bad news is that there is a new cyber threat in town that definitely needs our attention: disinformation. Information being used to deceive the public is not a new concept. However, we can all agree that in this digital age with powerful social media channels, the issue has become exponentially more present in our daily lives.

The good news is that there is no need to completely overthrow all our existing strategies, frameworks, and tools to protect our organization from disinformation attacks. In fact, most of the tools and mitigation measures we use in our organizations to counter the impact of other cyberattacks can also be used to counter the impact of disinformation. They just need a little tweaking.

In this article, we will first define disinformation, then we will argue why disinformation should be considered a cyberthreat. Finally, a methodology of preventative measures is proposed to assist your organization in mitigating the effects of disinformation.

Terminology of disinformation

Before discussing the merits of tweaking your existing cybersecurity strategies and framework regarding this new threat, we first need to establish how to distinguish between the terms misinformation, disinformation, and malinformation. These three concepts together are abbreviated as MDM (Misinformation, Disinformation, Malinformation). In our definition of disinformation, the word “intentional” is key. Whereas misinformation is false information that is spread unintentionally, disinformation is defined as the intentional spreading of misleading or false information with the specific intent to harm or manipulate individuals, groups, or organizations. Finally, we understand malinformation as information that stems from the truth, but is deliberately exaggerated in such a way that it is misleading, and causes harm to a person or an organization. Figure 1 shows the correlation between the different terms.


Figure 1. Mapping of different information terminologies. [Click on the image for a larger image]

Disinformation is therefore defined as the intentional spreading of misleading or false information with the specific intent to harm or manipulate individuals, groups, or organizations. However, the process of labeling whether or not a certain piece of information is false or mis-used (misinformation) is mostly irrelevant from a cybersecurity perspective. More relevant in this respect is whether your organization is able to withstand any kind of (targeted) information with the (in)direct aim to hurt your business. In doing so, we are keeping away from the ethical and political debate about “freedom of speech versus intentionally spreading false information”, and instead focus on how to protect our businesses against this new cyber threat.

Disinformation as a cyber threat

A key distinction between a disinformation attack and a “traditional” cyberattack, is the target at which the attack is directed. In most cyberattack methods, attackers use humans to subvert technical systems. In disinformation attacks, technologies are used to subvert human thinking. In fact, it can also be seen as social engineering on a large and automated scale. Some researchers have invented a new word for this: cognitive hacking ([Jiam21]).

Cognitive hacking is a way of using computer or information systems (social media) to change the perceptions and corresponding behaviors of human users ([Bone18]). The attack is aimed at our subconscious. In that sense, the goal is the same as with social engineering attacks. There are two major differences, however. Firstly, these cognitive attacks are mostly long-term investments: they cause damage that cannot easily be undone, since they aim to manipulate the human psyche. Secondly, the tools that are being used are, in part, different.

Threat actors usually deploy three different disinformation tools:

  1. social media posts (in some cases pushed by troll farms);
  2. fake news sites;
  3. deepfakes.

Cognitive hacking is the overarching attack framework in which spreading of disinformation is used as an attack vector.

Bruce Schneier, a fellow and lecturer at Harvard’s Kennedy School, wrote two decades ago: “Only amateurs attack machines, professionals target people” ([Schn13]). Phishing and spear phishing are much more effective when a form of cognitive hacking is deployed. Cognitive hacking and other cyberattack methods become more powerful when deployed together. For example, an attacker deploys malware in your IT systems to exfiltrate data and then uses that data to instigate an information campaign against your company with the purpose of extortion or inflicting damage on your brand’s reputation and stock value. This exfiltration of data for the purpose of feeding a disinformation campaign is especially dangerous for organizations that store substantial amounts of sensitive data (e.g. governmental organizations, health organizations, social media organizations).

One example of using traditional hacking methods to “feed” a disinformation campaign is the cyberattack on the European Medicines Agency (EMA) in December 2020. The systems of the agency were hacked and confidential internal correspondence on the evaluation of the BioNTech/Pfizer vaccine was unlawfully obtained. Later, these documents were published online. EMA noted that “Some of the correspondence has been manipulated by the perpetrators prior to publication in a way which could undermine trust in vaccines” ([EMA21]). This shows that traditional cyberattack methods, in this case hacking, can be used to fuel disinformation campaigns with the aim of eroding trust in public health authorities, such as EMA.

The attack chain might also go the other way around: instead of using traditional cyberattacks to exfiltrate data for the purpose of creating disinformation (i.e. spreading disinformation is the end goal), an attacker might use disinformation to enable a cybersecurity attack (disinformation is the means to achieve the goal). For example, an attacker could use false information or false identities (deepfakes) to pretend to be the CEO or CFO, requesting their trusted employees to pay certain invoices that look plausible but actually directly benefit the attacker. CEO/CFO fraud is widely known by now, and various start-up technologies have been developed that can detect deepfakes. That is a good start, but it is also a form of symptom control that only works if organizations approach the risk of disinformation from an overarching information risk management strategy.

An example where disinformation (deepfakes) was used in CEO/CFO fraud involves a UK energy firm ([Stup19]). The UK CEO received a call and thought he was speaking to his boss, the chief executive of the parent company. The impersonator asked the UK CEO to urgently wire €220,000 to a Hungarian supplier. Afterwards, the company found out that hackers had used a very realistic deepfake voice recording of the chief executive (including his slight German accent and speech cadence) to spoof the call. This form of cognitive hacking is not reserved for the most technically skilled criminals; many tools are available online that can convert speech snippets into a workable and believable voice impersonation.

How to protect yourself and your organization against cognitive hacking

Although new laws are taking shape ([EC22]), these will not stop perpetrators from using subversive tools for their personal and/or financial gain. That is partly because, as with most cyberattack methods, accountability remains a major issue due to digital anonymity. This anonymity enables users to post and spread disinformation anonymously, and it is just as easy to create fake artificial accounts that redirect the disinformation to a much larger audience.

Here is where the previously mentioned tweaking comes in. In a practical sense, there are some preventive steps that you can take, in the following order:

  1. Your organization needs an expansion of your existing information security strategy to include the risk arising from disinformation.
  2. Your organization needs to assess the risks and think of scenarios that involve disinformation attacks. These thought exercises can help you understand how your company might be harmed and how prepared your people and processes are to counter these incidents.
  3. Your organization needs to determine which mitigating measures are suitable to counter such risk scenarios, taking into account the nature of your organization’s risk profile. You can think of:
    1. creating an incident response plan that sets out how to respond to an MDM attack;
    2. organizing crisis simulations that simulate a large-scale MDM attack;
    3. equipping the PR team and/or your SOC with MDM detection and authentication tools;
    4. training your staff on how to recognize MDM and making them aware of the different ways MDM is deployed.
  4. Measure the maturity of your organization to withstand MDM incidents.


All of the steps above probably sound very familiar to cybersecurity professionals. That is exactly why we believe that adapting our existing cyber risk management procedures to this emerging threat is achievable for every organization. It all starts with recognizing disinformation as a cyberattack, and not just an elusive political attack deployed by nation-states targeted only at political systems. Cognitive hacking has become a mainstream hacking method and, as such, can be addressed in much the same way as other hacking methods. Instead of trying to reinvent the wheel by implementing specific “disinformation measures”, use the structures we already have in place to adapt and protect.


[Bone18] Bone, J. (2018, March 5). Cognitive Hack: Trust, Deception and Blind Spots. Corporate Compliance Insights. Retrieved from:

[EC22] European Commission (2022, November 16). Digital Services Act: EU’s landmark rules for online platforms enter into force. Retrieved from:

[EMA21] European Medicines Agency (2021, January 1). Cyberattack on EMA – update 5. Retrieved from:

[Jiam21] Jiaman, A. (2021, January 30). Disinformation Is a Cybersecurity Threat. Medium. Retrieved from:

[Schn13] Schneier, B. (2013, March 1). Phishing Has Gotten Very Good. Schneier On Security. Retrieved from:

[Stup19] Stupp, C. (2019, August 30). Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. The Wall Street Journal. Retrieved from:

Privacy challenges of expanding your business to and within the EU

Scale-ups and start-ups face all kinds of different challenges while rapidly changing and expanding, such as dealing with a growing workforce, ad-hoc processes, cash flow control, growing customer needs, and, let us not forget, compliance. This article focuses on how to approach setting up a privacy compliance program with limited financial and human resources, and attempts to explore practical, efficient and scalable solutions to these challenges.

Disclaimer: this article cannot and shall not be regarded as legal advice. The content therein shall be regarded purely as the personal and professional opinion of the contributors.


After more than four years of GDPR enforcement, complying with the GDPR and the related privacy standards still remains a challenge. That holds true for companies established within the EU, and maybe even more so for those that want to expand to the EU. Developing a business in the current social and economic landscape is certainly not an easy journey. Ensuring a privacy-compliant business only adds to the task. At the same time, the world is becoming more digitalized, with a considerable increase in the development and usage of digital technologies that have data at their core. This renders privacy more important than ever. While bigger companies are more or less equipped with the necessary tools and FTE capacity and are already well ahead in the process of GDPR compliance, for start-ups and scale-ups the GDPR compliance challenge is only beginning.

This article focuses on exploring the GDPR compliance adventure from a start-up/scale-up perspective. Why? Because scale-ups and start-ups encounter all kinds of different challenges while rapidly changing and expanding, such as dealing with growing workforce, ad-hoc processes, cash flow control, growing customer needs, and let us not forget compliance. Whether your company is a scale-up or start-up in the EU with the ambition to expand within the EU or a scale-up or start-up for example in the US with the ambition to expand to the EU, this article emphasizes GDPR compliance challenges for both. Furthermore, it attempts to explore practical, efficient and scalable solutions on how to approach the listed challenges.

Scale-ups and start-ups have to be very creative when it comes to efficiently and effectively implementing a compliance program: they are often driven or distracted by the everyday humdrum and lack resources, knowledge and cash flow. A comprehensive privacy compliance program covers every data-privacy-related requirement and provides your organization with a framework for processing, collecting, storing, sharing, retaining and deleting personal data. In this context, executing the entire compliance program and being fully compliant seems like quite the ambition. In reality, start-ups and scale-ups are very dynamic and fast-growing; standardized processes are usually not in place and work happens in a rather ad-hoc manner. Furthermore, privacy projects are rarely at the top of the list of priorities when it comes to budgeting. At the same time, customers are becoming increasingly aware of their privacy and rights. Therefore, providing a great product or service to customers also means ensuring adequate levels of personal data protection. Besides, a GDPR fine can be devastating for already financially struggling start-ups and even scale-ups. So not complying with privacy laws is simply not an option.

The purpose of this practical article is to explore the main challenges for scale-ups/start-ups to achieve and maintain GDPR compliance so that the company can grow and expand rapidly. The identified challenges will be explored from the perspective of an EU-established company as well as a US-established company, both with the aim of expanding within the EU. This article is structured as follows:

  • Everything starts with awareness
  • Make sure you set your priorities
  • Is your management on board?
  • Key considerations for business expansion to different countries within the EU

Everything starts with privacy awareness

So, where to start? A scalable and solid compliance program starts with awareness. The data privacy landscape is very complex, and it continues to evolve, with a lot of new legislative proposals from the EU just around the corner ([EC]). In this context, it is critical that your employees know how to handle personal data in a privacy-responsible and compliant manner. Privacy is a team sport; it cannot be achieved by your Privacy Officer or team alone. Aside from the fact that training your staff is an obligation imposed by the GDPR, the overall success of your company’s privacy compliance is directly related to how much your employees care about and understand privacy, why it is so important, and how to address regulatory requirements internally. You cannot engage your employees and ask them to follow and implement legally required privacy standards if they do not know how to identify personal data and how the GDPR affects this data. Therefore, for a strong foundation of your future compliance framework, focusing on awareness is key. Although we now focus on training and awareness specifically related to privacy, it does not have to be addressed in a vacuum. Privacy trainings can be combined with other topics, such as security, integrity, or compliance.

Of course, awareness should start with the GDPR basics (‘Privacy 101’). Although the GDPR has been around for a while now, do not assume that everyone knows what it entails. Therefore, first focus on creating a generic privacy training for all employees, or leverage existing training material. Focus on the topics that are relevant for all employees, regardless of whether they handle personal data: think of data breaches and how to recognize and report them (internally), what personal data and its processing entail, and why it is important to comply with the GDPR. And don’t forget to address the key GDPR privacy principles, such as transparency and data minimization. The purpose of the training is to create a shift in mindset: to think critically about why personal data is needed before processing it. The GDPR is already perceived as a heavy topic, so try to make it fun and relevant.

Basics are good, but that is not enough. Therefore, as a next step you could focus on specific training for employees that are handling personal data or have responsibilities relating to processing personal data. Depending on your company, the focus could be on Sales, Marketing, Procurement, IT and Human Resources.

In this training, move a step ahead from the basics. Get your colleagues’ attention by sharing practices with them that are relevant for their daily activities. Also, share with them concrete information that is required for the specific teams to understand their responsibilities. For example, Human Resources should be made aware how to handle (former) employee data and data from applicants, and Procurement should be informed about the ins and outs of data processing agreements. And again, make it pleasant and convenient, for example, by inserting game elements in your training or making it interactive by including statements to start a discussion.

Besides traditional training, you need to find ways to keep privacy ‘top of mind’ within the company. You can do that by organizing various other initiatives, for example by sending a newsletter when there is a breach that might be relevant to your company.

Awareness is not just a once-a-year training; it is a constant and continuous effort, and neglecting it can have severe consequences.

Make sure you set your priorities

Next to awareness, it is important to set your priorities. Considering the limited FTE capacity working on GDPR compliance, first think about your GDPR compliance strategy and determine the organization’s risk profile and appetite. This will help you set the right priorities. Your risk profile and appetite may vary: for example, your company may operate business-to-business or business-to-consumer, or it may process more sensitive personal data.

Setting your priorities will not work without first understanding your data and risks. For that, you need to gather all relevant information, such as business plans or blueprints, and involve stakeholders to determine what the data will be used for and to establish the GDPR strategy and risk tolerance given the regulatory landscape. When assessing the risk appetite, take into account what your company wants to achieve with personal data and how it communicates this externally, what role personal data plays in achieving its goals and milestones, and the risk your company is willing to take to achieve those goals effectively and efficiently. This can then be used as input for the approach or GDPR program to implement.

Implementing a pragmatic approach helps to structure privacy tasks and activities, prioritize compliance tasks and provide guidance. For example, you can adopt a generic framework, such as NOREA or NIST, to make sure all your privacy tasks and activities are covered, and tailor it to suit your company’s needs. To tailor it, use the input from the strategy and the risk profile and appetite, taking into account the company’s needs and the legal environment.

To help determine your risk appetite and profile, the first step is to gain an understanding of the processing activities by conducting a data inventory or by maintaining a register of processing activities. This might look like a boring and redundant task to your manager, but it is a crucial step in getting to know your company. The register of processing activities contains details about what personal data the company holds, for what purposes, from which data subjects (customers, employees, business partners, etc.) and more. The register should be centered around processes, divided between the different departments or business activities within your company, rather than based on individual IT systems alone. In line with your company’s business goals and growth ambitions, you can then assess the risks that arise from processing personal data within your company and decide which activities to focus on.
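For illustration only, a register of processing activities can start as something as simple as a structured list kept in a spreadsheet or in code. The sketch below (all field names and sample entries are hypothetical, not a prescribed format) shows the process-centered structure described above, grouped per department rather than per IT system:

```python
from dataclasses import dataclass

# A minimal sketch of a register of processing activities (Art. 30 GDPR).
# Field names and sample entries are illustrative assumptions only.

@dataclass
class ProcessingActivity:
    process: str          # business process, not an individual IT system
    department: str
    purpose: str
    data_subjects: list   # e.g. customers, employees, business partners
    data_categories: list # e.g. contact details, payroll data
    retention: str

register = [
    ProcessingActivity(
        process="Payroll administration",
        department="Human Resources",
        purpose="Paying salaries",
        data_subjects=["employees"],
        data_categories=["name", "bank account", "salary"],
        retention="7 years after end of employment",
    ),
    ProcessingActivity(
        process="Lead management",
        department="Sales",
        purpose="Contacting prospective business customers",
        data_subjects=["business contacts"],
        data_categories=["name", "work email"],
        retention="2 years after last contact",
    ),
]

# Group activities per department to review risks per business function.
per_department = {}
for activity in register:
    per_department.setdefault(activity.department, []).append(activity.process)

print(per_department)
```

Even a basic structure like this makes it possible to review processing risks per department and to spot where the company’s priorities should lie.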

To make it even more concrete: if you are a business-to-business company, you would probably focus more on employee data, since your company holds little or no customer data. If your company is a tech innovation start-up working with AI or other new technologies, you should focus more on the personal data processed by those technologies.

Now that you have set your priorities, you need to start executing. However, privacy projects are rarely on the top 3 items of the budgeting list within a start-up/scale-up, so one of the key success factors is the buy-in of your management, which will be addressed in the next section.

Is your management on board?

When defining your privacy priorities, make engaging your management in the company’s privacy compliance journey a priority, in order to get broader, company-wide buy-in. That is usually a challenging task, as privacy is not a revenue-generating source. So how can you bring your management on board? First, it helps to emphasize the fines that the GDPR imposes for non-compliance. Simply put: privacy does not bring your company any immediate financial benefits or revenues, but it can clearly safeguard your company’s budgets, which can be severely impacted by a privacy fine. A fine imposed for GDPR non-compliance can have a tremendous impact on the whole existence and development of your company. Furthermore, you clearly do not want your newly established company in the news under a GDPR-fine headline.

Secondly, studies show that customers are becoming increasingly conscious of their personal data. Being privacy compliant therefore adds value to your marketing strategy and can provide a clear competitive advantage. Data privacy and security aren’t just a matter of checking the compliance box; they have become strategic competitive advantages that can raise the bar on brand trust with consumers and employees.

Key considerations for business expansion to different countries within the EU

Although the GDPR is intended to harmonize privacy laws and enforcement in Europe, you have to take into account the differences in privacy regulations between European Member States when the company wants to expand. Most Member States have introduced their own national legislation based on certain exceptions provided by the GDPR, for instance when it comes to national identifiers and the age of minors. In addition, not only privacy laws are relevant, but also other national laws and (industry-sector) regulations. For example, national employment laws could have an impact on the way personal data of your employees is processed.

Secondly, besides making sure you adapt your practices to different national laws, you also need to keep up with all the legislative developments at the EU level, such as the enactment of the EU Data Act and the Data Governance Act, and upcoming legislation such as the ePrivacy Regulation, the Digital Markets Act, the Digital Services Act and the AI Act. In an attempt to address all the challenges imposed by the new digital interactions between customers and businesses, EU institutions are working hard on adopting more robust standards for data protection practices, as well as increasing accountability from businesses engaging in such practices.

For example, before expanding to the EU, you need to think about the location of your headquarters. If you are a US-based company willing to expand to Europe, you will encounter differences in the implementation of the GDPR; the one that stands out most is probably enforcement by the Member States’ Data Protection Authorities. Every Member State has its own Data Protection Authority governing the implementation of the GDPR. When your company considers expanding within the EU, you will therefore likely stumble upon deviations in privacy regulations. As mentioned, this has to do with the freedom that the GDPR leaves for national choices, as well as with national laws that can exist alongside the GDPR and, for example, impose stricter requirements on the processing of employees’ personal data or of social security numbers. These challenges show that your company’s business expansion strategy must also be determined in accordance with applicable privacy requirements.


A start-up or scale-up is characterized by a rapidly changing and expanding environment, and often faces many challenges. Given that the GDPR is not always a top priority, scale-ups and start-ups have to be very creative when it comes to efficiently and effectively implementing a compliance program: they are often driven or distracted by the everyday humdrum and lack resources, knowledge and cash flow. A pragmatic privacy program, setting the right priorities, ensuring management buy-in and having privacy-aware employees can help effectively tackle these challenges.

This article was created in collaboration with Valentina Rosca, former DPO of felyx, a scale-up located in the Netherlands.


[EC] European Commission (n.d.). A Europe fit for the digital age. Retrieved from:

Trends, challenges & evolution of security monitoring in the evolving digital landscape

Society is becoming increasingly dependent on digitized processes and systems ([NCTV19]). This trend is also apparent within organizations. As the world continues its digitization revolution, cybersecurity should be, if it isn’t already, a core business requirement in order to protect critical business processes, assets and data from cyberattacks. And unless your security monitoring strategy has been adapted in line with this trend, there is a good chance that such capabilities are slowly eroding or becoming fragmented.

We spoke to Adwait Joshi, Director of Product Marketing for Azure Security, Microsoft, to discuss current trends and issues in the security monitoring space, potential solutions and what the future holds.

About the interviewee

Adwait Joshi is Director of Product Marketing for Azure Security at Microsoft. Adwait has 20+ years of experience in the field of technology, including in the areas of product engineering, networking and security.


Research ([Harv19]) shows that forty-four percent of organizations are expecting fundamental changes in their business models driven by digital disruption. Business IT operating models are also changing rapidly especially with the adoption of Cloud services. As cyber threats continue to evolve, security monitoring and incident response have become a pivotal component of a modern cybersecurity program.

The traditional approach to security monitoring is often no longer practical or sufficient in today’s multi-cloud world, compounded by an ever-increasing skills shortage ([Morg19]). Organizations are moving parts of their infrastructure to different cloud vendors, where there are different dedicated security monitoring solutions, which require their own flavor of tuning and monitoring. This results in fragmented and siloed security monitoring and does not allow for correlation across an organization’s environment. In addition, extra resources must be utilized to learn different security monitoring technologies as well as monitor these dispersed technologies. The major resource shortage is exacerbated by repetitive manual tasks utilizing precious time that could be better spent on more complex tasks.

One likely reason is that traditional on-premise SIEM solutions are not well positioned to facilitate the cloud migration journey many organizations find themselves on.

In this interview, we will discuss solutions for the issues described, including the option to deploy a SIEM in the cloud and leveraging SOAR products that are exploding in popularity because they automate much of the manual repetitive work SOC analysts are normally tasked with.

Interview with Adwait Joshi, Microsoft

KPMG: What changes are you seeing in the IT world and how does this affect the current state of security monitoring solutions?

Adwait: Organizations are embracing cloud and hybrid cloud strategies. That means a combination of public clouds and private clouds, which has increased the demands on data gathering. With that there are a few things to keep in mind:

  • From a security perspective, there is a lot more that needs to be processed, monitored and analyzed;
  • The attack surface is growing because of these hybrid strategies, multiple clouds, your existing on-premise data center, new IoT devices and new user mobility solutions. So, all in all, the devices, the infrastructure and the cloud infrastructure are adding complexity which increases the attack surface. This is something that CISOs need to consider in order to ensure that they have an integrated solution or full view of their enterprise.

With this growing digital space and digital footprint, specifically from a security monitoring perspective, organizations need to understand how they can get an integrated view and how they can have a consolidated solution that will help give them a view across their entire enterprise.

This is the trend we see from an industry-wide perspective: everything is becoming more distributed, public cloud usage has increased, and ultimately that is also driving organizations to use public cloud as a security platform. Gone are the days when people used to ask questions about public cloud and the security associated with it. Now, CISOs have realized the advantages and benefits that a public cloud can offer, especially related to the scale and volume of data that can be processed, and the speed that can be used to help improve the efficiency of their own security operations. And so we see public cloud being used as a security platform.


Figure 1. Yesterday’s view of siloed security monitoring which does not allow for correlation across an organization’s environment. [Click on the image for a larger image]

KPMG: What trends do you see in SIEM or SOAR technology?

Adwait: The challenges we see are all related to the trends that I mentioned earlier. These trends have to do with growing attack surfaces as data volume grows.

And so the challenges are:

  1. The complexity to manage solutions and get a consolidated view across the enterprise is growing rapidly. Having traditional strategies of on-premise software solutions that are or are not connected and having individual point solutions for different clouds, CISOs are telling us that that is not working well, and it is costly to set up and maintain individual solutions.
  2. Security analysts are inundated with the volume of alerts they see. As a result, their efficiency is going down. They are not able to prioritize threats because of false positives and large volumes of alerts. There are multiple reports stating that more than 40-50% of alerts are not processed because of the volume ([ESG19]). So how can we help improve that efficiency, how can we help empower SecOps teams to quickly act on and understand the high-priority attack, the high-priority threat?
  3. Lack of security expertise: because there are so many different solutions and attack surfaces, there is a lot to learn and a lot to protect. On the other hand, we do not have enough security resources in the market. Reports show that by 2021 there will be around 3.5 million unfilled security roles ([Morg19]).
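The alert-volume challenge above is often tackled by deduplicating and prioritizing alerts before they reach an analyst. The following is a minimal illustrative sketch of that idea, not any vendor's actual logic; all field names and scores are assumptions:

```python
# A minimal sketch of alert deduplication and prioritization, one common
# way SOC teams cope with alert volume. Field names and severity scores
# are illustrative assumptions, not any product's schema.

SEVERITY_SCORE = {"low": 1, "medium": 5, "high": 10}

def triage(alerts):
    """Collapse duplicate alerts and return them ordered by priority."""
    merged = {}
    for a in alerts:
        key = (a["rule"], a["host"])      # duplicates share rule + host
        if key in merged:
            merged[key]["count"] += 1     # fold into the existing alert
        else:
            merged[key] = {**a, "count": 1}
    # repeated high-severity rules float to the top of the analyst queue
    return sorted(merged.values(),
                  key=lambda a: SEVERITY_SCORE[a["severity"]] * a["count"],
                  reverse=True)

alerts = [
    {"rule": "failed-login", "host": "ws-01", "severity": "low"},
    {"rule": "failed-login", "host": "ws-01", "severity": "low"},
    {"rule": "malware-exec", "host": "srv-09", "severity": "high"},
]
queue = triage(alerts)
print(queue[0]["rule"])  # malware-exec
```

Even this trivial scoring collapses the two duplicate failed-login alerts into one entry and pushes the single high-severity alert to the front of the queue.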

Going into more detail regarding the trends of SIEM and SOAR technology, we see that more and more cloud solutions/technologies are being embraced because of the data requirements. Most vendors are now migrating their solutions to the cloud because of these requirements. For Microsoft, it was natural from the beginning to design a cloud native solution with Azure Sentinel. We decided to build it completely in the Azure platform and that gives us the ability to process data quickly without our customers having to think about how to scale their infrastructure or how to set up and maintain their infrastructure. We see that even traditional vendors are now offering some cloud services.

Furthermore, we see more and more usage of machine learning. We have built-in machine learning models that are trained on data consisting of more than 8 trillion signals every day. Based on all those insights, we also train our own machine learning models. With Azure Sentinel, we use those models in such a way that security analysts don’t have to do anything extra. They can use those models within the product to gain knowledge, get quick insights and prioritize billions of signals. Making better and more use of machine learning therefore helps to empower and improve the efficiency and effectiveness of the security analysts.

All technologies, automation, orchestration and the traditional value of SIEM are coming together. Besides SIEM and SOAR, the User and Entity Behavior Analytics aspects are also being integrated and combined into these solutions. CISOs are looking for this type of combined approach to help make their SOC more efficient. They can have integrated, more cohesive solutions that give these capabilities.

KPMG: You mentioned a lack of security expertise as one of the challenges. Is this something you see machine learning and AI not completely resolving, but at least easing by freeing up resources?

Adwait: It’s not the full solution; machine learning will not solve the lack of expertise. That requires a partnership between security organizations and security vendors, helping each other to create more training and certifications, which will in turn create more opportunities for people to learn. This is a long-term effort for the industry. On the other hand, machine learning and AI are there to help make your existing security more efficient and drive your business. It is important to understand what business results AI is driving. Microsoft uses AI internally, and the insights that we gain are provided in Azure Sentinel. We want to save time and cost for our clients by providing built-in machine learning models.

Apart from machine learning and AI trends, I also see another trend in increased usage of automation and orchestration within SecOps. Security analysts and even CISOs want to have automation and orchestration in place for effective incident response.

KPMG: What is in your view the unique thing about Azure Sentinel compared to more traditional SIEM and SOAR solutions?

Adwait: Azure Sentinel is a cloud native solution, created on a proven public cloud platform. That helps us integrate Azure Sentinel with all other Azure services, such as Log Analytics. There are also Logic Apps that provide you with automated playbooks. You can make your own playbooks for incident response or develop playbooks to integrate with your existing solutions, like ServiceNow or your firewalls. That is the benefit we can offer with public cloud. The other benefit of public cloud is that it does not require setup or maintenance, and it scales dynamically to address your needs. When your data requirements and analysis needs grow, Azure Sentinel grows and changes with you.

The other differentiator is the unique threat intelligence and built-in machine learning models that we provide. Microsoft processes more than 8 trillion threat signals per day. Imagine the insights we draw from them. They are all distributed in the services we offer: robust threat intelligence and robust insights built from diverse signals. All the detection and hunting rules we provide are based on our own experiences and on what we run inside Microsoft. Our security operators and security analysts are constantly working on these queries, and we pass that expertise on to our customers, which is very efficient for them. With the built-in machine learning in Azure Sentinel, we have seen alert fatigue go down by more than 90%. Reducing false positives to pinpoint prioritized alerts is the efficiency that we help drive with these built-in models.

We also offer our customers the expertise of the security analysts working within Microsoft’s own SOC. We leverage the analysis and insights from the activities they perform to constantly improve our detection and response capabilities. This is combined with our open community on GitHub, where our partners and customers also contribute, not just with machine learning models but also with detection rules, threat hunting rules and even playbooks. So we offer more than Microsoft security experts; we offer a community that helps improve your security operations.

Then there is the integration with your existing tools: more than 200-250 apps that you can integrate with, and you can create playbooks to automate your incident response. We also work with many solutions that customers might already have. Many customers have invested in existing solutions and may not be able to fully switch to the Microsoft ecosystem. We can work with that! For example, we can integrate Azure Sentinel with ServiceNow: manage and monitor the incident with Azure Sentinel and use the ticketing system of ServiceNow.

KPMG: What kind of journey do you see clients taking for example that already have a SIEM solution, such as Splunk, and would like to move to Azure Sentinel? Do you have any thought on how that would work?

Adwait: There are many scenarios, and each situation is different. Some organizations really want to switch to Sentinel; we are working with customers who are in the process of migrating from their existing solution to Azure Sentinel. That is one approach customers take. Many other organizations think of it as side by side: as they grow their footprint in the cloud, they want a cloud solution, so they embrace Sentinel to collect all their cloud data, while their existing on-premise solution is used to collect on-premise data, and the two solutions are connected.

KPMG: In this hybrid situation, is there something that works well for clients or do you think there is a need to move only to cloud?

Adwait: Good point! Our discussions were mostly about this strategy as part of the journey and not as the end strategy. We see more and more customers embracing cloud technology. This is therefore part of the journey to get the benefits of the cloud, but because of their existing investments (especially for large mature organizations), they may not be able to switch all their instances at once. So, moving to the cloud and Azure Sentinel is a gradual process.


Figure 2. Tomorrow’s view of security monitoring where visibility and correlation across an entire organization, including on-premise and multiple cloud environments, is possible. [Click on the image for a larger image]

KPMG: What are the challenges you see with customers that are now implementing Azure Sentinel, is there something that you see that is often difficult for this journey?

Adwait: Something that all customers need to think about is which data sources they want to collect data from. They must have a strategy to connect those data sources to Azure Sentinel, especially if they have an existing on-premise solution. They also have to think about how they are going to bring all that data into the cloud, about data normalization, and about which critical data to send into the cloud. Such a strategy should be in place before you start configuring Azure Sentinel.


With organizations increasingly embracing cloud and hybrid cloud strategies, it is harder to have a consolidated view across an organization. Alerts are increasing, and in different locations, and the issue is compounded by a lack of expertise in some organizations. SOAR capabilities in products such as Microsoft’s Azure Sentinel can help alleviate some of this pressure by automating repetitive tasks, freeing up precious resources for tasks that require deeper thinking. Machine learning and User and Entity Behavior Analytics can also help to make a SOC more efficient if implemented well. These tools, combined with organizations and vendors working together to upskill resources, will help to tackle these complex challenges in the long term.

What is “SIEM”?

Primarily a detection mechanism, a Security Information & Event Management (SIEM) tool ingests events and data flows from multiple sources such as networking devices, endpoints, Active Directory and security tooling (e.g. EDR, IDS), as well as data to enrich events and alerts, such as asset databases and Threat Intelligence feeds. This data is then used to identify, for example, non-compliance and real-time security incidents, such as an attacker performing malicious actions on a workstation.
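As an illustration of the kind of correlation rule a SIEM evaluates over ingested events, the sketch below flags a possible brute-force attack when one source IP produces five or more failed logins against a host within ten minutes. The event fields and thresholds are assumptions for the example, not any product's schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative sketch of a SIEM-style correlation rule. Event fields
# ("type", "src_ip", "host", "time") and the threshold/window values
# are assumptions for this example.

def detect_bruteforce(events, threshold=5, window=timedelta(minutes=10)):
    by_source = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] != "failed_login":
            continue
        times = by_source[(e["src_ip"], e["host"])]
        times.append(e["time"])
        # keep only timestamps inside the sliding window
        while times and e["time"] - times[0] > window:
            times.pop(0)
        if len(times) >= threshold:
            yield {"src_ip": e["src_ip"], "host": e["host"], "time": e["time"]}

# Six failed logins from one IP, one minute apart: the rule fires.
t0 = datetime(2020, 1, 1, 9, 0)
events = [{"type": "failed_login", "src_ip": "203.0.113.7",
           "host": "ws-042", "time": t0 + timedelta(minutes=i)}
          for i in range(6)]

alerts = list(detect_bruteforce(events))
print(alerts[0]["src_ip"])  # 203.0.113.7
```

A production SIEM applies hundreds of such rules across enriched event streams; the value of the tool lies in doing this correlation at scale, across all connected sources.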

What is “SOAR”?

A Security Orchestration Automation and Response tool enhances detection capabilities and contextualization of incidents, as well as accelerating incident response. Security Operations Analysts are often stretched resources and SOAR aims to relieve this by tactically enhancing and streamlining workflows where human interaction is not needed, freeing up valuable time to focus on tasks that require more critical thinking. For example, a typical phishing alert could take an analyst an average of 15 minutes to investigate. A SOAR tool could automatically analyze the email, place the attachment in a sandbox and return all of that information to the analyst for review.
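The phishing-triage steps described above can be sketched as a simple playbook. The helper functions below are stand-ins (assumptions) for real integrations such as a threat-intelligence lookup and a sandbox; a real SOAR platform would call those services instead:

```python
# A sketch of the phishing-triage playbook described above. The helper
# functions are hypothetical stand-ins for real integrations (threat
# intelligence, sandbox); names and fields are assumptions.

def check_sender_reputation(sender):
    # stand-in for a threat-intelligence lookup
    return "bad-domain.example" in sender

def detonate_in_sandbox(attachment):
    # stand-in for submitting the file to a sandbox and awaiting a verdict
    return {"verdict": "malicious" if attachment.endswith(".exe") else "clean"}

def triage_phishing_alert(email):
    """Enrich a phishing alert so the analyst only reviews the summary."""
    enrichment = {
        "sender_flagged": check_sender_reputation(email["sender"]),
        "sandbox": [detonate_in_sandbox(a) for a in email["attachments"]],
    }
    enrichment["escalate"] = enrichment["sender_flagged"] or any(
        r["verdict"] == "malicious" for r in enrichment["sandbox"]
    )
    return enrichment

email = {"sender": "invoice@bad-domain.example", "attachments": ["invoice.exe"]}
result = triage_phishing_alert(email)
print(result["escalate"])  # True
```

The point is that the repetitive enrichment steps run without human interaction, so the analyst's fifteen minutes shrink to a review of the returned summary.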

Another strength of SOAR is that it allows integration of multi-vendor tooling which generally does not directly interface with each other, bringing them together in an automated way. Being able to work across a multi-vendor security tooling ecosystem is important for organizations to avoid single vendor dependence, and is also very valuable for MSSPs who work with different organization tool sets.


A SIEM is focused on ingestion of data and detection mechanisms (finding the needle in the haystack), while SOAR is mostly focused on enriching the data and automating repeatable tasks related to the response of the incidents generated from the SIEM. While there are more and more standalone SOAR solutions, vendors are adding SOAR-like features to their SIEM, with some vendors launching solutions that fully encompass both SIEM and SOAR.


[ESG19] ESG (2019). Security Analytics and Operations: Industry Trends in the Era of Cloud Computing.

[Harv19] Harvey Nash / KPMG (2019). A changing perspective.

[Morg19] Morgan, E. (2019, 24 October). Cybersecurity Talent Crunch To Create 3.5 Million Unfilled Jobs Globally By 2021. Retrieved from:

[NCTV19] NCTV (2019). Cybersecuritybeeld Nederland.

Cyber security in mobility

Technological breakthroughs are affecting the mobility ecosystem at a rapid pace. The continuing developments within the mobility sector raise questions about data sharing, data privacy and cyber security. It is of vital importance that we think carefully about how we can best prepare for the future, how we can organize our systems, which parties we work with and how we work with them. This is especially true considering that the nature of cyber threats is changing and the boundaries between physical and digital are becoming increasingly blurred.


We are on the verge of major and global changes. Technological breakthroughs from the so-called “fourth industrial revolution” are affecting our daily lives. These disruptive technologies and trends – such as the Internet of Things, automation, robotics, virtual reality and artificial intelligence – are changing the way we live and work. It is of vital importance that we think carefully about how we can best prepare for the future, organize our systems and which parties we work with ([Dams19]). This is, unsurprisingly, also the case for the mobility ecosystem.

Imagine a world where travelers move seamlessly from place to place. Where they stipulate their journey and travel preferences on an app, computer or kiosk, and are presented with journey choices according to their preferences with the best price/quality for their travel plan. Where public and private transport modes are fully integrated, where logistics and transportation providers co-operate to perform as efficiently as possible and where trucks don’t drive back to the distribution center without cargo. Where algorithms guide our decisions. Where payment happens automatically, using a processing method of choice. Where the transition from one mode of transport to another is straightforward, perfectly timed and effortless. Where there are plenty of on-demand travel options, and users have ready access to real-time journey information and an integrated journey planning platform providing on-the-go notifications, alerts and alternatives for unexpected events ([Thom17]). Where the future of mobility will contribute to the safety, accessibility and sustainability of mobility.

Sounds very promising. Where can I sign up? However, whereas the future of mobility offers us many advantages, it also raises questions. What about companies sharing my data to facilitate my journey (data privacy)? Can I fully trust the software that controls the steering of my car without a physical backup (cyber security)? Historically, technology has always had two faces. One is the perspective of progress and innovation. The other is a potential shift of vulnerabilities, which may lead to disruption of the steady, comfortable “normal”. The World Economic Forum identified current trends and risks in its 2020 Global Risk Report. We couldn’t agree more on some of the top risks, such as data fraud, information infrastructure breakdown and, last but not least, cyber-attacks ([WEF20]). Looking back at previous revolutions in mobility: when the first cars started rolling off the assembly lines 100+ years ago, mobility in its “modern-day form” became available to the masses. Nobody could truly foresee its benefits, or potential issues such as traffic jams, parking permits and CO2 pollution. And here we are, on the verge of the next revolution: industry 4.0 meets the mobility ecosystem.


Figure 1. WEF – Global Risk Report 2020. [Click on the image for a larger image]

How do we plan for security with fast-paced digitalization, hyper connectivity and automation of mobility? And how do we manage trust in this complicated sector that – upon disruption – may lead to social, economic and physical disturbances?

Let’s explore the security challenges in the mobility ecosystem

We have taken the first three elements of the NIST cyber security framework to analyze the mobility ecosystem ([NIST20]):

  • Identify: know yourself, but especially know who is interested in you.
  • Protect: protect your critical services against anyone.
  • Detect: spot the “enemy” when they actually bypass the protective measures.


Figure 2. NIST CSF. [Click on the image for a larger image]

The framework also includes guidelines on how to respond and recover from an attack. We will not cover these topics in this article, but we will in a subsequent article:

  • Respond: how to react when the “enemy” bypasses protective and detective measures but gets noticed in time.
  • Recover: return the business back to usual when “shit hits the fan and the enemy compromised the estate”.

Identify: know yourself and know your enemy

Keeping that analysis in mind, we should start by determining the mobility “crown jewels”, the threat actors and align our security strategy for the mobility ecosystem.

The Mobility Crown Jewels: identify what matters most

The most valuable data, core processes and critical services that form the heart of an organization’s business are commonly referred to as Crown Jewels. Organizations need to identify and protect these. The data streams running through and within the mobility ecosystem can be seen as critical to its proper functioning. Another example: the electrification of the mobility fleet in the Netherlands inevitably introduces security and capacity problems for the electricity grid.

The Mobility Ecosystem: it is all about the weakest link

The mobility ecosystem is fundamentally changing and redefining conventional mobility usage. Traditional market boundaries are being broken. The automotive sector, energy sector, public sector, and transport and logistics sector all have to deal with these trends, but also with new business models and mobility concepts. Organizations today operate within connected ecosystems that include their suppliers. If any part of the ecosystem is attacked, other members of the ecosystem are at risk, making the ecosystem as strong as its weakest link.

Knowing yourself is all about focusing on that what matters most, both to yourself and to others you depend on. In the protection thereof, it is essential to find the right balance in your security efforts.

Threat actors: know your enemy

Knowing yourself is important, but as the saying goes, “keep your friends close and your enemies even closer”: it is just as important to know your “enemies”. Do you really know who is interested in your business? Or who wants to disrupt it? Or who wants to steal that small – but relevant – piece of information that is called competitive advantage?


Figure 3. Mobility ecosystem.

Threat actors in cyberspace, and similarly in the mobility ecosystem, can be roughly divided into the following categories ([Dams19]). Most hackers are referred to as script kiddies: inexperienced, usually young individuals (or journalists looking for a juicy story) who use known exploits and vulnerabilities. The impact of their cyber-attacks is, however, not to be underestimated, as perfectly illustrated by the success of the Mirai DDoS botnet, which was the work of low-skilled script kiddies. This infamous botnet took down major websites via massive distributed Denial-of-Service attacks using hundreds of thousands of compromised Internet-of-Things devices ([Burs17]).

Another important category of digital threat actors is cyber-criminal gangs, whose motivation is mainly financial. As always, Hollywood managed to blur the lines between imagination and potential real-life future scenarios with its 2017 blockbuster The Fate of the Furious, in which hackers used autonomous cars as attack vehicles. Earlier, The Italian Job (2003) showcased automotive hacking skills, using and abusing the infrastructure to ensure a swift getaway for the famous Mini Coopers loaded with millions worth of gold. It would not be the first time that Hollywood’s sometimes wild and endless imagination more or less predicted the future. In fact, this is already happening today, as both carmakers and the cyber security industry have accepted that connected cars are as vulnerable to hacking as anything else linked to the internet ([Gree17]).

Last but not least, nation state actors have entered the cyber-crime arena. Ever since Edward Snowden, a former NSA contractor, revealed the depth of the techniques used by the United States, it has become clear how much nation state actors invest in dominating the digital world. The US Department of Homeland Security has been warning against cyber-attacks by groups such as Dragonfly for many years. This actor group targets networks in the EU/US belonging to businesses involved in critical national infrastructure as well as their suppliers. The targets range from small businesses to major corporations responsible for the generation, transport and distribution of electricity. Such attacks have the potential to halt the most important power sources of the mobility ecosystem, crippling society ([NCTV19]).

The threat context is evolving even faster

The threats the mobility sector is facing are changing, from “harmless” theft of sensitive consumer data to impactful remote interference with the engine, steering and anti-lock braking systems. The risks linked to modern forms of mobility are an unwanted side effect of wireless connectivity with external networks, often through a mobile network connection (long-range wireless signal hacks) ([Youn16]). Similarly, passengers can connect a USB device or smartphone through the information port (wired and indirect physical hacks) and wireless devices via tethering or via onboard WiFi/Bluetooth systems (short-range hacks). All these connections increase the attack surface for malicious actors, and this is only accelerated further by the hyper connectivity in the mobility ecosystem.

With the increase of automation within newly developed forms of mobility, the number of functions these devices run increases as well, varying from headlight control to critical systems such as brake control. Just like in any other system, compromising one function can endanger the entire system through the Controller Area Network (CAN) bus, which links a car’s various computers and information points together. Take the modern-day car for example: the mechanical linkage between the steering wheel and the front wheels has been replaced by an electric power system, “drive by wire”. After gaining access to the central bus, it becomes easy to control the car ([Cunn14]).
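To make the weakness concrete: a classic CAN frame identifies its sender only by an arbitration ID and carries no authentication. The sketch below (our illustration, using the Linux SocketCAN wire layout; the IDs and payloads are made up) shows how trivially a frame is constructed once bus access is gained.

```python
import struct

# A classic CAN 2.0 frame carries an arbitration ID and up to 8 data bytes,
# but no sender authentication: any node on the bus can emit any ID.
# Below we pack a frame in the 16-byte SocketCAN wire layout ("<IB3x8s"):
# 4-byte CAN ID, 1-byte data length, 3 padding bytes, 8 data bytes.

def build_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a CAN frame in SocketCAN's 16-byte wire format."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical example: an attacker who reaches the bus can emit a frame
# with the same ID a safety-critical controller uses. Receivers cannot tell
# the difference, because frames are trusted purely by their ID.
frame = build_can_frame(0x0A5, b"\x00\x10")
assert len(frame) == 16
```

This is why modern automotive security work (e.g. under ISO/SAE 21434) puts so much emphasis on message authentication and network segmentation on top of the bare bus.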

Protect: making sure to keep the bad guys out

Secure by Design: secure from the start

In software engineering terms, “Secure by Design” means that software has been designed from the outset to be secure, with security considered at the start of the project and throughout. This mindset is much needed in critical sectors where human lives are on the line and where safety and security are strongly related; it goes without saying that mobility that is not secure will ultimately not be safe.

Physical = Digital

“Mobility as a Service (MaaS) is the integration of various forms of transport services into a single mobility service accessible on demand” ([Tran19]).

Zero Day attacks are aimed at commonly used systems. A Zero Day is a computer software vulnerability that is generally unknown to those interested in mitigating it. Zero Day attacks in the mobility market will not only have an economic and social impact, but will also physically endanger people and cascade into other industries.

Imagine a scenario where an alleged cyber-criminal or terrorist group finds a Zero Day vulnerability that affects tens of thousands of cars in Europe…

Organizations are beginning to compete and rush for a strong position in the mobility ecosystem to capture the greatest commercial value. However, for this to happen in a secure fashion, policy makers and organizations must embrace a collaborative approach rather than pursuing fragmented or isolated developments of their own ([KPMG19]). The “rush/time to market” to gain competitive advantage inherently causes more security-related issues, as the “Secure by Design” mindset loses focus and significance. How detrimental this loss of focus can be was ruthlessly illustrated by the recently published investigation report by the House Committee on Transportation & Infrastructure on the two crashes that killed 346 people aboard Boeing’s 737 Max ([Defa20]). The investigation outlined the “horrific culmination” of engineering flaws, mismanagement and a severe lack of federal oversight. The report condemns both Boeing and the Federal Aviation Administration for safety failures, emphasizing that Boeing prioritized profits over safety and that the agency granted the company too much sway over its own oversight ([Chok20]; [Levi20]).

Setting the standard for automotive cybersecurity

Traditional automotive safety and security standards have not sufficiently covered the topic of cybersecurity. The industry needed specific guidelines and standards for automotive cybersecurity. OEMs, ECU suppliers, cybersecurity vendors, governing organizations and SMEs joined forces to compose a deep and effective global standard for automotive cybersecurity. Using four main working groups, focusing on 1) risk management, 2) product development, 3) production, operation, maintenance and decommissioning, and 4) process overview, the ISO/SAE 21434 draft was born ([UPST20]). The standard formulates a common language between automotive actors, criteria for effective automotive cybersecurity, accepted industry levels of cybersecurity assurance, and regulatory enforced standardization.

Connecting the future of mobility: how do we keep the rapidly expanding need for data communication in the mobility ecosystem secure?

The (future) mobility world drives the ecosystem into a demand for real-time secure data flows, data availability and – of vital importance – data that can be trusted. As the ecosystem has yet to settle on common technologies and standards, the timing is right to introduce the principles of Secure by Design and Privacy by Design.

The automotive world is divided into two camps. NXP, among others, has placed its bets on ITS-G5, a sibling of WiFi in which devices set up a direct link. Companies such as Qualcomm, on the other hand, see more in C-V2X (cellular V2X), a technology based on 4G in which connections can run both directly and via a cell tower. The European Commission has committed to direct connections instead of cellular technology as the legal basis for communicating vehicles ([Edel18]).


Besides 3G/4G/5G connected mobility, other exciting forms of mobility communications are making their way into the mobility domain. These innovations, namely LiFi (Light Fidelity) and car-to-car wireless mesh networks, even have the potential to bypass the telcos. LiFi is a communication technology that uses light as its medium. It has evolved over the past years, has been proven to be secure and efficient, can send data at very high rates, and might make its appearance in the mobility industry in the near future ([Haas18]). Car-to-car wireless mesh networks are basically a web of WiFi networks, made up of local networks to which cars and infrastructure are directly connected (also peer to peer).


Figure 4. Overview of communication relationships between different actors in the mobility ecosystem ([KPMG19]).

Privacy by Design: what about data?

The continually increasing demand for data raises fundamental questions. Will anyone own data in the future? How can data be shared in a way that respects the customer’s privacy and does not breach their permissions? The success of a mobility ecosystem will depend heavily on users and businesses being able to trust that their data is used responsibly and appropriately. Legislation to this effect can be found in:

  • Article 10, Dutch constitution: right to respect for privacy
  • Article 17 GDPR: right to erasure (“right to be forgotten”)
  • Delegated Regulation (EU) No 886/2013: “provision, where possible, of road safety-related minimum universal traffic information free of charge to users”

One of the distinguishing features of the mobility ecosystem will be the sheer amount of data it generates. Forecasts suggest that by 2025, the global datasphere will grow to 163 ZB (a zettabyte is a trillion gigabytes) – ten times the 16.1 ZB of data generated in 2016 ([Mell18]). Gartner analysts estimate that around 80% of enterprise data today is unstructured, meaning it is not held in structured databases ([Rizk17]).

In a mobility ecosystem, data will inevitably flow between the different players so that the right services can be offered at the right time. This means that ownership, and especially responsibility of the data, will also change as it moves through the ecosystem ([KPMG18]).

Analogous to Secure by Design, Privacy by Design means incorporating privacy by default, embedded into every standard, protocol and process, and given priority within the organization. Although privacy in the mobility ecosystem is receiving more and more attention, it is not yet at the level of Privacy by Design (just as security is not yet at the level of Secure by Design).

The mobility ecosystem needs to develop innovative and collaborative data exchange platforms as soon as possible. The focus should be on incoming data that is cleansed and anonymized through a system of digital IDs. User information is tagged to a digital identity, which should not be accessible on a personal level. Under this model, data ownership would be shared between the participants across an ecosystem ([KPMG18]).
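One conceivable way to implement such digital IDs (an assumption on our part, not a description of any existing platform) is pseudonymization with a keyed hash: the platform replaces each personal identifier with an HMAC computed under a secret key it holds, so shared records remain linkable without exposing the person behind them.

```python
import hashlib
import hmac

# Hypothetical key held only by the data exchange platform; without it,
# the pseudonym cannot be traced back to the original identifier.
PLATFORM_KEY = b"secret-held-by-the-exchange-platform"

def digital_id(personal_identifier: str) -> str:
    """Derive a stable pseudonym from a personal identifier."""
    return hmac.new(PLATFORM_KEY,
                    personal_identifier.encode(),
                    hashlib.sha256).hexdigest()

# The same person always maps to the same pseudonym, so records can be
# linked across data sets without revealing who the person is.
assert digital_id("license-plate:XX-123-Y") == digital_id("license-plate:XX-123-Y")
assert digital_id("license-plate:XX-123-Y") != digital_id("license-plate:ZZ-999-A")
```

A keyed hash (rather than a plain hash) matters here: without the key, an outsider cannot brute-force license plates or names back out of the pseudonyms.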

Fortunately, we see promising initiatives emerging, such as the National Data Warehouse for Traffic Information (NDW). This Dutch organization is known for its enormous database of both real-time and historic traffic data. In the NDW, 19 public authorities work together on collecting, storing and distributing traffic data. The data is used to provide traffic information, to ensure effective traffic management, and to conduct accurate traffic analyses, with the objective of better accessibility and traffic flow ([NDW20]). Adding more actors that are active in the mobility ecosystem to the above-described independent data exchange platform(s) could be a very interesting starting combination. Besides national initiatives, there is also movement at the European level: all European Transport Ministers, the European Commission and industry partners joined forces and established the Data Task Force on Connected and Automated Driving. Their goal is to take the first steps towards data sharing for safety-related traffic information in the European Union ([KPMG18]).

Detect: spot the “enemy” when they actually bypass the protective measures

Detect is about developing and implementing the right measures (governance, people, processes and technology) to identify when the “enemy” has bypassed the protective measures. In this article we only focus on the collective measures.

All stakeholders in the ecosystem have to work on their own detection measures. In the cyber security world, “assume compromise” is a recognized concept: despite protection measures, ecosystem players will one day be hacked. Dutch companies may not even be well prepared to detect and respond to such attacks ([Dams19]).

Even more so, with the increased complexity, interconnectivity and mutual dependency (also in view of the weakest link in an ecosystem), it becomes very challenging – if not impossible – for individual organizations to implement adequate measures on their own.

In order to overcome this, it is important that the stakeholders in the ecosystem are working together on cyber topics through a Mobility Information Sharing and Analysis Center (MISAC).

A MISAC is an excellent way for organizations within the same industry to collaborate on increasing their collective digital resilience. The MISAC is used to share security expertise, share insights on incidents to prevent escalation in the ecosystem, support incident response, and ideally act as the security eyes and ears of the mobility ecosystem.

To set up a MISAC, the years of experience of the National Cyber Security Centre (NCSC), which has already established many ISACs, can be leveraged, along with its roadmap for setting up an (M)ISAC ([NCSC20]). Examples of industries that have already implemented an ISAC include – but are not limited to – Airport, Chemistry/Oil, Energy, Financial Institutions, Legal and Nuclear.

Respond & Recover

As referenced in the introduction, this article focuses on the first three elements of the NIST cyber security framework: Identify, Protect and Detect. In the next edition we will revisit Detect in more detail and cover the Respond and Recover functions.

Our vision for the mobility ecosystem

We are on the verge of major and global changes. Technological breakthroughs from the fourth industrial revolution are affecting our daily lives. It could be said that we are not in an era of change, but in a change of era. It is of vital importance that we think carefully about how we can best prepare for the future, organize our systems and which parties to work with ([Dams19]).

Stakeholders in the mobility ecosystem should acknowledge the increasing complexity of the system (where working together is a must; and where there is full dependency on the weakest link – third party risk), the need for knowledge to be shared (from internal company-restricted to non-competitive sharing), and the need for a true Secure by Design and Privacy by Design approach.

The importance of independent consulting/knowledge firms and industry consortia will grow. They should grasp the opportunity and use their broad knowledge, experiences and expertise, which they accumulated by working in various industries, and transfer this into the mobility ecosystem.

From isolation to hyper connected ecosystems

Organizations in the mobility chain depend on each other’s efforts to reduce risks to an acceptable level. To this end, joint measures must be taken that are suitable for the entire chain. In essence, our vision for the future is a world in which:

  • competitors work together to share data to mutual non-competitive advantage:
    • those organizations that attempt to keep data to themselves are unlikely to succeed – the network will simply be too big, too complex, and too open-sourced for this to be viable;
  • we will need to see ecosystems, or “federations of competitors”:
    • success will be about collaboration and cooperation – it will be about “co-opetition”!
    • joint effort in securing and monitoring the mobility ecosystem;
    • ensuring that interactions between the ecosystem stakeholders are valid, accurate and can be trusted;
  • we will need to see the creation of independent data exchange platforms – shared platforms into which data would flow and:
    • incoming data is cleansed and anonymized through a system of digital IDs;
    • the protection measures are data oriented (not system, database or network oriented – the “zero trust” principle has to be applied);
    • access is regulated for each and every individual piece of data (based on identity, context, etc.);
  • as said earlier, but worth repeating: a world in which Secure by Design and Privacy by Design are the “new normal”.
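The data-oriented, per-item access regulation described above can be sketched in a few lines. This is a minimal attribute-based illustration of the zero trust principle (the roles, purposes and record are hypothetical), where every request is checked against the data item's own policy and the default is deny:

```python
from dataclasses import dataclass, field

@dataclass
class DataItem:
    """A single piece of data carrying its own access policy."""
    payload: str
    allowed_roles: set = field(default_factory=set)
    allowed_purposes: set = field(default_factory=set)

def access(item: DataItem, role: str, purpose: str):
    """Return the payload only if both identity and context checks pass;
    nothing is granted merely for being 'inside the network'."""
    if role in item.allowed_roles and purpose in item.allowed_purposes:
        return item.payload
    return None  # default deny

# Hypothetical record on a data exchange platform.
traffic_record = DataItem(
    payload="junction A10/S112: 412 vehicles/hour",
    allowed_roles={"traffic-analyst"},
    allowed_purposes={"traffic-management"},
)

assert access(traffic_record, "traffic-analyst", "traffic-management") is not None
assert access(traffic_record, "traffic-analyst", "marketing") is None
```

The point of the sketch is that the policy travels with the data item itself, not with the system, database or network that happens to store it.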


[Burs17] Bursztein, E. (2017). Inside the infamous Mirai IoT Botnet: A Retrospective Analysis. Cloudflare. Retrieved from:

[Chok20] Chokshi, N. (2020, September 16). House Report Condemns Boeing and F.A.A. in 737 Max Disasters. Retrieved from:

[Cunn14] Cunningham, W. (2014, February 11). Power steering shifts to electric. Retrieved from:

[Dams19] Dams, T. et al. (2019). Gaming the new security nexus. Retrieved from:

[Defa20] Defazio, P. et al. (2020). The design, development & certification of the Boeing 737 MAX. Retrieved from:

[Edel18] Edelman, P. (2020, October 22). EU kiest kant van NXP-kamp voor communicerende auto. Retrieved from:

[Gree17] Greenberg, A. (2017, August 16). A Deep Flaw in Your Car Lets Hackers Shut Down Safety Features. Retrieved from:

[Haas18] Haas, H. (2018). LiFi is a paradigm-shifting 5G technology. Elsevier, 2018(3), 26-31. Retrieved from:

[KPMG18] KPMG (2018). Mobility 2030 – Data rules. Retrieved from:

[KPMG19] KPMG (2019, December). Mobility 2030: Are you ready to rise to the challenge? Retrieved from:

[Levi20] Levin, A. (2020, September 16). Sweeping Failures and Insufficient Oversight Led to Boeing 737 Max Crashes, Scathing House Report Finds. Retrieved from:

[Mell18] Mellor, C. (2018, October 23). Igneous debuts DataThings for unstructured data management. Retrieved from:

[NCSC20] NCSC (2020). Samenwerking in een ISAC. Retrieved from:

[NCTV19] NCTV (2019). Cyber Security Assessment Netherlands. Retrieved from:

[NDW20] NDW (2020). National Road Traffic Portal. Retrieved from:

[NIST20] NIST (2020). NIST Cybersecurity Framework. Retrieved from:

[Rizk17] Rizkallah, J. (2017, June 5). The Big (Unstructured) Data Problem. Retrieved from:

[Thom17] Thomas, E. et al. (2017). Reimagine Places: Mobility as a Service. Retrieved from:

[Tran19] Transit Protocol (2019, June 19). What is Mobility as a Service? Retrieved from:

[UPST20] Upstream (2020). ISO/SAE 21434: Setting the Standard for Automotive Cybersecurity. Retrieved from:

[WEF20] World Economic Forum (2020). The Global Risks Report. Retrieved from:

[Youn16] Young, M. (2016, December 12). The Big Problem with Connected Car Security That No One is Talking About. Retrieved from:

Improved payment security implies innovation in authentication

Innovation in the online payment sector should go hand in hand with innovation in authentication. The battle against cyber payment fraud requires improvement of the authentication methods used. Regulation such as the second Payment Services Directive has introduced requirements that set a minimum level of security. Multi-factor authentication and the dynamic linking of authentication codes are measures that should be looked into. At the same time, the customer journey should be optimized without losing sight of security concerns. How can the cyber security of payment systems be ensured by using the right multi-factor authentication methods? This article contains advice from professionals regarding the protection of online payment systems.


Multiple innovative online payment services and business models have emerged, encouraged by new regulation such as the second Payment Services Directive (PSD2) in Europe. Additionally, there has been a significant increase in cyber fraud in the banking and payments sector. According to research from Betaalvereniging Nederland, the damage from cyber fraud in internet banking in the Netherlands alone more than doubled, from €3.81 million in 2018 to €7.94 million in 2019 ([Beta20]).

This has been accompanied by a spike in e-commerce and the rise of mobile-first, resulting in a gradual transition in the way consumers use online services. Customer centricity drives this. But at the same time, it raises concerns about security and the way consumers authenticate themselves online for these new innovative services.

Cyber criminals are professionalizing in the payment domain. The European Payments Council stated this in its recent report highlighting the current threats and fraud trends ([EPC19]). Among the trends noted, authentication and activation are common themes. It is very important that financial institutions stay ahead in the way they arrange their authentication, because the right authentication method adds an additional layer of protection to sensitive data. In 2018, PSD2 was introduced, which set requirements for authentication and created new innovative possibilities in the payment sector. However, cyber criminals have also become more inventive in finding accessible ways of bypassing authentication methods, especially SMS-based authentication. Hackers trick phone companies into swapping a phone number to a new device, so-called SIM swapping. In this way, hackers do not need the actual device to receive the two-factor authentication codes for the financial and other services that the consumer uses. Without the right authentication methods, data breaches can occur easily, and large amounts of money and sensitive information can be lost. Therefore, payment security goes hand in hand with innovation in authentication for all online services.

While companies in the financial sector are under pressure to ensure the safety of the sector and their consumers, this article takes a broader perspective. When protecting the payment and banking industry and ensuring secure online authentication, we need to look beyond this industry. This is because safe online authentication is relevant to all online services to protect the (personal) information in these services. This article starts with exploring the opportunities for innovation in payments, followed by security-centric journey optimization, cyber fraud and the right direction of authentication. In this article, questions concerning security will be discussed, such as: “How do we balance the opportunities for enhanced customer experience with the security concerns of cyber fraud?”, “Why is the online two-factor authentication using SMS, both within and out of the banking sector, a key concern?” and “Why does the SMS-based two-factor online authentication remain common practice in the industry for online authentication, while it brings a lot of security risks?”

PSD2 offers opportunities and security

The introduction of PSD2 in 2018 gave consumers options to use new online payment services and increased the opportunities for innovation in the online payments space even further than was already the case ([EPoC15]). PSD2 made it possible to give third parties access to payment accounts, with the consumer’s explicit consent, resulting in access to payment accounts anywhere and anytime.

New business models are, therefore, emerging, resulting in new companies, while other existing companies are developing in-house payment capabilities so that elements of their payment value chain can be integrated into their business. A plethora of FinTechs have emerged, acting at the intersection of consumers, retailers and banks, offering third-party payment services (e.g. initiation of payments on behalf of customers). For example, payment service providers (PSPs) or banks such as Adyen are active in the market between consumers and retailers, offering a variety of services to facilitate online payments. In parallel, large technology companies (BigTech), such as Google, Apple and WeChat are also entering the world of payments. This results in innovation opportunities and optimization of services.

As well as enabling innovation, PSD2 aims to tackle fraud in online payments ([EC19]). The regulation strengthens the security requirements for electronic payments. “Strong customer authentication”, together with the dynamic linking of authentication codes to payment transaction details, is a key element of the PSD2 security requirements. PSD2 mandates the use of strong customer authentication, defined by the European Parliament and European Council as “an authentication based on the use of two or more elements categorised as knowledge (something only the user knows), possession (something only the user possesses) and inherence (something the user is) that are independent, in that the breach of one does not compromise the reliability of the others, and is designed in such a way as to protect the confidentiality of the authentication data” ([EPoC15]). In this way, PSD2 ensures that companies offering payment solutions must apply at least two-factor authentication of their customers. Dynamic linking requires that the authentication code for each transaction is unique and can only be used once. It is specific to the transaction amount and recipient, and both are made clear to the payer during authentication ([Saee18]). These elements aim to reduce the risks of cybercrime-based fraud.
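The idea behind dynamic linking can be illustrated with a small sketch (not the reference algorithm of any bank; the key, payee and code length are our assumptions): the authentication code is a keyed MAC over the exact amount, the recipient and a one-time nonce, so any tampering with the amount or payee invalidates the code, and the fresh nonce makes it single-use.

```python
import hashlib
import hmac
import secrets

# Hypothetical secret provisioned to the user's authentication device;
# the bank holds the same key and repeats the computation to verify.
DEVICE_KEY = b"key-provisioned-to-the-user-device"

def auth_code(amount: str, payee: str, nonce: str) -> str:
    """Authentication code dynamically linked to amount and recipient."""
    msg = f"{amount}|{payee}|{nonce}".encode()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()[:8]

nonce = secrets.token_hex(8)   # fresh per transaction: single use
code = auth_code("149.95", "NL91ABNA0417164300", nonce)

# Tampering with the amount or the payee yields a different code, so the
# bank's verification of the same computation would fail.
assert code != auth_code("1499.50", "NL91ABNA0417164300", nonce)
assert code != auth_code("149.95", "NL00EVIL0000000001", nonce)
```

Because the code depends on the transaction details themselves, a criminal who intercepts it cannot reuse it for a different payment, which is exactly what the PSD2 dynamic-linking requirement is after.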

PSD2, in this way, presents opportunities for innovation, optimization of customer experience and security improvements. However, the optimization of customer experience and the security of online payments ecosystem as a whole will be a challenging line to walk.

The importance of authentication in combating cyber payments fraud

Case study example illustrating the risks of SMS-based two-factor authentication

A cyber criminal calls the phone company in your name, claiming to be moving to a new phone that requires a smaller SIM-card. The helpdesk asks for a few personal details: name, date of birth, address. The cyber criminal makes sure to update “your” address while on the phone, and in no time receives “your” new SIM-card. Only they are not you; they are committing fraud…

How did this cyber criminal know the victim’s name, date of birth and address? Or their mobile provider? Cyber criminals use social engineering techniques to receive as much information on the person as possible. This cyber criminal could easily find the victim’s name and date of birth on social media. Having attained access to a set of email addresses and passwords hacked last year, the victim’s home address was also easily found within a forgotten email inbox, along with a vast number of other items, including their mobile service provider and even a copy of their passport. Now able to access someone else’s One-Time Passwords (OTPs) and Mobile Transaction Authorization Number (MTAN) sent through an SMS, via this new SIM-card, this cyber criminal was ready to execute fraud and interfere in the payment transactions of the victim. The data breach that occurs from such SIM-based attacks may result in the loss of a large amount of money or sensitive personal data.

This case study example highlights this growing trend and the dangers of SMS-based two-factor authentication. Whereas in the past this type of SIM-swap fraud was used to call premium numbers, today the risks are greatly increased as companies use codes via SMS to implement two-factor authentication. This type of fraud has cost British banks around £91 million (€108.2 million) over the last 5 years ([Wrig19]).

The safety and effectiveness of two-factor authentication are therefore highly dependent on the authentication solution deployed across use cases, i.e. not just in the banking and payments sector but also in other online services in which consumer data is stored.

Not all two-factor authentication methods are equally secure. SMS-based authentication, although notably the most commonly used type online, should be phased out. A weak link like SMS-based authentication can have a snowball effect on individual and company cyber security.
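A commonly recommended replacement for SMS codes is an authenticator app generating time-based one-time passwords. The minimal TOTP sketch below follows RFC 6238 (HMAC-SHA1, 30-second steps): the code is derived on the device from a shared secret and the clock, so there is no SMS message for a SIM-swapper to intercept.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HOTP over a time counter)."""
    counter = int((for_time if for_time is not None else time.time()) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Device and server share the secret; within the same 30-second window they
# independently compute the same code, with nothing sent over the air.
assert totp(b"shared-secret", for_time=59) == totp(b"shared-secret", for_time=31)
```

The shared secret never leaves the device after enrollment, which is the structural difference from SMS delivery: there is no message in transit to redirect via a swapped SIM.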

Optimizing security and the customer experience: a fine line

There is an ever-increasing demand from consumers for an excellent customer experience ([Wich19]). As companies try to improve all aspects of their customers’ experience and journey, it is increasingly important to seamlessly integrate actions that are not central to the product/service experience. As such, the seamless integration of payment transactions within a purchase or service is becoming the norm; an example is an electronic “wallet” and loyalty services offered in tandem. Uber, which recently received an electronic money license from DNB, is a prime example of this value chain integration ([FD19]). In the Uber Cash payments experience, no user action is required to pay for the service at the point of use, enabled by the Uber Cash wallet, which is an option alongside the seamless credit card option. The payment becomes invisible – and this fundamental shift in the perception of transactions requires a shift in the approach to security.

Not all customers are aware of the importance of the steps they need to take to ensure security. Often, customers are unaware of the high likelihood of a data breach when the correct cyber security measures are not used. Every extra step in a customer’s journey, including the payment and associated cyber security steps, creates an opportunity for the customer to abandon their purchase and leave the online retailer’s website – a missed sales conversion. The benefits of decreased cyber fraud risk are hardly visible to the consumer. This is the fine line that established banks, new players, e-retailers and the rest of the online ecosystem are walking today: secure strong customer authentication is an absolute must, but how can it fit seamlessly into the customer’s experience? Multi-factor authentication should be seen not as adding steps but as adding the right steps. For example, multi-factor authentication using face recognition adds a step that is more secure than an SMS code, but does not necessarily feel like an additional step from the consumer’s perspective.

Strangely enough, online authentication methods have changed little when compared to the rate of digital, technical and payment-related innovation over the last few years. Most people are still accustomed to logging in using a username and a password. However, according to research from Duo, due to a combination of an increase in (media) awareness and the introduction of regulation such as PSD2, two-factor authentication is increasingly used by consumers online. The most common form is a combination of a “knowledge” element (login details, i.e. something only you know) and a “possession” element (a code via SMS, i.e. something only you have) ([Duo19]). The “inherence” category, for example the use of biometric data such as a fingerprint, is beginning to become the standard, especially where mobile devices provide this functionality.

The continued implementation of SMS-based authentication for two-factor authentication in the online context is a prime example of a well-intentioned attempt to find the balance between facilitating a streamlined consumer journey and enhancing security. When consumers are already on their mobile phone or have it with them all the time, it seems easy to also receive an authentication code via their phone and SMS. However, as the emerging (cyber) threats highlight, including those in the aforementioned case study, SMS-based two-factor authentication does not imply guaranteed cyber security. These cyber threats are explained in more detail in the next section.

Cyber fraud and the payments ecosystem

Phishing, social engineering and Advanced Persistent Threats (APTs – a long term form of cyber fraud that uses multiple combined techniques) are a handful of possible cyber threats in the payment ecosystem. When it comes to SEPA transactions, the primary cause of fraud losses remains the use of impersonation and deception scams and attacks to compromise data, according to the European Payments Council. Increasingly, cyber criminals are targeting mobile devices along with IoT devices. Attacks can include, for example, Banking Trojans, a form of malware targeted at the victim’s online and/or mobile banking activities. Transactions can be tampered with or user authentication credentials could be stolen. Additionally, phishing for activation codes for mobile payment and authentication apps will likely increase over the coming period, using techniques such as social engineering.

Card payments are influenced by a trend towards more technically sophisticated fraud techniques such as APTs, characterized by their long-term, persistent and often long-undetected nature. At the same time, it is noticeable that cyber criminals are also increasingly employing simpler fraud types such as reporting a card lost or stolen to gain access. This is sometimes combined with social engineering, which is on the rise along with phishing attacks. The case study earlier shows an example of a combined approach, with SMS-based authentication as the central weak link.

Within businesses, these attacks often come in combination with malware. The Internet Crime Complaint Center (IC3) reported that Business Email Compromise (BEC)/Email Account Compromise (EAC) – a scam in which the email accounts of businesses or individuals are compromised through social engineering or computer intrusion techniques to conduct unauthorized transfers of funds – cost American businesses adjusted losses of over $1.7 billion (€1.56 billion) in 2019 ([IC319]).

The European Payments Council also highlights that social engineering attacks and phishing increasingly target not only consumers, retailers and small and medium-sized enterprises, but also employees, financial institutions and payment infrastructures. This leads to an increase in authorized push payment fraud, in which the payment is authorized by the victim.

This context highlights that cyber fraud in payments plays out across an ecosystem of online services within the payment and banking industry, and that the choice of authentication method has a significant impact on overall security.

Who is prepared and who is responsible?

On an optimistic note, larger banks seem to be better prepared. When it comes to cyber fraud and the deployment of multi-factor authentication, multiple larger consumer banks, such as ING, have initiated the phasing out of SMS-based authentication. Biometric options for multi-factor authentication, such as touch ID or face recognition, have also been introduced within mobile banking apps. This is a good example of banks taking advantage of innovations in mobile phones and security, introducing an inherence element (something you are) to their multi-factor authentication options. However, smaller banks, such as private banks and savings banks, traditionally lag behind in these developments and many still use SMS-based authentication. This, combined with their customers’ generally limited awareness of cyber risks, leaves smaller banks more vulnerable to the risks arising from phishing attacks and social engineering.

The case study example includes the phishing of customers to gain access to a mailbox that is configured without two-factor authentication. Even if the mailbox had been secured with SMS-based two-factor authentication, acquisition of the SIM card as in the case study could still have yielded access to the victim’s mailbox. Sensitive documents often reside in personal mailboxes, and organizations often use these types of documents, such as copies of passports or bank account statements, to verify the identity of a caller. The bypassing of verification controls resulting from too much focus on customer friendliness enables attackers to request duplicate SIM cards that are then sent to an alternative postal address. With the additional SIM card, the attacker can initiate SMS-based authentication and/or password resets in order to gain access to payment and online banking portals. From the control and process perspective of the mobile service provider this is all legitimate, as the verification of personal sensitive information was successful. This demonstrates that while some financial institutions are prepared, others outside this sector also have a role to play in security.

Currently, banks are mostly seen as the organizations responsible for cyber fraud, a result of their position as the keepers of consumers’ money. The current payment space, however, is an ecosystem by nature. Not only banks, but also e-retailers, email service providers, mobile service providers and others are active in this cross-sector ecosystem of security. The responsibility for the choice of authentication technique should be shared across all of them.

Our experience on solving cyber incidents in this context

Fraud in payment systems often results in the loss of large amounts of money and sensitive data. There is a need to understand how to deal with emerging technologies embedded in business processes and customer journeys; addressed well, cyber security challenges can even become business enablers. Having a multidisciplinary cyber response team can help in preparing for cyber security risks and dealing with them adequately when they materialize. In KPMG’s experience, forensic and cyber experts have complementary capabilities that can be deployed in tandem in response to cyber threats. Dealing with a cyber security breach is often a deeply technical topic that involves technical specialists and technical jargon. A functional approach is to bring it back down to essential management questions, in order to respond adequately to a cyber security breach and provide the insights that matter.


Figure 1. Three phases identified to adequately deal with a cyber security breach.

The first phase, readiness, focuses on adequate design and implementation, limiting the damage of breaches by planning effectively for possible cyber security incidents. One of the main aspects is to assess the current state of incident response capability. This helps address gaps so that processes can be improved, tools implemented, strategic partners selected and staff trained.

The second phase, response, is where the cyber response team comes into action. It is all about mitigating the impact of a cyber incident. Activities such as forensic data acquisition, system and log file analysis and incident management are to be executed in this phase.

In the third phase, post-breach, the root causes need to be identified and mitigations successfully applied. In regulated industries, or if the breach concerns personal data, the GDPR imposes legal obligations to report the cyber incident to the regulator. Strengthening an organization’s detection and response capability is an important aspect to consider in this regard; it may need to be strengthened temporarily, as cyber criminals may still have a presence in the organization’s network.

An outlook to the future of authentication mechanisms

Given the methods used by cyber criminals, not using multi-factor authentication is like closing the door of your home but not locking it. Crimes such as business email compromise, data theft and phishing can be prevented by switching to multi-factor authentication. Multi-factor authentication requires the dynamic linking of two pieces of unique personal data that establish someone’s identity. This is more robust than the SMS-based two-factor authentication discussed in this article.
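Dynamic linking, as required under PSD2, can be illustrated with a small hypothetical sketch: the authentication code is computed over the exact amount and payee, so any tampering with the transaction invalidates the code. The function and parameter names below are illustrative, not from any PSD2 library:

```python
import hashlib
import hmac


def dynamic_link_code(session_key: bytes, amount: str, payee_iban: str) -> str:
    """Bind the authentication code to the specific transaction details:
    a modified amount or payee yields a different code, so a tampered
    transaction no longer matches what the payer approved."""
    message = f"{amount}|{payee_iban}".encode()
    mac = hmac.new(session_key, message, hashlib.sha256).digest()
    # Truncate to an 8-digit code that can be displayed to the payer.
    return str(int.from_bytes(mac[:4], "big") % 10 ** 8).zfill(8)
```

A verifier recomputes the code over the transaction it is about to execute; if an attacker changes the payee after approval, the codes no longer match and the transfer is rejected.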

A move towards multi-modal biometrics is expected in the future. This combines identifiers, such as fingerprints, with behavioral traits, such as gait analysis, to produce high-reliability biometric matching. Eventually, silent authenticators will be incorporated: one will not have to actively present oneself, but will be automatically recognized by machine learning algorithms that combine multiple biometric factors and behavioral analysis. Such behavioral biometrics can help prevent identity theft, protecting the property of individuals. The combination of something only one person can “know” and typical behavior can help exclude criminals from people’s lives.

Awareness of innovative authentication methods across industries is crucial for cyber security.

Conclusion: innovation everywhere

Cyber criminals are innovating. The online payment sector is innovating. There should be a wider movement to innovate online authentication methods. It is advised to integrate new technologies in the cyber security sector into the complete journey of the consumer. Besides innovation in the services offered online, this also means innovation in the security that comes with these new services.

The embedding in PSD2 legislation of multi-factor authentication and the dynamic linking of authentication codes to the payment transaction details is promising. It provides the backbone for this next phase of authentication and the battle against cyber fraud in payments. Authentication presents itself as a bottleneck in this process, especially as society moves further and further into a fully digital life.

Innovation in payments must therefore go hand in hand with innovation in authentication. Implementing the core principles of multi-factor authentication must be placed in context of the entire customer journey: how does security impact the consumer (online) and how safe is the authentication choice that has been made in this context? To protect the payments industry, protection of consumers across platforms is needed, both within and outside the banking and payments industry. The first step is through the elimination of SMS-based authentication and the true implementation of multi-factor authentication, including the combinations of factors that are less vulnerable to fraud.


[Beta20] Betaalvereniging Nederland (2020, April 16). Veel meer phishing en bankpasfraude in 2019. Retrieved from:

[Duo19] Duo Labs (2019). State of the Auth. Retrieved from:

[EC19] European Commission (2019, September 13). Frequently Asked Questions: Making electronic payments and online banking safer and easier for consumers, Retrieved from:

[EPC19] European Payments Council (2019). 2019 Payment Threats and Fraud Trends Report. Retrieved from:

[EPoC15] European Parliament and of the Council (2015). Directive (EU) 2015/2366. Official Journal of the European Union. Retrieved from:

[FD19] FD (2019). Uber krijgt vergunning DNB voor uitvoeren elektronische betalingen. Retrieved from:

[IC319] Internet Crime Complaint Center (2019). Internet Crime Report. Federal Bureau of Investigation. Retrieved from:

[Saee18] Saeed, N. (2018, December 19). Understanding Dynamic Linking in PSD2. Twilio Blog. Retrieved from:

[Wich19] Wichers Hoeth, N. (2019). A digital walk to remember. KPMG. Retrieved from:

[Wrig19] Wright, M. & Horton, H. (2019, November 30). Bank Customers Lose £9.1 Million In Five Years To ‘Sim Swap’ Scams. The Telegraph. Retrieved from:

Pseudonymization under the GDPR

The General Data Protection Regulation (GDPR) names pseudonymization as a measure to appropriately protect data that can be traced back to a person. But how do you do that? How strong must the solution be? And what does applying it bring an organization? Does a lighter GDPR regime apply to handling data once it has been pseudonymized? Many organizations struggle with these questions. This article addresses them on the basis of a number of practical examples.


The GDPR requires technical and organizational measures to protect personal data appropriately, and names pseudonymization as a possible measure. But what is pseudonymization and what requirements does it have to meet? And what are the consequences for the usability of the data? In everyday usage, the terms pseudonymization, anonymization, de-identification, masking and coding are regularly used interchangeably, or combined into impressive-sounding terms such as ‘pseudo-anonymous’ or ‘doubly pseudonymized key-coded’ data. Such terms suggest that privacy protection is well taken care of; whether that is actually the case is, of course, the question. This article first describes what pseudonymization is and how it differs from anonymization. Two cases then illustrate the possibilities and limitations of pseudonymization. Finally, some promising developments in the field of privacy-enhancing measures are discussed.

Background and history

Driven in part by the GDPR, pseudonymization as a security measure for protecting personal data has taken off in recent years. Worldwide, a growing number of providers, such as Privacy Analytics (Canada) and Custodix (Belgium), offer solutions for pseudonymizing privacy-sensitive data. Google recently launched the beta of a Cloud Healthcare API for de-identifying sensitive data ([Goog]). Several service providers are also active in the Netherlands, including ZorgTTP and Viacryp. In addition, a growing number of publications, such as [ENIS19], describe best practices for the technical design and possible areas of application. In the context of the eID system and the Digital Government Act ([Over]), which will enter into force in mid-2020, the government has included a facility in the driving license ([Verh19b]) for providing data on the basis of polymorphic pseudonyms ([Verh19a]). Characteristic of this form of pseudonymization is that each recipient receives a different pseudonym for the same natural person, which greatly reduces the chance of the pseudonymization being broken. Pseudonymization has thus grown from a specialist and exotic application into an increasingly widely available and standardized security instrument.

What does the GDPR say about pseudonymization?

Before we look at the techniques, practical examples and developments in the field of pseudonymization, it is important to explore its legal anchoring.

Pseudonymous data can be traced back to a person. Both the GDPR and the former EU Article 29 Working Party (now the European Data Protection Board) regard pseudonymous data as data that can be traced back to a person ([EC14]). This observation matters because it is still sometimes claimed that pseudonymized data are not traceable. The Article 29 Working Party states, however, that pseudonymous data cannot as such be regarded as anonymous data. Additional measures are required to rule out, in particular, indirect traceability to natural persons. Pseudonymized data must therefore be regarded as identifying or identifiable data to which the GDPR applies.

Article 4(5) of the GDPR ([EU16b]) defines pseudonymization as:

the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person;

This definition shows that in a pseudonymous dataset:

  • personal data can no longer be attributed to specific data subjects without the use of additional information;
  • technical and organizational measures are required to prevent the pseudonymous data from being traced back, using additional information, to identified or identifiable natural persons.

In other words, when the pseudonyms are generated, the link between the identifying data of a natural person and the derived pseudonymous data must first be broken. Depositing a simple key file with another party can already achieve this, but cryptographic algorithms are usually used. Next, unauthorized re-identification through enrichment of the pseudonymized data with additional information must be prevented; the additional measures are generally aimed at preventing unauthorized access to and dissemination of the data. A good pseudonymization solution must adequately address both the generation of pseudonyms and the enrichment of pseudonymized data.
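The cryptographic generation step can be sketched with a keyed hash. The pseudonym is deterministic, so records about the same person remain linkable, but without the separately managed key the identifier cannot be recovered or recomputed. This is a minimal illustration, not a complete solution; the organizational measures described above remain essential:

```python
import hashlib
import hmac


def pseudonymize(secret_key: bytes, identifier: str) -> str:
    """Derive a pseudonym from an identifying value with HMAC-SHA256.
    Unlike a plain unkeyed hash, an attacker without the key cannot
    rebuild the identifier-to-pseudonym table by hashing candidates."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()
```

Keeping the key with a separate party is precisely the segregation of duties that the definition in Article 4(5) demands: whoever holds the pseudonymized data must not also hold the key.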

A DPIA as the starting point for designing a pseudonymization solution

The requirements the GDPR sets for pseudonymizing data processing can be met in several ways. It must be assessed case by case which combination of measures can be considered appropriate ([EC14]). Whether it should be possible to return to the identifying data, for example, is one of the questions to be asked here. This assessment is best performed in the form of a Data Protection Impact Assessment (DPIA), as required by Article 35 of the GDPR ([EU16b]). Based on the legal basis for the processing, the nature of the processing and the associated risks, a trade-off can be made between the intended level of detail of the data to be processed, the impact on the privacy of data subjects and the measures to mitigate the risks.

What does pseudonymization deliver?

Recital 28 of the GDPR states that applying pseudonymization to personal data can reduce the risks to data subjects and help controllers and processors meet their data protection obligations. However, this mainly concerns reducing direct traceability.

Indirect traceability

As to the extent to which data remain indirectly traceable after pseudonymization, the opinion on anonymization techniques ([EC14]) states that it must be examined whether re-identification through singling out, linkability and inference can reasonably be ruled out. It explicitly states that pseudonymization as such does not rule out indirect re-identification for any of these criteria.

What does the regulator say about pseudonymization?

Well before the publication of the [EC14] opinion and the introduction of the GDPR, the Dutch Data Protection Authority (then the College bescherming persoonsgegevens, CBP) had considered pseudonymization and the extent to which it limits traceability. The conditions the CBP formulated for pseudonymization ([CBP07]) already took into account both the direct and indirect traceability of the pseudonymized data. The CBP stated that:

When pseudonymization is applied, there is no processing of personal data if the following conditions are met:

  1. pseudonymization is applied (competently), with the first encryption taking place at the data provider;
  2. technical and organizational measures have been taken to prevent repeatability of the encryption (“replay attack”);
  3. the processed data are not indirectly identifying, and
  4. an independent expert assessment (audit) establishes in advance, and periodically thereafter, that the first three conditions are met.

A further principle is that the pseudonymization solution must be described clearly and completely in an actively published document, so that every data subject can verify what guarantees the chosen solution offers.

Do these requirements still apply under the GDPR?

In recent years it has repeatedly become apparent that with pseudonymization the challenge lies not so much in the initial design of the data processing as in its governance over a longer period. In practice, organizations find it challenging to re-examine the traceability of a dataset when new variables (data points) are added. Examples include the Diagnosis Treatment Combination Information System (DIS) processing by the Dutch Healthcare Authority ([AP16]) and the Routine Outcome Measurement (ROM) processing by Stichting Benchmark GGZ (SBG) ([AP19b]). In both cases, extensions of the dataset over time increased the indirect traceability of the data to such an extent that the data could no longer be considered reasonably non-traceable.

These rulings do not so much imply withdrawal of the earlier pseudonymization requirements as confirm that pseudonymization as such does not produce an anonymous dataset, as stated in opinion [EC14]. The criterion for determining whether personal data are being processed is not fundamentally different under the GDPR than under the Dutch Personal Data Protection Act (WBP): it must still be examined whether, taking into account the effort and means required, it is reasonably (im)possible to trace data back to a natural person, regardless of whether the data have been pseudonymized. In that sense, the requirements remain a useful starting point for determining whether processing falls inside or outside the scope of the GDPR. They can also help in assessing applications of pseudonymization. In that case the assessment is not aimed at establishing that traceability has been ruled out, but at reducing traceability to an acceptable level within a processing operation involving privacy-sensitive data. Reducing the risk requires attention to limiting both direct and indirect traceability.

The relationship between pseudonymization and anonymization

As explained in the previous section, pseudonymization as such does not produce anonymous data. Pseudonymization is one of several possible measures aimed at limiting traceability; applied in combination, these may result in anonymous data. That anonymity comes at a price, however: loss of distinguishing power in the dataset from the user’s perspective. The HIPAA Safe Harbor guideline for de-identifying medical data states that de-identification comes at the expense of the usability of the data.

According to [Bart14], this is the ‘Inconvenient Truth’ (see Figure 1): the ideal situation of optimal privacy protection on the one hand and optimal data value on the other is unattainable. When applying de-identification techniques, a trade-off must therefore always be made between the intended use and quality of the information on the one hand and privacy protection on the other. That trade-off may place the processing inside or outside the scope of the GDPR. For example, there may be a desire to use production data containing personal data for testing purposes because of the representativeness of the dataset. If no consent from the data subjects or another GDPR legal basis is available, the data must be anonymized. Anonymization, however, comes at the expense of representativeness: not all test cases may be executable, for instance because the postal code has been aggregated to a region code or the date of birth converted to an age class. There is also the risk that the measures taken limit indirect traceability insufficiently, so that the test data are still considered traceable.


Figure 1. Weighing privacy protection against data quality ([Bart14]).

Are anonymous data still achievable at all?

The foregoing shows that in practice the bar for anonymity is set so high that the ability to single out individuals in a dataset is equated with traceability of the data.

  1. A growing number of publications demonstrate that the ideal situation in Figure 1 is unattainable. Time and again it proves possible to re-identify individuals in seemingly anonymized datasets by enriching the datasets with additional data. Appealing examples include the study re-identifying the Netflix Prize dataset using public census data ([Nara08]) and the doctoral research of [Koot12], in which, in the Dutch context, seemingly non-traceable medical data could be re-identified through enrichment with public Statistics Netherlands (CBS) data. Recent publications such as [Mour18] and [Roch19] show that, with the increase in publicly available data, knowledge and technical means, ever fewer data points from the anonymized set are needed to re-identify individuals. According to [Cala19], the (unique) pattern of data points linked to a person can by itself lead to re-identification.
  2. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) now requires controllers and processors to demonstrably, correctly and actively apply advanced privacy-enhancing techniques such as k-anonymity ([Swee02]) ([AP19b]). Traceability must be ruled out absolutely rather than merely reasonably, based on the criteria of singling out, linkability and inference in accordance with [EC14]. In its decision on the objection [AP19a] against the SBG ruling mentioned earlier in this article, the AP indicates that a comparison with the Breyer judgment ([EU16a]), which advocates a less absolute standard for non-traceability, does not hold for datasets in which a large number of other data points are linked to the pseudonyms.
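The k-anonymity property mentioned above can be checked in a few lines: a dataset is k-anonymous when every combination of quasi-identifier values (such as region code and age class) occurs at least k times. A minimal sketch with illustrative field names:

```python
from collections import Counter


def k_anonymity(records, quasi_identifiers):
    """Return the k of a dataset: the size of the smallest group of records
    sharing the same combination of quasi-identifier values. A unique
    combination (k = 1) singles out an individual."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())


rows = [
    {"region": "30xx", "age_class": "40-49", "pseudonym": "a1"},
    {"region": "30xx", "age_class": "40-49", "pseudonym": "b2"},
    {"region": "11xx", "age_class": "20-29", "pseudonym": "c3"},
]
# The third record has a unique (region, age_class) combination, so k = 1:
# it can be singled out despite the pseudonym.
```

This also illustrates the governance problem described above: adding one new variable to the dataset can silently drive k back down to 1.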

An example of anonymous data

Although anonymizing data is proving increasingly difficult, there are examples of processing anonymous data. Statistics Netherlands (CBS) has been designated in the Statistics Netherlands Act ([Over03]) as the organization for producing statistics for policy and research, and can be regarded as the benchmark in the Netherlands when it comes to applying data anonymization techniques. For this purpose, Statistical Disclosure Control (SDC) software developed with sister organizations in a European context, such as µ-argus and Tau-argus, is used ([CBS20]). These programs can limit the degree of traceability in datasets to be published to an acceptable minimum. Their use is not trivial, however: statistical knowledge and specific training in the software are required.

Pseudonymization: techniques and models

Now that the relationship between anonymization and pseudonymization has become clear, an example of pseudonymization in the Netherlands follows. It is important first to briefly consider the techniques and operating models. Pseudonymization can be applied in various forms. When choosing a specific implementation, determine whether it:

  1. must have an open or a closed character;
  2. must be reversible or irreversible;
  3. requires a one-off or a structural conversion of data;
  4. will be applied for one specific organization or for several;
  5. requires the possibility of conversions between separate pseudonymous subsets (for example in multicenter studies);
  6. must be carried out in-house or with the help of an external service provider.

The outcome of this assessment can differ from case to case and can lead to the use of different techniques. The primary goal of any solution must be to prevent unauthorized breaking of the pseudonymization. Decisive for achieving that goal is how (cryptographic) key management and segregation of duties are organized. The segregation of duties must be set up so that each actor is forced to have access to only one of the following elements:

  1. the identifying data (ID data in Figure 2);
  2. the cryptographic key material;
  3. the pseudonymized data.

Only if these elements are adequately separated can unauthorized breaking of the pseudonymization be effectively prevented.


Figure 2. Segregation of duties.

Standards and practice guidelines

For a long time, ‘ISO 25237 – pseudonymization techniques’ was one of the few standards in the field of pseudonymization. ‘NEN 7524 – pseudonymization services’ and ‘ISO 20889 – de-identification techniques’ are now also available. In addition, more and more guidelines are appearing, such as those of ENISA ([ENIS19]) and the Personal Data Protection Commission Singapore ([PDPC18]). Sector-specific guidelines for the practical application of de-identification also exist, such as the IHE Handbook De-identification ([IHE14]). This makes it increasingly feasible for organizations to set up a sound solution.

Case: pseudonimisering voor de risicoverevening

Zorgverzekeraars hebben de opdracht om te concurreren op prijs en kwaliteit. Wegens de in Nederland geldende acceptatieplicht is de verwachte schadelast per verzekeraar niet gelijk. Het Zorginstituut berekent daarom jaarlijks per zorgverzekeraar de vereveningsbijdrage per zorgverzekeraar op basis van de Regeling Risicoverevening ([Over18]) bij de Zorgverzekeringswet. Daarmee worden verzekeraars gecompenseerd voor onevenredige schadelast in de verzekerdenpopulatie en wordt een ‘level playing field’ gecreëerd waarbinnen de verzekeraars met elkaar kunnen concurreren. Om deze berekening te kunnen uitvoeren is een grote hoeveelheid (gevoelige) gegevens benodigd. Figuur 3 geeft een overzicht van de organisaties die een rol hebben bij de gegevensverwerking in het kader van de risicoverevening. Jaarlijks worden honderden miljoenen datarecords verwerkt binnen het stelsel. De verwerking kent een grondslag in de Zorgverzekeringswet. Hierin is expliciet opgenomen dat voor het doel van de risicoverevening de verwerking van medische persoonsgegevens en het burgerservicenummer noodzakelijk is.


Figuur 3. Stelsel Risicoverevening. [Klik op de afbeelding voor een grotere afbeelding]

Aan de linkerzijde staan de organisaties die input leveren voor het vereveningsmodel. Via pseudonimiseringssoftware worden jaarlijks gegevens aangeleverd aan enerzijds het Zorginstituut voor het berekenen van de vereveningsbijdrage; anderzijds worden gegevens geleverd aan jaarlijks te contracteren onderzoeksbureaus die in opdracht van het ministerie van Volksgezondheid, Welzijn en Sport belast zijn met onderhoud en doorontwikkeling van het vereveningsmodel. Na gebruik van de data worden deze eerst in een kortetermijnarchief geplaatst. Tot slot wordt het CBS voorzien van de data voor statistische doeleinden. De groene vlakken laten zien hoe een burgerservicenummer (BSN) wordt omgezet naar verschillende pseudoniemen voor verschillende afnemers.

Because the Dutch Data Protection Authority (then the College Bescherming Persoonsgegevens, [CBP07]) designated this data processing as one of the most sensitive in the Netherlands, extensive measures have been taken to protect the privacy of the data subjects. In addition to irreversible pseudonymization of the directly identifying data, generalization is applied to the indirectly identifiable data, in the form of aggregation and coding of values into classes. Figure 4 describes the functional chain along which data is pseudonymized. A local pseudonymization module reads the supplied source file and, after validating the data, first separates the directly and the indirectly identifying data. Each part is then processed: pre-pseudonymization and generalization, respectively. The resulting pseudo-ID and data part are then submitted to the pseudonymization service provider for final pseudonymization. This provider acts as a Trusted Third Party which, for the purpose of the aforementioned separation of duties, is only given access to the pseudo-ID part; the data part is encrypted for the end recipient using PKI. After final pseudonymization, both parts are retrieved by the end recipient through a receiving module, where they are recombined into the resulting file. The net effect of this operation is that the link between the source data and its pseudonymized derivative is effectively broken: none of the parties can break the chain without colluding with one of the others.
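The two-stage chain described above can be sketched as follows. This is a minimal illustration, not the published method: the keys, field names and HMAC construction are assumptions, and the PKI encryption of the data part is only indicated in a comment.

```python
import hashlib
import hmac

LOCAL_KEY = b"source-org-key"  # held only by the supplying organization
TTP_KEY = b"ttp-key"           # held only by the Trusted Third Party

def pre_pseudonymize(bsn: str) -> str:
    # Stage 1 (local module): the source never releases the raw BSN.
    return hmac.new(LOCAL_KEY, bsn.encode(), hashlib.sha256).hexdigest()

def final_pseudonymize(pre_pseudo: str) -> str:
    # Stage 2 (TTP): sees only the pre-pseudonym, never the BSN or the payload.
    return hmac.new(TTP_KEY, pre_pseudo.encode(), hashlib.sha256).hexdigest()

record = {"bsn": "123456789", "payload": {"age_class": "40-45", "region": "NL-West"}}
id_part = pre_pseudonymize(record["bsn"])  # sent to the TTP
data_part = record["payload"]              # in practice PKI-encrypted for the end recipient
final_id = final_pseudonymize(id_part)
# The recipient receives (final_id, data_part); neither the source nor the TTP
# alone can map final_id back to the BSN.
```

Because each party only ever holds one of the two keys, reversing the chain requires collusion, which mirrors the separation of duties in the operating model.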


Figure 4. Operating model for irreversible pseudonymization. [Click on the image for a larger image]


The main success factor of the pseudonymization for risk equalization is that both the technical and the organizational measures receive regular attention. The method description of the pseudonymization algorithm is public and provides key-management functions with which keys and the encryption standards in use can be replaced. Interoperability is also covered by the method, making it possible to transfer data to other providers that support it. To ensure that the data is only accessible for legitimate purposes and by authorized users, the ministry has developed a data governance policy. The policy provides for measures and agreements on storage, transport, access and distribution of the data, and is evaluated annually. As part of that evaluation, every transaction in the system is checked for changes in the specifications and for the impact of those changes on the identifiability of the data.


Several promising developments are under way that hold the promise of reconciling large-scale use of sensitive data with a privacy-friendly design.

Synthetic data

With synthetic data, a derivative of a (real-world) dataset is created while preserving its statistical properties. The advantage is that there is no traceability to individuals in the set, because an entirely new dataset with fictitious persons is generated rather than a derivative of the original set. The downside is that the technique is still mainly applied in the context of scientific research, is not yet mature, and does not fit every question. Extreme values in the dataset (outliers), for example, may be lost, while in fraud detection those are exactly what you want to see. Generating representative synthetic data requires a real-world counterpart on which the algorithm that generates the synthetic data is trained. The risk that this original set becomes traceable through distribution and unauthorized enrichment of the data can, however, be mitigated by publishing only a synthetic set. Composing the original and/or the training set from various data sources could be entrusted to a Trusted Third Party; in practice, that role could be fulfilled by organizations such as Statistics Netherlands (CBS), but also by private parties.
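A minimal illustration of the idea, assuming a simple one-dimensional dataset and a fitted normal distribution (real synthetic-data generators model far richer structure): the synthetic set reproduces the aggregate statistics but not the individual records.

```python
import random
import statistics

random.seed(42)
real = [random.gauss(50, 10) for _ in range(10_000)]  # stand-in for a real-world dataset

# Fit simple distribution parameters on the real data...
mu, sigma = statistics.mean(real), statistics.stdev(real)
# ...and generate an entirely new, fictitious population from them.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

# Statistical properties are approximately preserved...
assert abs(statistics.mean(synthetic) - mu) < 1.0
# ...but individual extreme values (outliers) from the real data are not reproduced,
# which is exactly why synthetic data can be unsuitable for e.g. fraud detection.
```

Since every synthetic record is freshly sampled, no record corresponds to a real person, but the fidelity of the set is limited by how well the fitted model captures the original distribution.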

Secure Multi-Party Computation

Secure Multi-Party Computation is a collection of techniques with which data from different sources can be combined and processed in encrypted form. Only the result at population level is stored, for example in the form of regression coefficients; no permanent combined dataset is created. The combined set is built up using Shamir secret sharing at a Trusted Third Party that has concluded data processing agreements with the supplying data sources. Because the combined set exists only temporarily, in memory, and moreover in encrypted form, no identifiable data is used outside the mandate under which it was collected: the use remains compatible with the original purpose.
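The secret-sharing principle can be illustrated with a toy Shamir scheme computing a secure sum; the prime, the parameters and the values are illustrative, and a production protocol would distribute the shares across independent parties rather than keep them in one process.

```python
import random

P = 2_147_483_647  # a prime field large enough for the toy values below

def share(secret: int, n: int = 3, t: int = 2) -> list[tuple[int, int]]:
    """Split a secret into n Shamir shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Three data sources each share their sensitive value; the share-holders add the
# shares pointwise, so only the population-level sum is ever reconstructed.
values = [120, 340, 95]
all_shares = [share(v) for v in values]
summed = [(x, sum(s[k][1] for s in all_shares) % P)
          for k, (x, _) in enumerate(all_shares[0])]
assert reconstruct(summed[:2]) == sum(values)  # t = 2 shares suffice
```

No individual value is ever reconstructed: each party only sees random-looking shares, and only the aggregate (here a sum; in practice e.g. regression coefficients) comes out.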


Pseudonymization as such does not yield anonymous data. Organizations should ask themselves to what extent the holy grail of anonymous yet meaningful data is achievable. The bar for anonymity is high: identifiability must be ruled out absolutely against the criteria of singling out, linkability and inference. In practice, a trade-off has to be made between privacy protection and the intended use of the data, which means that processing operations must stay within the boundaries of the GDPR. Pseudonymization can be a powerful means of reducing the risk of identifiability within a dataset, allowing the controller and processor(s) to demonstrably meet the requirement to apply appropriate technical and organizational measures.

A growing number of standards and guidelines is available for pseudonymizing data. These can help to achieve a robust design of pseudonymization within a processing operation. The most important aspects to cover are separation of duties, cryptographic key management, and a transparent description of the process followed and the agreements that apply to it.

In the Netherlands, risk equalization is an example of large-scale pseudonymization of sensitive data in a system involving many actors. Meanwhile, the government is working on scaling up in the context of eID and the Digital Government Act (Wet digitale overheid).

Secure Multi-Party Computation and synthetic data are techniques under development that appear to offer a valuable addition in the continuous trade-off between the intended use of data and the protection of the privacy of those to whom the data relates.


[AP16] Autoriteit Persoonsgegevens (2016). AP: NZa mag diagnosegegevens uit DIS beperkt verstrekken. Retrieved from:

[AP19a] Autoriteit Persoonsgegevens (2019). Beslissing op bezwaar. Retrieved from:

[AP19b] Autoriteit Persoonsgegevens (2019). Rapport naar aanleiding van onderzoek gegevensverwerking SBG. Retrieved from:

[Bart14] Barth Jones, B. & Janisse, J. (2014). Challenges Associated with Data-Sharing: HIPAA De-identification. Retrieved from:

[Cala19] Calacci, D. et al. (2019). The tradeoff between the utility and risk of location data and implications for public good. Retrieved from:

[CBP07] College bescherming persoonsgegevens (2007). Pseudonimisering risicoverevening. Retrieved from:

[CBS20] Centraal Bureau voor de Statistiek (2020). About sdcTools: Tools for Statistical Disclosure Control. Retrieved from:

[EC14] European Commission: Article 29 Data Protection Working Party (2014). Opinion 05/2014 on Anonymisation Techniques. Brussels: WP29. Retrieved from:

[ENIS19] ENISA (2019). Pseudonymisation techniques and best practices: Recommendations on shaping technology according to data protection and privacy provisions.

[EU16a] European Union (2016). CJEU judgment: Patrick Breyer v. Bundesrepublik Deutschland, C-582/14, 19 October 2016, ECLI:EU:C:2016:779.

[EU16b] European Union (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Brussels. Retrieved from:

[Goog] Google (n.d.). Cloud Healthcare API for de-identifying sensitive data. Retrieved from:

[IHE14] IHE IT Infrastructure Technical Committee (2014). IHE IT Infrastructure Handbook De-Identification. Retrieved from:

[Koot12] Koot, M.R. (2012). Concept of k-anonymity in PhD thesis "Measuring and predicting anonymity". Retrieved from:

[Mour18] Mourby, M. et al. (2018). Are 'pseudonymised' data always personal data? Implications of the GDPR for administrative data research in the UK. Computer Law & Security Review, 34.

[Nara08] Narayanan, A. & Shmatikov, V. (2008). Robust de-anonymization of large sparse datasets. In: Proceedings of the 2008 IEEE Symposium on Security and Privacy (pp. 111-125). Washington, DC: IEEE Computer Society. Retrieved from:

[Over] (n.d.). Wet digitale overheid. Retrieved from:

[Over03] (2003, November 20). Wet op het Centraal bureau voor de statistiek. Retrieved February 16, 2020, from:

[Over18] (2018, September 24). Regeling Risicoverevening, Zorgverzekeringswet. Retrieved February 16, 2020, from:

[PDPC18] Personal Data Protection Commission Singapore (2018). Guide to Basic Data Anonymisation Techniques.

[Roch19] Rocher, L. et al. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10, 3069. Retrieved from:

[Swee02] Sweeney, L. (2002). k-anonymity: a model for protecting privacy. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems, 10(5), pp. 557-570.

[Verh19a] Verheul, E. (2019). The polymorphic eID scheme – combining federative authentication and privacy. Logius. Retrieved from:

[Verh19b] Verheul, E. (2019). Toepassing privacy enhancing technology in het Nederlandse eID. IB Magazine, 6, 2019. Retrieved from:

The new security nexus

The geopolitical context of security is changing rapidly – as China rises, the US becomes more protectionist, and the EU struggles to flex its geopolitical muscle – while at the same time the technological base of virtually every form of security infrastructure is going through revolutionary, and often little understood, innovations – from chip manufacturing to AI to 5G. We now stand at the crossroads of managing this shift. How can our open society remain strong in the face of foes and its own fragility, without sacrificing – and instead leveraging – its openness?


We are on the verge of large, global changes. Technological breakthroughs of the fourth industrial revolution are affecting our daily lives. Meanwhile, Western society is being rivalled by a strong competitor in Asia, and especially China. How does Europe avoid being squashed between America First and the Chinese Dream? How do we leverage the innovations of Artificial Intelligence and the Internet of Things without getting bogged down in the geopolitics of 5G or of ASML exporting EUV machines to China?

This article discusses the new security nexus of security, geopolitics and digitalization. As we will show, these topics are heavily intertwined. China's New Generation Artificial Intelligence Development Plan offers an outlook for the People's Republic to become the global leader in Artificial Intelligence (AI) by 2030, and its Made in China 2025 strategy aims to make China the new global manufacturing, cyber, science and technology innovation superpower. History teaches us that those who set the technical standards will dominate the world (see box "Developing technology is key in the power shifts"). Add into the mix China's Belt and Road Initiative, which encompasses over 1 trillion USD in investments (see box "Belt and Road Initiative (BRI), the facts") and establishes China's influence on the Eurasian continent by investing in all kinds of physical trade infrastructure, such as dry ports, shipping routes and railways. We can conclude that China plans to re-establish both a digital and a physical Silk Road.

Developing technology is key in the power shifts

Throughout history, those who defined the technical standards ruled the world ([Wijk19]). In the 19th century the United Kingdom set the technical standards, and in the 20th century the United States did. Even the Netherlands ruled the seas by virtue of its unparalleled skills in maritime engineering during the 17th century. China is currently taking over this role with its investments in Artificial Intelligence and 5G. China plans to be the leader in AI by 2030 through its New Generation Artificial Intelligence Development Plan ([SCoC17]).

Meanwhile, this hegemonic shift from West to East motivates other countries to join the power game. This is especially apparent in the cyber security domain, where countries such as Russia, Iran, Israel, North Korea and of course America and China are increasingly active. Cyber-attacks driven by nation states and organized crime are no longer aimed only at financial gain, but increasingly at gaining political influence, acquiring intelligence, preparing for hybrid warfare or disrupting enemy infrastructure. Dutch companies and governments are well prepared to prevent cyber-attacks, but they don't dare to face the inevitable truth that they will be hacked, and they are ill-prepared to detect and respond to such attacks.

The latter is particularly worrying at a time when our societal dependence on digital infrastructure is rapidly increasing. Consider examples such as AI managing factories based on sensor data, autonomous or connected cars based on 5G and image recognition, algorithmic profiling of civilians, and social credit systems based on facial recognition. These examples will change our lives drastically, if not today then tomorrow. How do we manage trust and security in these complicated digital infrastructures that – upon disruption – may lead to social disturbance?

We now stand at the crossroads of managing this transition. How can an open society remain strong in the face of foes and its own fragility, without sacrificing – and instead leveraging – its openness?

We are convinced that hegemonic and technological changes present us with threats and expose our weaknesses, but we are moreover convinced that these changes offer us opportunities to build on our strengths as a Dutch and European society and to benefit from the uncertainty these big changes cause. Common wisdom has it that you can't solve new problems with old methods, so we will need new strategic leadership to effectively deal with the new security nexus. In fact, every important historical moment is marked by these sorts of shifts to new social models, which expand in velocity and complexity well past what the current ways of thinking can handle. Our predicament is no exception, and usually the source of the greatest historical disasters is that so few people at the time either recognize or understand the shift ([Ramo09]). If any state has learned this lesson, it is China: by failing to understand the new networks, technologies and norms of global security of the mid-19th century, imperial China – which had been the largest economy in the world for centuries and an untouchable regional hegemon for even longer – was largely destroyed by relatively small European powers in a matter of years. We are, once again, on the verge of such a shift.

But now the cards have changed. For the first time since the fall of the Berlin Wall, Western society is faced with a successful rival model: China. As the European Commission said in its recently published China strategy ([EC19]): “China is, simultaneously, […] a cooperation partner […], a negotiating partner […], an economic competitor in the pursuit of technological leadership, and a systemic rival promoting alternative models of governance.” Moreover, the US has shifted its geopolitical course dramatically. President Donald Trump has already said he does not intend to pick up Europe’s security bill for much longer. Between America First and the Chinese Dream, Europe seems to be stuck in the middle.

Belt and Road Initiative (BRI), the facts

Only six years after the BRI was launched, it encompasses over 1 trillion US dollars in investments. 1 trillion US dollars roughly equals the Dutch GDP, the market value of Apple, Amazon or Microsoft, twice the EU Connectivity Strategy, and ten times the US Marshall Plan.

China is investing in all kinds of trade infrastructure, such as dry ports, shipping routes and railways. It does so in more than 60 countries, on every continent, representing over two-thirds of the world population and covering land, sea and even cyberspace.

Most of China’s investments go to Western Europe, but since the BRI, Central, Eastern and Southern Europe have been getting a lot of attention from China. Examples of investments are: financing a 1.1 billion US dollar railway between Budapest and Belgrade; buying a major stake in the Port of Piraeus; signing up Italy as a BRI partner, the first G7 member to join; and setting up the 16+1 Platform – a diplomatic forum to engage Central and Eastern European states outside of the EU’s reach.

Cycles of convergence

Haroon Sheikh, researcher at the Netherlands Scientific Council for Government Policy (WRR), puts the geopolitical and technological aspects of disruptive change into perspective by pointing out the cycles in hegemonic, technological and socio-cultural change (Figure 1, [Shei19]). These cycles tend to follow a pattern:

  • every 20 years, a new generation changes society by pushing new cultural and social values;
  • every 40 years, the workings of our societies and markets change fundamentally by virtue of technological breakthroughs;
  • every 80 years, a new hegemonic power takes the lead in the geopolitical world order.

The exciting thing is that we have entered an era in which all three disruptive cycles are in a phase of transition simultaneously.


Figure 1. Convergence of cycles accelerates change ([Shei19]). [Click on the image for a larger image]

This causes rapid, large and unpredictable changes. The geopolitical context of security is shifting as China rises, the US becomes more protectionist and the EU struggles to flex its geopolitical muscle, while the technological base of virtually every form of security infrastructure goes through revolutionary, often little understood, innovations. When we look at the current controversy regarding 5G and Huawei, we see all three cycles shifting right under our noses. The US and China are aware that whoever owns the fourth industrial revolution will probably dominate global networks of power for decades to come, and will be able to push socio-cultural norms regarding freedom of information, privacy and security.

Technological cycle

To illustrate the technology cycle, we explore three perspectives on the changing world: hyper-connected ecosystems; the blurring lines between the physical and digital worlds; and the impact of algorithms and AI.

Perspective #1 From splendid isolation to hyper-connected ecosystems

Companies now operate in a complex world of hyper-connected ecosystems. Competitive advantage is often no longer based on the ownership of certain assets. The source of differentiation – and thereby economic value – comes rather from having access to those assets, through a solid, strategic position in hyper-connected ecosystems. Boundaries between organizations are fuzzy, supply chains are integrated, and even innovation is often a joint process.

The positive effect is that a range of new innovations has unfolded. In this hyper-connected world, however, companies have become far more dependent on their partners, in nearly every aspect of their processes.

Perspective #2 Physical = digital

In a nutshell: the fourth industrial revolution is characterized by technologies that start blurring the lines between the physical, digital and biological spheres. The Internet of Things (IoT) takes center stage in this fourth wave. Many objects, ranging from cars to buildings and from watches to thermostats, are now connected 24/7. The IoT grows at an exponential pace and leads companies into a new reality of massive opportunities. 5G is at the core of this exponential growth, allowing all sorts of devices to connect to the internet at low cost in high-density areas.

Perspective #3 Algorithms guide our decisions

Artificial Intelligence is a game changer in many ways and brings innovation to many domains. It accelerates the fourth industrial revolution, and many companies feel the urge to jump on the bandwagon. This is understandable, as the "winner takes all" effect may be strong in this domain. First movers have a strong advantage because they feed their AI systems with sample data sooner than late joiners, who therefore have to go through the full learning cycle themselves. According to VNO-NCW ([VNO18]), first movers have the advantage of providing better services sooner, making them the preferred choice over late joiners.

The technological antagonist

All three perspectives suffer from the same antagonist: cyber security. Many leaders in both the private and the public sector fully understand the importance of cyber security for their business outcomes and objectives. The stakes are high, not only in terms of the risk of interruption of vital services following attacks by malicious groups, but also in terms of digital espionage and theft of intellectual property. Experts estimate ([Lee18]) that commercial espionage may cost the EU up to 55 billion euros in economic growth and up to 289,000 jobs. According to a global survey by Harvey Nash and KPMG ([Harv19]) of over 3,500 Chief Information Officers (CIOs) in over 100 countries, 32% experienced a major cyber-attack in the past two years (up from 22% in 2014).


Figure 2. Number of CIOs that reported experiencing a major cyber-attack in the past two years ([Harv19]). [Click on the image for a larger image]

The motives behind these cyber-attacks vary. Some attacks aim for financial gain. The Democratic People's Republic of Korea (North Korea) has, according to the United Nations Security Council ([UNSC19]), gained over 2 billion US dollars through cyber-attacks on the financial industry to fund its Weapons of Mass Destruction programs.

Another category involves disrupting the enemy. We have for instance witnessed how Iran attacked the Saudi Arabian Oil Company (Saudi Aramco) in 2012, wiping the hard disks of 30,000 computers ([Bron13]). Another example is Stuxnet, a virus that aimed to disrupt the Iranian nuclear program in Natanz ([Zett14]).

These cyber weapons can also be a means to gain political influence. Hackers of the Russian foreign intelligence service SVR (known as Cozy Bear) have been accused of attacking the United States Democratic National Committee in 2015 in order to influence the 2016 elections.

Closely related to this is gaining intelligence. Edward Snowden, a former NSA contractor, and the website WikiLeaks have uncovered many intelligence operations by the United States and the United Kingdom, including spying on the Belgian telco Belgacom and on German Chancellor Merkel.

Last but not least, cyber weapons can be deployed for hybrid warfare: conflicts between states that take place largely below the legal threshold of open armed conflict, with the integrated use of means and actors aimed at achieving certain strategic goals ([Grap19]). These means frequently include cyber weapons or fake news ([ANV18]). For example, Dutch military intelligence has, according to journalist Huib Modderkolk ([Modd19]), supplied the United States with telecom intelligence from a Dutch navy vessel equipped with specialized NSA equipment. This intelligence is used to conduct targeted killings based on the location of SIM cards, blending conventional weapons with cyber intelligence.

The face of (cyber) security is changing

In the traditional perspective on security, one knew the players and understood their relationships with other parties. In the digital era, this has changed, and the lines are blurring along three dimensions:

  • Coalitions are multidimensional: friends may be part-time enemies
    • Israel has been accused of deploying very sophisticated malware against Swiss hotels hosting the P5+1 negotiations on the Iranian nuclear deal between the US, UK, Germany, France, Russia, China and the EU with Iran ([Gibb15]).
    • Recently, Israel has been accused by The Guardian ([Holm19]) of deploying sophisticated devices near the White House to spy on cellular networks likely used by Trump and his staff.
  • Private morphs into public: large corporates may have even more influence than governments
    • Collaborations between state and non-state actors on cyber espionage are common practice, especially in China and Russia. However, the United States has also been caught mixing private and public matters: the NSA's Echelon program was used to spy on Airbus' negotiations with Saudi Arabia, a contract eventually won by Boeing ([EP14]).
    • Another important factor is the sheer power that some large tech companies have because of the size of their customer base.
  • Breaches and espionage may be untraceable for a long time
    • One of the differences between physical and digital security is that in the latter case there often is no "smoking gun": attackers can penetrate systems and infrastructures unnoticed.
    • Preparation for cyber warfare may require pre-emptively breaking into the networks of potential adversaries: to execute certain types of attacks, you need to already be inside the enemy network.

Exemplary case study

A compelling case study in which all of these factors come together is a real-life event that closely mirrors the James Bond classic Tomorrow Never Dies. In the movie, media mogul Elliot Carver lures the United Kingdom and China into war by sending fake GPS signals to a British navy ship, which believes it is in international waters but has actually drifted into Chinese waters.

In the summer of 2019, Iran very likely (though unproven) lured a British oil tanker into Iranian waters by sending fake GPS signals, allowing the Iranian military to capture the tanker easily without leaving its own waters. This happened two weeks after British forces had captured an Iranian oil tanker near Gibraltar that was accused of violating sanctions on Syria ([Hugh19]).

This disruption of technology, likely carried out with the help of Russian technicians, was actively deployed in the geopolitical tension between Iran and the United Kingdom. In line with all hybrid forms of warfare, there is no compelling proof that Iran actually tampered with these GPS signals, and there likely never will be.

How to respond to this new reality

Companies in the Netherlands and the European Union will likely experience the effects of this new reality in one way or another – whether as collateral damage of an attack on certain industries or on economies in general, by being (overly) regulated by authorities, or by focusing too much on adopting new technologies rather than securing them.

We present four predictions and response strategies that will be important for 2020.

Prediction #1 The speed of technology adoption is challenged by new attack surfaces

The buzz around new technologies (cloud, AI, blockchain, IIoT and the like) has resulted in rapid adoption in businesses. An increasing number of large corporates implement these technologies to stay at the forefront of their industry and avoid being disrupted. However, many of these technologies dig holes in the classic technological wall around the corporate castle (the old-fashioned "Fort Knox" doctrine). The Chief Information Officer is losing grip on data and information in his or her ecosystem, while uncertainty marches in with IIoT (sensor) equipment and AI/ML algorithms. Lately, these moves are orchestrated by the newly introduced Chief Digital Officer, who has often surpassed the CIO in the corporate hierarchy.

Let us be clear: we don't necessarily see a risk in lowering the castle drawbridge by introducing these technologies. They can be well guarded when the classical security and control regime is part of the project adopting them. It is key to involve technology and security experts from the start (e.g. business case development, business readiness assessment and tool selection), through implementation (e.g. security configuration, security testing and security control implementation), to the finish (e.g. post-implementation assessment, continuous security control monitoring and periodic security assessments).

Prediction #2 Increasing debate over foreign investments and sales

There is increasing national and international interest in foreign investments in technology companies. European technology companies will be under scrutiny both for direct investments and for investments in their key suppliers or clients. International treaties and arrangements will increasingly be exploited for geopolitical power play, especially between the US and China (see the ASML case mentioned in the introduction). Furthermore, we will see a call for supply chain transparency, where manufacturing and high-tech companies are expected to provide insight into, or even assurance over, their full supply chain.

For now, the Netherlands has taken a relatively neutral stance in the coming technology decoupling between the US and China. The European Union seems to move towards Member State sovereignty on this topic, leaving each country to decide for itself in, for example, the debate over Huawei. However, we strongly encourage Dutch corporates to start constructing scenarios relevant to their industry: what if a Chinese investor is interested in taking over a (major) part of our company; what if my key supplier receives large investments from Russia or is acquired outright by a foreign state-funded investment vehicle; what if my key supplier moves its business to countries heavily invested in the Chinese Belt and Road Initiative; what if the authorities halt my exports to Iran, China or Russia?

Prediction #3 Rise of online borders

Large technology companies (usually based in the US) have been under scrutiny from the European Union for a while. The recent announcement of the EU data strategy and the policy options for AI (dated 19 February 2020) provides a lot of clarity about what is to come. The EU will regulate "high-risk" AI applications, scrutinize the market power of large digital platform companies and increase EU data protection efforts, while further stimulating the development and deployment of lower-risk AI applications. For Dutch and EU corporates, this means increased pressure for in-country or in-EU data processing, potential exclusion of overseas technologies, and strong regulation of privacy-invasive AI applications such as facial recognition – all of which will make working across borders (especially outside the EU) more difficult.

While there is no consensus yet on the EU's AI strategy (which is now up for consultation with European citizens, Member States and other stakeholders), large corporates need to closely follow and interpret the work published by the EU that is relevant for their industry. Most of the published policy options for AI are not new and build on the work of the High-Level Expert Group on AI (HLEG AI); their publications can be very helpful for scenario planning. We especially encourage corporates with an EU-only presence to look closely at their key (IT) providers and identify potentially problematic providers or services. With increased investments from Member States and the EU in local AI development and deployment (expected to approach 20 billion euros in 2020), we will likely soon see EU competitors to established US companies emerge.

Prediction #4 Increasing blend between cyber-criminal gangs and state-sponsored hackers

On the technological side, we see an increasing blend between hackers of cyber-criminal gangs and state-sponsored hackers. These gangs are probably hired as mercenary armies by states to inflict harm on their targets or gain intelligence from them. This means that we will see more sophisticated attacks, leveraging nation-state cyber capabilities but delivered by crooks. Furthermore, these attacks will likely be more covert: currently, these gangs immediately announce themselves, e.g. to retrieve bitcoins as ransom, while their new bosses demand that they silently disrupt enemy infrastructure or steal secrets.

All cyber experts agree on one thing: detection and response are more important than (solely) preventing cyber incidents. In this light, Dutch corporates are currently not in good shape ([WRR19]). We encourage them to:

  • understand themselves (e.g. through crown jewel assessments) and their threat landscape (e.g. through threat assessments);
  • validate their preventive capabilities (e.g. through penetration testing);
  • validate, especially, their detective capabilities (e.g. through red teaming assessments);
  • prepare for a breach (e.g. through (technical) cyber incident simulations).


We have discussed the fourth industrial revolution, its connection with digitalization and how it will change the world we know. Cyber security will play a dominant role in this revolution, with a wide range of interested parties and varying motivations.

We know that China is moving forward to become the world leader of the fourth industrial revolution through its vast investments in the digital and physical Silk Road. America, however, is not ready to give up this hegemonic position soon, and Europe should therefore avoid being squashed between America First and the Chinese Dream.

Europe has been a soft power in a hard power world, pushing its opinion in the form of legislation. It has been very successful at this: the GDPR, for example, is now the de facto global standard. The recently published EU Data strategy and policy options for AI are aimed at exactly this.

To put it in the words of the President of the European Commission, Ursula von der Leyen: “Today we are presenting our ambition to shape Europe’s digital future. It covers everything from cybersecurity to critical infrastructures, digital education to skills, democracy to media. I want that digital Europe reflects the best of Europe – open, fair, diverse, democratic, and confident.” Let’s all work together to reach this goal.

This article is based on an article published by KPMG the Netherlands and the Clingendael Institute as a preparation for the Dutch Transformation Forum. The complete article can be found here:


[ANV18] Dutch National Network of Safety and Security Analysts (ANV) (2018). Hybrid Conflict: The Roles of Russia, North Korea and China.

[Bron13] Bronk, C. & Tikk-Ringas, E. (2013). Hack or attack? Shamoon and the Evolution of Cyber Conflict.

[EC19] European Commission and HR/VP contribution to the European Council (2019). EU-China – A strategic outlook.

[EP14] European Parliamentary Research Service (2014). The ECHELON Affair: The European Parliament and the Global Interception System Study.

[Gibb15] Gibbs, S. (2015, June 11). Duqu 2.0: computer virus ‘linked to Israel’ found at Iran nuclear talks venue. The Guardian. Retrieved from:

[Grap19] Grapperhaus, F.B.J. (2019, 18 April). Kamerbrief over maatregelen tegen statelijke dreigingen.

[Harv19] Harvey Nash, KPMG (2019). CIO survey 2019.

[Holm19] Holmes, O. (2019, September 12). Israel accused of planting spying devices near White House. The Guardian. Retrieved from:

[Hugh19] Hughes, C. (2019, July 20). Iran tanker crisis: MI6 probe link to Putin after British ship is seized. The Mirror. Retrieved from:

[Lee18] Lee-Makiyama, H. (2018). Stealing thunder: Cloud, IoT and 5G will change the strategic paradigm for protecting European commercial interests. Will cyber espionage be allowed to hold Europe back in the global race for industrial competitiveness? (No. 2/18). ECIPE Occasional Paper.

[Modd19] Modderkolk, H. (2019). Het is oorlog, maar niemand die het ziet. Amsterdam: Podium.

[Ramo09] Ramo, J.C. (2009). The age of the unthinkable: Why the new world disorder constantly surprises us and what we can do about it. London: Hachette.

[SCoC17] State Council of China (2017). New Generation Artificial Intelligence Development Plan.

[Shei19] Sheikh, H. (2019). Cycles and dynamics of change.

[UNSC19] United Nations Security Council (2019). Report of the Panel of Experts established pursuant to resolution 1874 (2009). Retrieved from: S/2019/691

[VNO18] VNO-NCW (2018). AI voor Nederland: vergroten, versnellen en verbinden.

[Wijk19] Wijk, R. de (2019). De nieuwe wereldorde: Hoe China sluipenderwijs de macht overneemt. Amsterdam: Balans.

[WRR19] Wetenschappelijk Raad voor het Regeringsbeleid (2019). Voorbereiden op digitale ontwrichting.

[Zett14] Zetter, K. (2014). Countdown to Zero Day: Stuxnet and the launch of the world’s first digital weapon. New York: Broadway Books.

Security principles for DevOps and cloud

Many organizations across different sectors are increasing their digitization efforts, ultimately to deliver their products and solutions faster to the market. The technology, quality and, especially, the security functions struggle to keep up with the pace. While traditional organizations have difficulties moving their heterogeneous IT landscape to the cloud, less traditional, technology-oriented companies struggle to embed security as a process in their development lifecycle, as they perceive it to impair their valuable time to market. Rather than implementing security as a stage gate at the end of the development and operations lifecycles, it should be a continuous process throughout the value delivery stream. This should ultimately help position the security function as a business enabler.


Agile Methodology, Continuous Integration, Continuous Delivery, SecDevOps, DevSecOps: all terms that describe a change in mindset towards bringing (software) ideas to customers faster. Lately, organizations have been adopting ways to increase the speed of development even further by composing teams of both development and operations engineers (DevOps teams) ([Brum19]). This rapid deployment of changes to production environments has security practices struggling to keep up with the pace. This, combined with the cloud transformation of the past decade, has led to a rethinking of security principles and risk management ([Stur12], [Beek15]). However, these efforts mostly focus on the risks associated with bringing your IT and data to the cloud, rather than on maintaining a sufficiently secure state of solution delivery in these rapidly changing environments.

This article addresses some of the key challenges that organizations face when applying this shift in mindset towards DevOps, and how to deal with those in a cloud-enabled world. Rather than focusing on how to directly control the technical risks, we will address how to apply security principles in the development process to deliver value faster while meeting security, privacy, and compliance needs.

In order to arrive at these security principles, we will first address the DevOps principles and how, in combination with cloud technology, these can be used to achieve an unparalleled time to market. We will then address how a secure state can be achieved and maintained with a combination of DevOps principles and elements of the application development lifecycle.

Organizing DevOps

DevOps is built on three basic principles, which we briefly outline below ([Hütt12], [Kim14], [Kim16]):

  1. The principle of flow. This principle emphasizes the performance of the entire system, instead of the performance of a specific subprocess, department or individual contributor (e.g. a developer or system administrator). It focuses on the whole value stream that IT delivers: from requirement identification, through development, testing and transition to operations, to delivery to end-users and customers.
  2. The principle of feedback. This principle is about getting feedback as early as possible in the software development lifecycle. Developers and practitioners refer to this as ‘shifting left’: the earlier you detect an error or issue, the cheaper it is to fix. The goal is to shorten and amplify feedback loops so necessary corrections can be made continually.
  3. The principle of learning. This principle focuses on creating a culture for continual experimentation, taking risks and learning from failure. ‘Fail fast’ and ‘fail often’ are key terms to this principle, yet very hard to get right in practice. This also includes making the team responsible for their successes and failures and providing enough means to grow.

Many organizations have adopted this way of working. However, the principles themselves typically do not provide practical recommendations on how to organize secure development processes. Research has been conducted on applying these principles in practice, for example through implementing ‘Continuous Integration’ ([Fowl06], [Duva07]) and later ‘Continuous Delivery’ ([Humb10]). Also, organizations have embraced agile development processes, such as ‘SCRUM’ ([Schw02]), at different maturity levels ([Lepp13]). Although these principles provide some guidelines, we still see many organizations struggle to embed security in the development process and become so-called ‘Secure by Design’.

Cloud security opportunities

Utilizing cloud technologies and shifting towards a DevOps organization go hand in hand. New cloud developments like serverless computing and Infrastructure as Code have impacted organizations’ security landscapes in the same way the DevOps way of working has: by blurring the lines between the development and operations of solutions. As a result, organizations now have a myriad of opportunities to use new (security) capabilities and technologies. The following (non-exhaustive) list provides an overview of principles that can help improve cyber security (see Table 1).


Table 1. Cloud principles that can help improve security in a DevOps environment.

Serverless computing

With the introduction of the cloud, the respective cloud providers are responsible for (the security of) the services that they offer, which reduces the total ‘security surface area’ that the organization’s security experts need to manage themselves. The usage of ‘Infrastructure as a Service’ (IaaS) and ‘Platform as a Service’ (PaaS) patterns allows organizations to focus on their key strengths, rather than on managing the complexities of hardware and software. Serverless computing takes this transfer of responsibility for part of the security surface furthest, such that users can focus on writing and deploying code. This helps reduce the risks associated with managing infrastructure components, such as data centers, virtual machines, databases and the configuration of (network) components. For example, the architecture of KPMG’s Digital Risk Platform is constructed entirely with serverless components, drastically reducing the overhead of patching and configuration management.

Infrastructure as Code

This is the process of managing and provisioning software and hardware configuration through machine-readable definitions, rather than through error-prone manual and interactive provisioning with configuration tools ([Arta17]). Infrastructure as Code can be used for platform as well as infrastructure components, and all major cloud providers support this process. Another advantage is that (changes to) the definitions can be treated as code. This allows changes to the infrastructure to be managed in the same way, and with the same tools, as changes to regular (application) code, using known software best practices for designing, implementing and deploying application infrastructure. Deployments are therefore less error-prone, the environment is more homogeneous, and security configurations can be managed as part of the regular secure development lifecycle.
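As a minimal illustration of treating infrastructure as code, the sketch below declares resources as plain data and runs a policy check over the definition, much like a unit test runs over application code. The resource schema and the policy rules are invented for this example; real deployments would use a cloud provider’s own IaC tooling.

```python
import json

# Hypothetical machine-readable infrastructure definition: resources are
# declared as data, so changes can be diffed, peer-reviewed and tested
# like any other code change.
infrastructure = {
    "resources": [
        {"type": "storage_account", "name": "appdata",
         "encryption_at_rest": True, "public_access": False},
        {"type": "sql_database", "name": "orders",
         "encryption_at_rest": True, "firewall_default": "deny"},
    ]
}

def violations(definition):
    """Return security policy violations found in an infrastructure definition."""
    found = []
    for res in definition["resources"]:
        if not res.get("encryption_at_rest", False):
            found.append(f"{res['name']}: encryption at rest disabled")
        if res.get("public_access", False):
            found.append(f"{res['name']}: public access enabled")
    return found

definition_text = json.dumps(infrastructure, indent=2)  # what would live in version control
print(violations(infrastructure))  # an empty list means the definition passes policy
```

Because the definition is just text under version control, a pull request that flips `public_access` to `True` would be visible in the diff and caught by the same automated check.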

Security centralization

Cloud environments allow security capabilities such as encryption, key management, privileged identity management, auditability and security monitoring to be centralized. Although organizations tend to view this as an increased risk (i.e. “one place to rule them all”, [Moll19]), it provides more visibility, opportunities for automation, and simplicity. Leveraging the economy of scale, every security requirement brought forward by one cloud customer can improve the security of the overall cloud platform, as these providers have a major incentive to keep the cloud secure. Centralization in the cloud also makes it possible to connect platforms like Azure DevOps, GitLab and Atlassian to easily track work, collaborate on code and integrate continuous deployment. This greatly improves transparency and allows development and engineering teams to focus on the DevOps principles.

Cloud and DevOps security challenges

On the other hand, we also see organizations struggle with the overwhelming possibilities that cloud environments and new development methodologies offer in relation to security. Table 2 lists some examples.


Table 2. Typical security challenges while implementing DevOps.

Security as a stage gate

We see organizations trying to put all risk-mitigating activities at the end of the development process, so that feedback is obtained only at the end of the software development lifecycle. This decreases flow, as it takes longer to follow up on identified bugs and issues than if they had been detected earlier in the development process. In traditional companies with many legacy systems, this results in delays in IT projects, as the security function of the organization is overwhelmed with activities to complete the security ‘stage gate’ at the end of the project.

Getting overwhelmed by the output and feedback of security solutions

Although cloud environments provide new tools and methodologies, we see organizations struggle to use them adequately. All three major cloud providers (Amazon, Google, Microsoft) offer many security solutions, ranging from DDoS protection to threat and vulnerability monitoring. However, more monitoring capabilities do not necessarily improve security. As these solutions typically report many non-compliances, potential risks and incidents, activities such as triage, false-positive reduction and follow-up require time and effort. Determining what is actually important requires a sound threat model. With many potential risks reported by these solutions, companies fail to determine the actual business risk of potential security issues and vulnerabilities, and thereby typically focus on the wrong corrective actions and behaviors. Some examples are:

  • Failure of the solutions to understand business context, reporting vulnerabilities in development environments that are segmented from the production environment as ‘critical risks’;
  • Failure to understand usage patterns and behaviors, such as reporting the shared use of test accounts as impersonation attacks;
  • Failure to understand the application or environment context, such as reporting licenses that are used in test tools (which are not distributed to end-users) as ‘policy violations’.

We have seen clients struggle with this output volume, particularly when they run many cloud security solutions frequently. Typically, these solutions report thousands of potential (high-risk) security issues, while only a few of them really affect business continuity.
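The kind of context-aware triage described above can be sketched as follows. This is an illustrative example only: the finding fields and the downgrading rules are assumptions, and a real implementation would derive them from the organization’s own threat model.

```python
# Hypothetical triage step: enrich raw tool findings with business context
# before deciding what deserves follow-up.
findings = [
    {"id": "F1", "severity": "critical", "environment": "dev",  "internet_facing": False},
    {"id": "F2", "severity": "high",     "environment": "prod", "internet_facing": True},
    {"id": "F3", "severity": "high",     "environment": "test", "internet_facing": False},
]

def business_risk(finding):
    """Downgrade findings in segmented non-production environments."""
    if finding["environment"] != "prod":
        return "low"
    return "high" if finding["internet_facing"] else "medium"

# Only findings with real business risk survive triage and are followed up.
actionable = [f["id"] for f in findings if business_risk(f) in ("high", "medium")]
print(actionable)
```

Note how the ‘critical’ finding in the segmented development environment is filtered out, while the production finding remains actionable.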

Failure to keep up with the speed of business

We see IT functions struggle with the opportunities cloud environments provide. Due to IT’s lack of business understanding, business and IT goals are diverging: where IT functions try to keep the application portfolio to a minimum, with increased control, business users often procure and use their own IT. They praise the flexibility, the frequency of functionality updates and the possibilities of (cloud) Internet services; a credit card is usually enough to buy an application or server. The result is so-called ‘shadow IT’ ([Kuli16]): applications used by business users that are not, or only barely, under the control of the IT function. Examples are marketing campaigns run on servers not controlled by IT, sending sensitive data to cloud storage providers, and connecting business identities to third-party applications.

Using ‘time to market’ as an excuse

We have seen organizations that are quite capable of implementing the principle of flow, but lack the appropriate checks and balances for security. Typically, they perceive time-consuming and compliance-driven security controls as an impediment to their time to market and execution speed. Frequently delivering new versions to end users is valuable; however, it must be done in accordance with a clear risk appetite and a sound associated threat model. We acknowledge that trying to cover all potential security risks is time-consuming, and often also undesirable, as it always comes with a tradeoff (e.g. with usability). For example, patching a specific API endpoint that allows SQL injection can take considerable developer resources. If this endpoint is only reachable by administrators, and only after a two-factor authenticated login, the attack surface and the probability of successful exploitation by an unauthenticated user are already greatly reduced. If this is in line with the company’s risk appetite, the team can decide not to patch yet and to continue the deployment. Three important elements play a role here:

  • Major stakeholders, such as the board of management, should have set a sound risk appetite.
  • DevOps teams should be aware of this risk appetite and how they can apply its boundaries in practice.
  • DevOps teams and stakeholders should be aware of the (potential) risks that are present and the risks they are willing to take. Organizing risk management is beyond the scope of this article; we refer to other available resources (such as [Baut18]).
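The tradeoff in the SQL injection example can be sketched as a simple scoring helper. This is a minimal illustration, not an actual risk methodology: the scores, the halving rule for compensating controls, and the appetite threshold are all assumptions made for the sake of the example.

```python
# Hypothetical helper mirroring the SQL injection example: compensating
# controls reduce the risk score, and the deployment proceeds only when
# the residual risk stays within the agreed appetite.
RISK_APPETITE = 6  # maximum acceptable risk score, set by stakeholders

def residual_risk(impact, likelihood, compensating_controls):
    """Score = impact x likelihood, halved for each compensating control."""
    score = impact * likelihood
    for _ in compensating_controls:
        score /= 2
    return score

# SQL injection on an endpoint reachable only by administrators,
# and only after a two-factor authenticated login.
risk = residual_risk(impact=5, likelihood=4,
                     compensating_controls=["admin_only", "two_factor_auth"])
print(risk, risk <= RISK_APPETITE)  # 5.0 True -> deployment may continue
```

The point is not the arithmetic but the process: the appetite is set once by stakeholders, and DevOps teams can then apply it repeatedly and consistently during development.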

Organizing security capabilities in the development lifecycle

Organizations can apply security principles in the development and operations processes to deliver value faster, while also leveraging the benefits of cloud transformation. As cloud infrastructure is commonly under continuous development, a shift from manual processes to automated controls is required to maintain a consistent security posture while still delivering value frequently.

Furthermore, the infrastructure itself also changes: application development produces not only an application, but also the underlying infrastructure and its configuration, e.g. virtual machines, firewalls and databases. Developing new infrastructure introduces the requirement to enroll it in other security capabilities like monitoring, networking (VNet, WAN, VPN, DNS) and delivery (CDN, load balancers, application gateways).

Based on the phases in the DevOps process, we will discuss how to embed security principles and capabilities. Figure 1 represents the common steps in a secure development lifecycle.


Figure 1. The DevOps lifecycle ([Otey18]).


Plan

As many security issues originate from human failure, it is important to enable DevOps engineers with the right knowledge and tools to make risk decisions as early in the software development lifecycle as possible, i.e. during the ‘plan’ phase. An important aspect is to make the team itself responsible for security, not just the IT security function or team in the organization. This requires that developers be allowed to take security training. Also, the (traditional) security function should provide modern product management and engineering disciplines (such as Product Owners) with advice and recommendations during the planning of functionality.

In addition to the functional requirements set for an application iteration, it is also necessary to define non-functional requirements, such as:

  • performance and availability requirements;
  • code quality and license requirements;
  • data confidentiality and integrity requirements; and
  • personal identifiable information requirements.

All of these require the stakeholders and business owners to agree on the risk appetite of the software, based on threat modeling, the usage of the software, reputation, and the type of data being stored. This is all about reviewing risk scenarios. For example, fixing a SQL injection vulnerability in a part of the application only accessible to functional administrators could have a lower priority than replacing code that introduces a license infringement, as legal and reputational damages can directly impact the business.


Code

A very important step during the ‘Plan’ and ‘Code’ phases is the validation of requirements by the developers. In agile development methodologies this is called ‘refinement’: meetings in which developers demonstrate their understanding of the requirements (e.g. features, product backlog items) to be implemented to the product owner and/or business analysts. This is an ideal place to discuss the potential security and privacy impact of the (non-)functional requirements, as new functionality always introduces additional attack surface. Discussing this fosters ownership of issues and makes all parties involved work towards solutions that are acceptable to all. It should also help developers gain a better understanding of the context and ‘shift left’ security activities: during the implementation of code, they need to be informed about potential issues (such as security bugs) as early as possible.

Build and Test

In order to uphold the (non-)functional requirements set out in the previous stages of application development, tests must run, and fail, as early as possible in the development process. To work towards this goal, various types of tests can be executed in an automated fashion, both during and after the ‘build’ phase.

  • Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) ([Gold19]) identify issues in proprietary software, such as bugs, potential security flaws, insufficient code coverage, poor code quality and technical debt.
  • Software Composition Analysis (SCA) covers the identification of open source components with known vulnerabilities and potential license infringements in imported libraries.
  • Implementing security monitoring in the critical flows of the application, as identified in the requirements and design phases, improves incident response and forensic capabilities as well as the auditability of the application.
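The automated checks above can be wired into the build as a gate. The sketch below is a hypothetical example of such a gate for SCA output; the report structure, the field names and the allowed-license set are all invented for illustration, not the output format of any real scanner.

```python
# Hypothetical build gate: fail the pipeline when Software Composition
# Analysis (SCA) reports a vulnerable component or a disallowed license.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

sca_report = [  # illustrative output of an SCA scan
    {"component": "libfoo", "license": "MIT",     "known_cves": []},
    {"component": "libbar", "license": "GPL-3.0", "known_cves": ["CVE-2023-0001"]},
]

def gate(report):
    """Return the list of blocking issues; an empty list lets the build pass."""
    issues = []
    for dep in report:
        if dep["known_cves"]:
            issues.append(f"{dep['component']}: vulnerable ({', '.join(dep['known_cves'])})")
        if dep["license"] not in ALLOWED_LICENSES:
            issues.append(f"{dep['component']}: license {dep['license']} not allowed")
    return issues

issues = gate(sca_report)
for issue in issues:
    print("BLOCKED:", issue)
exit_code = 1 if issues else 0  # a non-zero exit code would fail the build step
```

Running such a gate on every build, rather than at a stage gate before release, is exactly the ‘shift left’ of feedback described earlier: the developer who introduced the dependency sees the problem minutes later, not months later.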

Also, developers should be encouraged to participate in offensive security activities against their own environment and developed code. The understanding of circumstances that introduce security vulnerabilities can help developers anticipate and solve security issues beforehand. Usage of security tools, such as OWASP ZAP, Burp, Nikto and Metasploit by developers is encouraged in order to facilitate the automation of security testing in the development process. But remember: “a fool with a tool is still a fool”.

In the ‘test’ phase, the development team validates whether the software build matches the requirements. A so-called ‘pull request’ is one of the most important security measures in this phase, as it brings together all elements needed to perform a risk assessment: product backlog items or user stories with the requirements, trackable/auditable work through work items, code and commit messages, and the results of the tests and Static Application Security Testing (SAST). We have depicted an example flow of the pull request in Figure 2.

A crucial element of the pull request is a peer review by another developer. To execute this properly, however, the IT staff involved in the pull request should have sufficient security knowledge to make the decision. Only if all criteria are met will the newly developed code be propagated to the main branch (i.e. ‘master’ in Figure 2).


Figure 2. Example approval process for deploying code to production, used in KPMG’s Digital Risk Platform.


Release

The final step before pushing changes to production environments is the release phase. This includes the final security review, in which the business assesses the risks associated with the deployment of the changes, given the defined (non-)functional requirements and test results.

It is important that the business stakeholders can make an informed security decision. The attack surface should match the threat model and risk appetite set forth during the ‘requirements’ phase. Any residual risks of operation should be addressed through the definition of an incident response plan.

After approval, the release and release approval are archived to help create an auditable trail from the definition of requirements to production release.


Deploy

When a release is approved by the relevant stakeholders, the changes are pushed to the production environment: the ‘deploy’ phase. A great way to minimize the number of errors is to maximize automation. Modern cloud environments support the configuration of automated release pipelines that build upon the principle of Infrastructure as Code. This ensures that the infrastructure required to run the application is launched and configured through predefined scripts, which can themselves be treated like any other application change. An important security aspect during the ‘deploy’ phase is to ensure that environments (such as development, test and production) are separated, through the usage of ‘key vaults’. These vaults store the secrets (such as passwords and keys) of the application and underlying infrastructure and can be automatically populated and used in a deployment pipeline.
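Environment separation of secrets can be illustrated with a small sketch. The in-memory ‘vaults’ below are stand-ins for a real key vault service, and the `DEPLOY_ENV` variable is a hypothetical pipeline setting; the point is that a deployment only ever resolves secrets for its own target environment.

```python
import os

# Stand-in for per-environment key vaults. In reality, secrets never live
# in code: the pipeline fetches them from the vault bound to its target.
VAULTS = {
    "dev":  {"db_password": "dev-secret"},
    "test": {"db_password": "test-secret"},
    "prod": {"db_password": "prod-secret"},
}

def resolve_secret(environment, name):
    """Fetch a named secret from the vault bound to the target environment."""
    vault = VAULTS.get(environment)
    if vault is None or name not in vault:
        raise KeyError(f"no secret '{name}' for environment '{environment}'")
    return vault[name]

# DEPLOY_ENV would be set by the release pipeline for each stage.
target = os.environ.get("DEPLOY_ENV", "test")
print(resolve_secret(target, "db_password"))
```

Because a deployment to ‘test’ can only ever see the test vault, a misconfigured test run cannot accidentally touch production credentials, which is precisely the separation the key vaults are meant to enforce.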


Operate

The operate phase involves maintaining and troubleshooting applications in production environments. DevOps teams maintain roles such as ‘designated responsible individuals’ and ‘site reliability engineers’ to ensure system reliability, high availability and performance while reinforcing security. They try to identify issues before these affect the end-user experience, and respond quickly when issues do occur.


Monitor

Given the defined (non-)functional requirements, operational and monitoring use cases can be defined. Implementing monitoring fosters a production-first DevOps mindset and limits the impact on end users through proactive action. Impact can be evaluated through observation, testing, analysis of telemetry, and user feedback. This then feeds the ‘plan’ phase of the next iteration of product development in the DevOps process.
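A monitoring use case of this kind can be sketched as a rolling failure-rate alert. The window size and threshold below are arbitrary illustrative values; in practice they would follow from the availability requirements defined during planning.

```python
from collections import deque

# Illustrative monitoring use case: track a rolling window of request
# outcomes and raise a proactive alert before users are broadly affected.
WINDOW = 10          # number of recent requests considered
THRESHOLD = 0.3      # alert when more than 30% of the window failed

window = deque(maxlen=WINDOW)

def record(success):
    """Record one request outcome; return True when the alert should fire."""
    window.append(success)
    failure_rate = window.count(False) / len(window)
    return len(window) == WINDOW and failure_rate > THRESHOLD

# Six successes followed by four failures: only the last record trips the alert.
alerts = [record(ok) for ok in [True] * 6 + [False] * 4]
print(alerts[-1])
```

An alert like this would feed the incident response plan agreed during the release review, and the observed failure pattern would feed the ‘plan’ phase of the next iteration.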


Organizations need to balance security activities and budget with time to market and user friendliness. Cloud transformations can help with embedding security principles and solutions, especially if these are implemented through the DevOps principles. Moving to the cloud is also a good opportunity to embed cyber security in daily processes, so that organizations become more ‘secure by design’. We have discussed common pitfalls in implementing cloud security solutions and have provided security principles and activities that can be embedded in (agile) development processes. The key take-away is to make DevOps engineers feel responsible for the security decisions they take during development, and to provide them with the means and mandate to do so. This should help organizations better balance security and usability, while still meeting the ever-increasing need to deliver value faster.


[Arta17] Artac, M. et al. (2017). DevOps: Introducing Infrastructure-as-Code. 2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C), 497-98.

[Baut18] Bautista, M.C. & B. Krutzen (2018). Digitization of Risk Management. Compact 2018/2. Retrieved from:

[Beek15] Beek, J.J. van (2015). Assurance in the Digital World of the Future. Compact 2015/Special. Retrieved from:

[Brum19] Brummelen, J. van & T. Slenders (2019). Modern Software Development. Compact 2019/2. Retrieved from:

[Duva07] Duvall, P.M. et al. (2007). Continuous Integration: Improving Software Quality and Reducing Risk. London: Pearson Education.

[Fowl06] Fowler, M. et al. (2006). Continuous integration. Thought-Work, 122, 14.

[Gold19] Goldstein, A. (2019, May 23). SAST vs. SCA: It’s Like Comparing Apples to Oranges. Whitesource Software. Retrieved from:

[Howa06] Howard, M. et al. (2006). The Security Development Lifecycle. Redmond: Microsoft Press Redmond.

[Humb10] Humble, J. et al. (2010). Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. London: Pearson Education.

[Hütt12] Hütterman, M. (2012). DevOps for Developers. New York: Apress.

[Kim14] Kim, G. (2014). The Three Ways: The Principles Underpinning DevOps. Portland: IT Revolution Press.

[Kim16] Kim, G. et al. (2016). The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. Portland: IT Revolution.

[Kuli16] Kulikova, O. (2016). Cloud Access Security Monitoring: To Broker or Not To Broker? Compact, 2016/3. Retrieved from:

[Lepp13] Leppanen, M. (2013). A Comparative Analysis of Agile Maturity Models. Information Systems Development, 329-343.

[Moll19] Mollema, D. (2019). Syncing yourself to Global Administrator in Azure Active Directory. Fox-IT. Retrieved from:

[Otey18] Oteyowo, T. (2018, April 12). DevOps in a Scaling Environment. Medium. Retrieved from:

[Schw02] Schwaber, K. et al. (2002). Agile Software Development with Scrum. Upper Saddle River: Prentice Hall.

[Stur12] Sturrus, E., Steevens, J. & Guensberg, W. (2012). Access to the cloud. Compact 2012/0. Retrieved from:

[Tunc17] Tunc, C. et al. (2017). Cloud Security Automation Framework. 2017 IEEE 2nd International Workshops on Foundations and Applications of Self* Systems (FAS*W), 307-312.

Purple Team: drive defense with offense

Purple team exercises are a fast way to improve your security monitoring function. By combining defense and offense in purple team exercises, you can measurably improve your security monitoring function faster and at lower cost than by keeping both functions segregated.


Building an effective security monitoring function is a challenge: the attack surface is becoming ever more complex to manage, understand, and even be aware of. Purple team exercises, however, can considerably speed up the improvement of a security monitoring function.

Organizations today want to see immediate value as well as longer term strategies and solutions. The type of collaboration described in this article will allow you to see both immediate results and long-term improvements come to fruition, maximizing the value of the internal or external services you pay for.

Whether you have internal teams or utilize external resources for your offense and defense functions, if you are not performing any form of purple team exercise, it is highly likely you are not getting the most value out of the expertise you are paying for. Want to bolster your security monitoring function fast? Then start with purple team exercises.

What are red, blue and purple teams?

First some background on blue and red teams. The terminology originates from military terms for defense (blue) and offense (red). A blue team in the context of cyber security is responsible for defending against cyberattacks and improving security posture within an organization. This includes implementing preventative and detection controls and responding to security incidents and alerts. This function can be internal, outsourced to a third party, or a hybrid of both.

A red team performs activities to emulate an attacker’s behavior during red team exercises. The purpose of a red team exercise is to simulate a realistic attack based on the known techniques of threat actors. The timing and goal of the exercise are not shared with the blue team, to make the simulation more realistic. These exercises are usually performed annually, can take several months to complete, and can be performed by an internal or external team to the organization (some organizations also have a regulatory requirement to complete an external red team exercise).

It is only after the red teaming exercise has finished and the cyberattack simulations have been completed that the red and blue teams start interacting with each other. Both teams then typically discuss how they experienced the attack simulation: the blue team indicates which indicators of compromise it found, and the red team provides a detailed storyline of all actions performed. Often, the red and blue teams have different reporting structures, because the teams are segregated internally or one function is external. This way of working can foster a competitiveness that distracts from the end goal both teams share – strengthening the security posture of the organization.

To remove the feeling of competitiveness and focus optimally on strengthening the security posture of the organization, you could consider performing a purple teaming exercise instead. A purple team brings the traditionally segregated red and blue teams together for an exercise where they work together.


Figure 1. Combining the strengths of red and blue teams to form a purple team.

The format can vary from sitting in a room together and going through attack behaviors, to a red team conducting an exercise with the awareness of the blue team and a red team representative helping and giving tips to the blue team. Exercises are ideally conducted regularly (once a quarter or every six months) to ensure continuous improvement and to ensure changes to the environment are considered. Collaborating enhances shared learning and maximizes the effectiveness of both sides. For organizations that are not required to carry out a red team exercise or that have a low security monitoring maturity level, purple team exercises can be a standalone option to gain immediate value and long-term action plans.

Another common method to improve the security posture is penetration testing, something most organizations are already familiar with. It is an essential part of vulnerability management and thus essential to the security of an organization. However, penetration testing is completely different from red vs. blue teaming and purple teaming. Penetration testing focuses on finding as many vulnerabilities as efficiently as possible. Typically, this includes the use of very noisy tools, such as mass vulnerability scanners. Remaining undetected is not taken into account during penetration testing, and the focus is purely on preventive measures. Unfortunately, penetration testing and red, blue and purple teaming are regularly confused with each other, while each serves a very different purpose. Because red vs. blue teaming and purple teaming do not focus on finding as many vulnerabilities as possible, penetration testing remains an essential part of the security program, even when organizations have incorporated mature red vs. blue teaming and purple teaming exercises. The characteristics of each of these methods are shown in Table 1.


Table 1. An overview of the characteristics of red teaming, purple teaming, and penetration testing.


Many organizations lack the internal resources to holistically implement defensive and preventative controls and thus to respond adequately and robustly to the results of red team exercises. The result is often that when a red team exercise is repeated, many of the same or similar findings persist. Some of the challenges we have repeatedly seen at organizations across industries and sizes are described below, along with the benefits of working together.

Resource and skill constraints

The blue team is traditionally a much-stretched resource and firefighting is commonplace. Ensuring you respond to incidents and alerts in an appropriate timeframe, as well as implementing new use cases, writing and updating playbooks, and implementing or championing new preventative controls, among many other tasks, can be overwhelming. Even in organizations where the budget exists to have separate teams for some of these functions, the skillsets for this type of work are scarce worldwide, with an estimated 3.5 million unfilled cybersecurity jobs by 2021 according to Cyber Security Ventures ([Morg19]). Onboarding the relevant log sources and adhering to organizational change procedures for implementation can also severely limit the blue team’s ability to respond adequately, and often results in quick fixes and priorities focused on what is possible, rather than what should be prioritized.

In a purple team scenario, efforts are focused with precision on the task at hand. The red team is also able to help the blue team prioritize the selection of use cases and even the vulnerabilities or preventative controls to focus on first, based on their experience.

Limited understanding of red team findings

A static report from a red team exercise needs to be interpreted, and this can result in misunderstandings or assumptions. The blue team may consequently implement inadequate controls to resolve the identified issue.

When the teams are sitting side by side, the process is more interactive. Everything is shared with the blue team at the level required to identify exactly what needs to be accomplished to adequately respond to the red team’s findings.

Unable to test controls implemented

It can be difficult to recreate attack patterns to test implemented controls, because the blue team may not have the required skills to do so. The blue team may also not have permission to do this on live systems, and attack simulators may not be adequate to test everything: they only cover specific behaviors that may not exactly match what was tested in the red team exercise, and they may need to be adapted to the environment. Without the ability to simulate the behavior and recreate the logs, the blue team cannot properly test strengthened defenses and ensure they are adequate. Scarce resources can be an additional hindrance to ensuring that new controls are effectively tested both initially and on an ongoing basis, which is also critical.

With the red team working together with the blue team in a purple team exercise, this issue is resolved. As soon as an additional control is implemented, the exact same behavior (patterns) can be emulated again, as many times as needed, until the control adequately prevents or detects it. With regular purple team exercises you can retest your current controls to ensure they still function as expected. Combining the two functions in these purple team exercises also assists with the resource issue: while the blue team will learn more about attack behaviors, it does not need to learn the skills and take the time to recreate them itself.

Tunnel vision

The risk exists that quick fixes are implemented without applying a holistic approach. For example, a use case can be built to detect a particular behavior, but was a preventive control also considered? Creating use cases based on single indicators of compromise (IoCs) is common and often not effective in the longer term. This can be due to a lack of time to properly consider a solution, a lack of skillset, or a lack of understanding of how an attacker works. It could also be that, due to the organizational structure, the SOC (Security Operations Center) does not have the mandate necessary to change processes or make an organizational change, and so instead does what it can with what is available to it. Quick-and-dirty solutions are therefore sometimes implemented, which will not sufficiently cover the behavior of the red team or attacker if it is slightly modified.

The red team can articulate and demonstrate why implementing a tunnel vision control is less valuable. Working together promotes a better understanding of how attacks work, and how they can best be prevented or detected.

Visibility of red team footprints

When the red team report is issued, the first thing the blue team wants to do is to look at each scenario, determine where preventative and detective measures were insufficient, and identify which measures could be put in place based on the feedback. Most often you need to look at the footprints generated by the red team’s activity in your environment to formulate a plan on how to prevent or detect the activity. However, these may no longer be available once the report actually reaches the blue team. Red team exercises are not interactive while being conducted, and a report is issued only after the exercise. Event logs will usually have already rolled off, and even for logs forwarded to your SIEM (Security Information and Event Management system), retention periods are often short, so the data may no longer be available.

Working side by side at the same time resolves this issue entirely.

Success is incorrectly measured

When the two functions work independently, it can foster a culture of competitiveness. This does not encourage a mentality of working together and instead feeds a rivalry between the two teams. Both teams can potentially be reluctant to share their techniques with each other, to ensure they “win” next time. What does this mean? Potentially the expertise paid for is not being leveraged in the most efficient way.

The end goal and incentive for both teams must be how much they have strengthened the security posture of the organization. Make this measurable. For example, use MITRE’s ATT&CK framework ([MITR19]) and create a heat map showing how well you detect the techniques used by threat actors targeting your industry. Once the purple team exercise is complete, update the heat map to demonstrate improvements. In addition, leverage a use case framework such as MaGMa ([Os17]), which calculates a percentage of detection coverage that can then be used to measure improvements of detection mechanisms.
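The coverage measurement described above can be sketched in a few lines of code. The snippet below is a minimal illustration, assuming a simplified model in which each ATT&CK technique is marked as fully, partially, or not detectable; the technique selection, the statuses, and the 50% weighting for partial detection are hypothetical choices for illustration, not part of the MaGMa framework itself.

```python
# Detection status per ATT&CK technique: "full", "partial", or "none".
# Technique IDs and statuses below are hypothetical examples.
before = {
    "T1046 Network Service Discovery": "full",
    "T1055 Process Injection": "none",
    "T1059 Command and Scripting Interpreter": "none",
    "T1110 Brute Force": "partial",
}

# Status after the purple team exercise: two techniques improved.
after = {
    **before,
    "T1055 Process Injection": "full",
    "T1059 Command and Scripting Interpreter": "partial",
}

# Weighting partial detections at 50% is an arbitrary, illustrative choice.
WEIGHTS = {"full": 1.0, "partial": 0.5, "none": 0.0}

def coverage(status_map):
    """Return detection coverage as a percentage of the technique list."""
    return 100.0 * sum(WEIGHTS[s] for s in status_map.values()) / len(status_map)

print(f"Coverage before exercise: {coverage(before):.1f}%")  # 37.5%
print(f"Coverage after exercise:  {coverage(after):.1f}%")   # 75.0%
for technique in sorted(before):
    if after[technique] != before[technique]:
        print(f"  improved: {technique}: {before[technique]} -> {after[technique]}")
```

The same before/after comparison can back a heat map, with “full”, “partial” and “none” mapping to green, orange and red respectively.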

How to get started – utilizing purple teaming on your transformation journey

Many organizations are adopting more “Agile” ways of working, and as such have a desire to see immediate results and benefits. This method focuses on immediate and clear value as well as actionable implementation plans at each stage of the transformative process.

Who should be part of a purple team?

To get the most value out of these – usually – time-boxed exercises, you should ensure that representatives from each aspect of the blue team are involved, including incident response, security tooling engineering, network engineering and vulnerability management. You will also need representatives from the IT operations teams, e.g. Windows/Active Directory teams, or at the very least they should be aware of the exercise and be able to make changes as needed. Furthermore, make sure that other stakeholders such as decision makers, change management and risk departments are aware of and support the purple teaming exercise. This can help to get things implemented during the actual exercise and allow changes to be expedited if required. Generally, the exercise leans more towards the blue team, given the workload the red team generates for them.


Working together

A purple team exercise is a joint mission between the red and blue team to improve the security monitoring function of the company through direct collaboration. The format of a purple team exercise can vary. One very effective format is both the red and blue team sitting together in the same room and going through attack behaviors (these can be based on many scenarios, for example threat intelligence, a previous red team exercise, or replaying an actual attack the organization experienced in the past). Once the red team completes an action, the blue team checks whether it detected or prevented it. If not, together they work out why, and either fix the issue on the spot and retest, or work out an actionable plan to implement the required controls.

What to focus on

The most effective areas to focus on are post-exploitation activities. Assume breach and identify the attacker’s actions in your environment. It generally takes too long to focus on initial access in a purple team format, which includes crafting phishing emails or doing extensive reconnaissance on the external network to identify exploitable vulnerabilities.

Some level of research should be completed up front by the purple team, so that the exercise is based on threat intelligence specific to your industry.

The red team will work through the selected attack scenario, stopping at each stage of the simulated attack to ask the blue team: “Did you see that?” If the answer is no, they work together to identify why. Are the logs onboarded, is there a detection control in place, and so on? The same part of an attack can be replayed to test new controls, and also performed in different ways to ensure the detection or prevention mechanism is wide enough to catch slightly modified behavior while maintaining an acceptable false-positive to false-negative ratio.

The focus should be on continuous improvement rather than a one-time exercise. Some examples of themes for purple team exercises are as follows:

  • Emulation of a real attack experienced by the organization
  • Walk through of past red team exercise
  • A tiered exercise, getting more complex with each iteration
    • Tier 1 – Noisy, e.g. common tooling, brute force, scanning
    • Tier 2 – More evasive tactics, e.g. in-memory execution, privilege escalation
    • Tier 3 – Stealthy, e.g. red team compiling its own tools
  • Cloud environment (AWS, Azure, etc.)
  • Test of existing security monitoring detection and preventive controls to determine effectiveness

Measuring success

At the end of the exercise you should be able to measure how your security posture has improved, and also have a plan with actionable items to resolve any issues that prevented implementation during the exercise. As already discussed, a well-maintained use case framework that maps back to MITRE’s ATT&CK framework ([MITR19]), or a heat map of MITRE’s ATT&CK framework, is a good start to measure before and after, to demonstrate improvement to management. An example of what such a heat map can look like is shown in Table 2.

It is recommended to repeat the exercise, ideally on a quarterly basis, to maintain momentum, ensure continued improvement, and keep up with the constant changes in your environment and in threat actor behavior.


Table 2. Example to measure security posture based on MITRE ATT&CK framework (green actions are fully detectable by the blue team, orange actions are detectable under certain circumstances, and red actions are not detectable).

Our security monitoring maturity level is low, can we benefit from purple teaming?

If you are still building up your security monitoring function and closing detection and prevention gaps, purple team exercises can be very helpful. For example, you can start by having the red team show you the most realistic path to your crown jewels and focus on remediating those gaps. Or you can begin by emulating behaviors that create more noise and are easy to detect. The quickest way to improve your maturity level is working closely together with the red team.

Is there still value in a red team exercise when conducting regular purple team exercises?

The answer is yes. The goal of a red team exercise is to give a true indication of how your defenses hold up against an attack with no prior warning. A purple team exercise is not intended as a replacement for traditional red team exercises; it complements them, either in combination (before or after a red team exercise) or independently if you do not conduct red team exercises.


Purple teaming is a very powerful method to improve the security posture of an organization. It promotes collaboration between red and blue teams and increases the learning experience of both teams and of the organization being tested. It is a natural next step when an organization has incorporated vulnerability management processes and wants to simultaneously measure and improve the capability to detect cyber incidents and attacks. Purple teaming is a very worthwhile addition to identifying vulnerabilities (by penetration testing) and to measuring responsive capabilities (by red teaming). Whatever your budget and maturity level may be, you can benefit from leveraging purple team exercises to complement the existing red and blue aspects of your security monitoring function. Both offense and defense have the same end goal – to strengthen the security posture of the organization. So … why aren’t blue and red working together more?


[MITR19] MITRE (2019). Enterprise Matrix. Retrieved from:

[Morg19] Morgan, S. (2019). Cybersecurity Talent Crunch To Create 3.5 Million Unfilled Jobs Globally By 2021. Cyber Security Ventures. Retrieved from:

[Os17] Os, R. van, et al. (2017). MaGMa: Use Case Framework. FI-ISAC. Retrieved from:

Emerging from the shadows

Shadow IT might sound threatening to some people, as if it originates from a thrilling detective novel. In an organizational context, the term simply means IT applications and services that employees use to perform their daily activities and that are not approved or supported by the IT department. With recent developments forcing many people to work from home, employees are turning to Shadow IT even more. Although these applications can be genuinely valuable and help employees with innovation, collaboration and productivity, they can also open the door to unwanted security and compliance risks. In this article, we look at the challenges presented by Shadow IT, and the methods to manage them, so that the risks do not outweigh the benefits.

The shifting challenges of Shadow IT

As bandwidth and processing power have grown, software companies have invested heavily in cloud-based software and applications. Recent research ([Syma19]) suggests that companies largely underestimate the number of third-party applications being used in their organization – the actual number of apps in use is almost four times higher on average. Some of these applications have been immensely valuable, bringing about digital transformation by speeding up processes, saving costs, and helping people to innovate. Shadow IT can also point to unmet software needs: for example, if employees are signing up for a cloud-based resource management tool, it may show that the company’s existing offerings are not up to the job. However, these applications may bring certain risks and challenges if not managed properly, as outlined below.


  1. Data leaks and data integrity issues

    Data is the main factor to consider when unsanctioned or unknown applications are used to store or process enterprise data. When less secure applications are used, there is a high risk of confidential information falling into the wrong hands. Also, spreading data across too many Shadow IT services fragments the organizational IT portfolio and reduces the value and integrity of that data.
  2. Compliance and regulatory requirements

    Legislation such as the GDPR, or local regulations for data export, has raised the level of scrutiny and massively increased the penalties for data breaches, especially around personal data. Business or privacy-sensitive data may be transferred to or stored in locations with different laws and regulations, possibly resulting in regulatory and non-compliance incidents. There is also a risk of non-compliance with software licensing or contracts if employees agree to the terms and conditions of certain software without understanding the implications or involving the right legal authority.
  3. Assurance and audit

    In an ideal scenario, IT or risk departments could simply run regular audits to identify and either accredit or prohibit specific applications. In practice, this is an impossible task. It is not unusual for large organizations to run thousands of Shadow IT applications, yet the IT and risk departments trying to reduce this number, and to understand the usage and associated risks, can handle only a few hundred applications per year at best.
  4. Ongoing and unknown costs

    Shadow IT can be expensive, too. When businesses don’t know which applications are already in use, they often end up using the wrong services, or overpaying for licenses and subscription costs. For instance, multiple departments could be using unsanctioned applications to perform their day-to-day activities. As the usage of these applications occurs under the radar, the organization cannot take advantage of competitive rates, assess security requirements, or request maintenance and support services directly from the application provider that would benefit them.
  5. Increased administrative burdens

    Why can’t corporate IT departments simply solve the problem by banning the use of these applications? They can, but doing so eliminates any productivity gains that the business may be getting, and probably damages employee engagement in the process. Worse still, employees may look for alternative tools that are not on the prohibited list, but may in fact be even riskier.

Solution: Converting Shadow IT to Business Managed IT

We propose the following way forward – to give business users ownership of Shadow IT risk and involve them in the risk management process, instead of leaving it entirely up to IT or risk departments. Applications and services that are known to an organization and have successfully passed the risk management process, are called Business Managed IT. According to [Gart16], Business Managed IT addresses the needs of both IT and the business in “selecting, acquiring, developing, integrating, deploying and managing information technology assets”.

Research ([Harv19]) states that almost two-thirds of organizations (64%) allow Business Managed IT investment, and one in ten actively encourage it. It also found that organizations that actively encourage Business Managed IT are much more likely to be significantly better than their competitors in a number of areas, including customer experience, time to market for new products (52% more likely), and employee experience (38% more likely). [Forr18] noted that the majority of digital risk management stakeholders are in information security (50%), threat intelligence (26%) or IT (15%), and encouraged engaging the other teams that use the applications when setting the Business Managed IT strategy.

We see many organizations in the Netherlands and the EU taking small steps towards Business Managed IT as a strategy. Companies are increasingly aware of Shadow IT, and some of them are already busy discovering, filtering, registering, and risk-assessing Shadow IT apps. According to [Kuli16], most of these activities are typically performed manually with some help from automation – typically for blacklisting or whitelisting the apps, or for running Shadow IT discovery with Cloud Access Security Brokers (CASBs). The actual Shadow IT registration and risk management processes are usually carried out manually by IT or risk departments using lengthy risk questionnaires. The result is low throughput, with businesses often waiting months or even years before the applications and services they want get the right internal approval.

We believe a future-proof model will be more sustainable when the business becomes the actual owner of Shadow IT apps, including the process of their registration, risk management, and risk mitigation. Risk questionnaires should be simplified to focus on what is really important in identifying actual risk and the required mitigating measures. This way, the business can take on a new risk role without needing to be tech-savvy, and IT and risk departments can focus on cases where their involvement is really required – for example, high-risk apps, or situations where an application is better run centrally by IT rather than owned by the business. For the lower-risk scenarios, business ownership means that apps and services are available without long delays.
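As a sketch of what such a simplified risk questionnaire could look like in practice, the snippet below scores a Shadow IT app from a handful of yes/no answers and routes only high-risk cases to the IT or risk department. The questions, weights, and thresholds are entirely hypothetical; a real assessment would reflect the organization’s own risk appetite and policies.

```python
# A simplified Shadow IT risk questionnaire: a few yes/no questions, each
# with a weight. Questions, weights, and thresholds are hypothetical.
QUESTIONS = {
    "stores_personal_data": 3,      # GDPR / privacy exposure
    "stores_confidential_data": 3,  # business-sensitive data
    "data_outside_eu": 2,           # data residency and export rules
    "no_sso_or_mfa": 2,             # weak access controls
    "free_consumer_service": 1,     # terms accepted ad hoc, no contract
}

def assess(answers):
    """Score an app and decide who owns the follow-up.

    answers maps question id -> bool; unanswered questions count as "no".
    Returns (score, outcome).
    """
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q, False))
    if score >= 6:
        return score, "high risk: escalate to IT/risk department"
    if score >= 3:
        return score, "medium risk: business owns the mitigation plan"
    return score, "low risk: business may use with standard controls"

# Example: a free consumer file-sharing tool holding personal data outside the EU.
score, outcome = assess({
    "stores_personal_data": True,
    "data_outside_eu": True,
    "free_consumer_service": True,
})
print(score, outcome)  # prints: 6 high risk: escalate to IT/risk department
```

The point of the routing rule is that only the high-risk outcome requires IT or risk department involvement; the business handles the rest itself.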

Business Managed IT is a strategy and “mind-set”, and the results can be achieved in multiple ways. We encourage organizations to follow what businesses are already doing in their daily work – digitization, automation, analysis – which in the case of Shadow IT risk management means automating the risk management processes with the help of dedicated software. As shown in the maturity graph in Figure 1, not all companies are at this stage – some are still heavily dependent on manual work to run the required processes.


Figure 1. Maturity of Shadow IT risk management.

Setting the groundwork for Business Managed IT

Business Managed IT is an attractive approach but getting the business involved in IT is a new paradigm and should be introduced with care. Implementation requires cultural change and proper communication. The following five steps can help organizations get started:

  1. Define Shadow IT risk ownership by the business and discuss it at a senior level to ensure their support and buy-in.
  2. Set a policy and target operating model for business ownership of Shadow IT, clearly specifying what such ownership means. How will the business work with IT? When will IT and/or the risk department get involved? What are the escalation chains in case there are any delays or uncertainties in risk management process?
  3. Secure involvement of change and communications departments. Focus on increasing business awareness with regard to the upcoming changes. Involve people who are skilled at organizational change management rather than relying on IT or risk experts.
  4. Tackle the Shadow IT monster one step at a time. First, initiate a pilot. Then, deploy the new model with one – ideally more mature – department or operating unit to learn lessons that can be applied during further rollout.
  5. Monitor and adjust. Work closely with the business during the roll-out period. Questions and feedback from the business are good, as they help improve the approach – silence is a bad indicator.

An organization’s journey

The organization: A global group of energy and petrochemicals companies with 86,000 employees in 70 countries.

The challenge: The organization required a significant improvement in its risk management practices around Shadow IT, driven by the vast number of known Shadow IT applications, the larger number of unknown services, and audit findings around security and privacy of data stored in such services. At the start of the engagement, the organization did not have policies or procedures that outlined how employees should use such applications and services, or how the IT and risk management teams could gain insight into and control over this usage.

The approach:

  1. Shadow IT policies and procedures were created and approved by senior IT and risk stakeholders.
  2. Business ownership of Shadow IT apps was defined.
  3. The responsibilities of IT and risk management departments changed to monitoring only, with their involvement required only for high risk cases.
  4. Change & communication teams were established to enable the change across the organization. Multiple trainings, videos, train-the-trainer sessions and other learning materials were created to educate business users about the new ways of working.
  5. Pilots and a hyper care period with handholding sessions were used to support any questions during the initial rollout.
  6. The organization used KPMG’s SaaS software built on top of Microsoft Azure Cloud to run the newly established process for Shadow IT. The software, connected to the organization’s application database, enabled the business to perform risk assessments of identified Shadow IT services, discover relevant risks, and automate the deployment and monitoring of controls. It also provided integrated risk insights to the IT and risk departments.

The value delivered:

Business users conducted over 4,000 risk assessments of Shadow IT applications in one year by completing a simple questionnaire. These assessments resulted in 1,000 applications being decommissioned (due to unacceptable risk exposure for the company, or because the applications were no longer deemed relevant) and specific controls being deployed based on the risks identified. Business users appreciated the central database of apps and associated risk ratings created as part of this process, which allowed them to look up available apps prior to purchasing anything extra. Businesses also reached out more frequently to the IT and risk management departments with thoughtful questions, indicating their increased awareness and ownership of Shadow IT risks.

Valuable benefits beyond risk management

Effective risk management is even more challenging for large international enterprises in today’s context of digital transformation and evolving regulation. Organizations should assess and apply their risk appetite and, accordingly, allow the business to continue using applications if they are deemed low risk or if there are sufficient mitigating controls in place. When an application poses a high risk, the decision whether to discontinue its usage or to invest in remediation should be made with the involvement of IT or risk management teams.

Business risk ownership and accountability adds an important layer of protection against data breaches and immediately strengthens and facilitates compliance. More importantly, IT becomes an enabler, rather than a department that is viewed as blocking the progress.

To support business ownership of IT and applications, more mature organizations can use automated technologies such as CASBs and the KPMG DRP to automate most of the critical Business Managed IT workflows, such as Shadow IT application discovery, application portfolio management, organization-specific risk assessments, control implementation, and monitoring and reporting.

For organizations that are still at the beginning of their journey to mitigate Shadow IT risk, immediate automation of Business Managed IT workflows might be a step too far. In such cases, it is important to start adopting the mind-set of business ownership of IT risk through improved and simplified risk policies as well as business enablement programs, as this is the very first step towards long-term business enablement and the security and privacy of critical organizational data.


[Forr18] The Forrester New Wave (2018). Digital Risk Protection, Q3 2018, 2.

[Gart16] Gartner (2016). Gartner’s Top 10 Security Predictions. Retrieved from:

[Harv19] Harvey Nash / KPMG CIO Survey (2019). A Changing Perspective. Retrieved from:

[Kuli16] Kulikova, O. (2016). Cloud access security monitoring: to broker or not to broker? Understanding CASBs. Compact 2016/3. Retrieved from:

[Syma19] Symantec (2019). Cloud Security Threat Report. Retrieved from:

Enterprise content management: securing your sensitive data

Good enterprise content management is a must to secure your sensitive data, especially given the astonishing pace at which data volumes keep growing. This growth is driven by the digitization of society and the accompanying new opportunities. The objective is to enable organizations to fully profit from (new) opportunities around data while remaining in control of its use. At the same time, laws and regulations (such as the GDPR in Europe, the CCPA in California and the FIPPA in Ontario) are becoming increasingly strict about what can and cannot be done with data. This is challenging for many organizations due to the messy character of large parts of their data. Gaining control over unstructured data is a tough challenge: it has been built up for years and is often “hidden” in folders on file servers. How can organizations explore the potential of digitization and at the same time comply with data-related laws and regulations?


“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

These are famous lines from the 1996 declaration of independence of cyberspace by the libertarians, headed by John Perry Barlow. It was a time full of optimism about the societal effects of new internet technology, and we had just started to explore the possibilities of this new cyberspace. The libertarians were predicting, or hoping, to build a new information Walhalla and even an independent republic, where governments and corporations had no influence. Almost 25 years later, we could not be further away from that scenario. The largest tech companies (FAANG: Facebook, Apple, Amazon, Netflix and Google) have changed our lives by changing the way we connect with each other and putting information and products within arm’s reach. In turn, they have gained massive power or even near-monopolies, the web itself has turned very commercial, and concerns about the proper use of personal data have become one of the major societal problems.

Historically, technology has always had two faces. On the one hand, new technology offers opportunities for progress and innovation. On the other hand, new risks arise, such as the abuse of personal data. The challenge is to foster the positive while controlling the darker side. The digital transformation is no exception. We have witnessed a multitude of useful and sometimes groundbreaking innovations that have made our lives easier and more comfortable. But it has also become clear that we must find (new) ways to deal with the dark side of digital transformation. Considering these factors, it is hardly surprising that governments have stepped up their efforts to govern what was originally intended to be a sovereign place.

A dilemma

Some of the dynamics of this cyberspace are still valid. One of them: information wants to be free. Not only “free” as in “at no cost”, but also “free” as in an “endless space to move around in”. Professor Edo Roos Lindgreen once drew a simple graph with two axes to illustrate this ([Webw12]) (see Figure 1). One axis depicts the decrease of control over the accessibility of data; the other illustrates the extent to which data is publicly accessible. According to Roos Lindgreen, information follows the second law of thermodynamics: the result is maximum entropy. All information will end in the upper right quadrant of the graph. This is a situation of chaos: the information circulates freely in an uncontrolled space. If this model is valid, the evolution towards maximum entropy is inevitable, and data that arrives in the upper right quadrant can never be moved back. Take a viral video: once it is online it can never be fully taken off the internet. The saying that “you can’t unscramble eggs” says it all.


Figure 1. Accessibility versus control of personal data.

Meanwhile, governments are trying to “unscramble eggs” in their efforts to regulate and improve the governance and security of personal data. In itself, this is understandable as we witness the risks and dangers of the unscrambled eggs nearly every day, such as the abuse of personal data. All in all, this leads to a dilemma where on the one hand information wants to be free and on the other hand laws and regulation aim to install boundaries to this freedom.

At the organizational level

The dilemma between freedom and control is challenging for society as a whole but is also valid for organizations. Many organizations are in the midst of a digital transformation. In this journey, they explore how they can manage and profit from data. This often includes the need for freedom to innovate with data. However, stakeholders expect organizations to process data securely and be transparent about their data processing activities. Laws and regulations, such as GDPR, CPPA and FIPPA, limit freedom accordingly.

It may be tempting to opt for a quite liberal approach when using different organizational data sources, as this helps facilitate data-driven innovation. However, decentralized data processing activities require effective data governance. This governance is not only important to comply with laws and regulations, but also in order to warrant reliable and trustworthy data. Data governance ensures that all parties involved use one version of the truth to base business decisions on. The stakes are high: proper data governance is vital for success in a data-driven society, as being in control over your data means better information to facilitate decisions.

From data to content

One of the solutions to deal with this dilemma is through master data management programs. Many organizations have achieved significant efficiency benefits and increased information quality by implementing master data maintenance processes. Thanks to these programs, master data objects are stored in one location; other systems that use this information communicate with that location, which serves as a single source of truth. Authorization management concerning these master data attributes then only needs to be managed at the source, rather than in multiple locations. In these master data management programs, organizations clearly define which data objects are (strategically) important and implement structural management around them. However, the challenge does not end there: the same principle should be applied to unstructured data as well.

As a result of digitization, data emerges from many new data sources, and a significant part of organizational data is unstructured or semi-structured. Despite technological advancements that support the people who carry out business processes, the majority of these processes still require human interaction, and therefore the creation of content in some form. Examples of such (semi-)unstructured content are invoices, meeting minutes and photographs. Natural language is required to exchange information between business processes and parties; it is what makes these business processes human. Organizations must find ways to properly govern this part of the data pile too, especially when it comes to personally identifiable and other sensitive information, which simply should not be dispersed over a chaotic unstructured data landscape.

Enterprise content management

This is where enterprise content management comes in. Content refers to the data and information inside a container ([Earl17]); examples of such containers are files, documents and websites. Content is of a flexible nature: it can change over time and has its own lifecycle. Enterprise content management makes it easier to manage information by simplifying information retrieval, storage, security and other aspects such as version control. The promise is that it brings more efficiency and better governance over information. Implementing enterprise content management successfully is not a walk in the park. The way content is structured is highly diverse, as it often depends entirely on the way of working of its author or the responsible business process, and there are often few standards regarding the structure of information across business units. As a result, the majority of organizations still struggle with enterprise content management.

The stakes are high in a time of ubiquitous data, where processing information has become a key differentiator: organizations with good information management practices make better decisions and thereby have an advantage over their competitors. This is not because they have all the information available, but because they are able to limit the amount of information to a relevant portion that human brains can deal with. American author Nicholas Carr ([Carr20]) is one of many who argue that too much information might just destroy our decision-making capabilities.

Moreover, privacy and security (and the laws and regulations in these related domains) are two key drivers for better management of organizational content. The larger the volume of content stored, the greater the risk that it contains sensitive information, which could lead to reputational damage if it ends up in the wrong hands. For instance, shared folders often contain data extracts from operational systems, which in turn, often contain personal data. What’s more, a lack of enterprise content management also leads to inefficiencies. Traditional content management systems force artificial structure through folders. Without a strong search functionality, information retrieval is difficult when the exact storage location of specific content is unclear. Without proper data classifications, unstructured data is difficult to find, use, manage and turn into information.

Enterprise content management model

Our view on content management is that it does not start with implementing tools and techniques. A holistic approach is used instead: a broad analysis of its relevance to an organization. To this end, we use an enterprise content management model based on five pillars: content, organization, people, processes and technology. This model is based on international market standards, such as DAMA DMBOK ([Earl17]), CMMI’s DMM ([CMMI14]) and EDRM ([EDRM]), as well as the publications and experiences of experts in the enterprise content management domain ([Mart17]).


Figure 2. The Enterprise Content Management model is built around five pillars.

The model is valid for any data architecture. Even in an extremely traditional organization, with content stored in paper files, the approach will trigger the right questions and lead to a well thought-out solution. Information written in natural language can be digitized, as is the case with photos of letters: OCR enables interpreting and managing the information on these flat documents. The same goes for images, where artificial intelligence helps understand their content. In fact, over the years we have all contributed to this by validating that we are not robots; the millions of users who perform the Captcha test have trained these algorithms ([OMal18]).

Guidelines for solving the dilemma

As described earlier, we have a dilemma at hand: there is tension between freedom of information and the need to govern all this information. The aforementioned model helps to define a holistic approach, and by exploring the five pillars, organizations enable a tailor-made approach that suits their specific characteristics and challenges.

The following general guidelines may be helpful in dealing with the dilemma.

1 Create awareness that content management is more than compliance

In practice, many organizations explore the options of content management in order to deal with privacy and security concerns, triggered by laws and regulations. This is understandable, as the stakes are high and non-compliance issues attract media attention. However, a better recipe is to start with the virtues of content management for the organization itself. In the current business landscape, being data-driven is key to success. This means there are great benefits in developing a controlled vocabulary and curating content about specific topics. Content curation around topics that are important to an organization can stimulate innovation and knowledge management. Once this value is recognized, it will be easier to keep up in terms of compliance.

2 Use the full potential of clever indexing tools

Indexing tooling offers great opportunities to index content stored in a variety of systems, even highly unstructured ones such as file shares, SharePoint sites and OneDrive accounts. Especially when opting for a decentralized approach, indexing tooling offers quick methods to identify personal information throughout the organization. A regular expression is a sequence of characters (e.g. numbers or letters) that defines a search pattern; the technique stems from theoretical computer science and formal language theory. For example, bank account numbers can be identified quickly using a regular expression, and to find all Dutch mobile telephone numbers, one could search for 10-digit strings that start with “06” or for strings that start with “+31”. The possibilities go much further than that, however. The application of entity extraction, for example, allows for the quick identification of people, places and concepts that may be deemed sensitive. The “right to erasure” is difficult to implement when business processes involve high volumes of content, such as customer letters, data extracts in Excel format and emails. Implementing content maintenance processes mitigates this problem: all personal data can be identified quickly, and the correct follow-up action can be taken. Mature organizations automate these processes based on set data retention periods.
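To illustrate, the following is a minimal sketch of such pattern-based identification in Python. The patterns below are simplified assumptions for illustration only; a real deployment would need far more exhaustive rules and validation (for instance, IBAN check digits).

```python
import re

# Hypothetical, simplified patterns -- illustration only.
PATTERNS = {
    # Dutch mobile numbers: "06" or "+31 6" followed by 8 digits.
    "phone_nl": re.compile(r"(?<!\d)(?:\+31\s?6|06)[\s-]?\d{8}(?!\d)"),
    # Dutch IBAN: "NL", 2 check digits, 4-letter bank code, 10 digits.
    "iban_nl": re.compile(r"\bNL\d{2}[A-Z]{4}\d{10}\b"),
}

def find_sensitive(text: str) -> dict:
    """Return all pattern matches found in a piece of indexed content."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}
```

Run across an index of file shares and mailboxes, a scan like this produces the candidate list of content containing personal data, which can then feed the follow-up actions described above.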

3 Opt for privacy by design

A centralized approach to storing personal information offers better options for the governance of information: personal information should not be scattered across network servers. By storing personal information centrally, risks are reduced, as there is only one location where governance rules need to be applied. In other systems, pseudonymization techniques can be used to mask personal information; in practice, many of these applications have no need for information that can be traced back to an individual. This central approach creates flexibility to use information in decentralized applications. This way, access to personal information can be limited to the employees who truly require it, for example the customer service department. For other activities, such as the management of transactions or the analysis of customer behavior, personal information is removed or pseudonymized, making it impossible for the users to trace that information back to an individual.
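As a sketch of what such pseudonymization could look like (the key handling and field names here are hypothetical assumptions, not a production design), a deterministic keyed hash lets decentralized applications still group records per customer without learning who that customer is:

```python
import hashlib
import hmac

# Assumption for illustration: the key is held only by the central store.
SECRET_KEY = b"keep-this-key-in-a-vault"

def pseudonymize(identifier: str) -> str:
    """Deterministic pseudonym: the same input always yields the same token,
    which is not reversible without the key (HMAC-SHA256, truncated here
    only for readability)."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# A decentralized application receives only the masked record.
record = {"customer": "J. Jansen", "basket_value": 120.50}
masked = {**record, "customer": pseudonymize(record["customer"])}
```

Because the token is deterministic, customer behavior can still be analyzed per (pseudonymous) customer, while reidentification requires access to the centrally guarded key.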

Case: GDPR triggers data retention program in bank

As a result of the GDPR, a bank had taken steps regarding data retention, relying heavily on its employees to cleanse sensitive data. New policies were created to determine what data was collected and for how long it was retained. One of the issues was that a single piece of content can contain multiple types of personal data. Take a CV for example: it contains a name, telephone number and address, and sometimes even a date of birth and/or a picture of a person. Because content such as CVs was stored on shared file servers and within email boxes, it was difficult for the data privacy officer to quantify the success of the steps taken.

We helped quantify these efforts by carrying out an analysis to determine how much personal data was left. We analyzed a total of approximately 100,000 emails and 1,000,000 files. 60% of the content we found was redundant, obsolete or trivial (ROT): content that no longer needs to be stored, as it has no business value. A common example is duplicate files, e.g. multiple copies of the same manual stored in different locations. The oldest file in the analyzed dataset dated back to 1997. We found meeting notes from 17 years ago containing client information, and even employee notes calling one customer “very sweet” and another “very annoying”. The list goes on: we identified 6,000 social security numbers, 200 CVs and even 500 files containing personal medical information. We created lists of files to be cleansed; these were validated by the business and then automatically deleted by the IT department using an automated script. Within two weeks of work, we had reduced the remaining personal data by 50% and identified next steps to get that number down to 0%.

Case: Professional services migrates to the cloud

A popular topic is phasing out file shares and moving to the cloud. A professional services company had this same ambition; their question, however, was how to approach this migration. They faced several challenges. Authorization management on the file shares was not effective, due to the use of many different user groups over time. Different user groups as well as individual users had obtained access to specific shares and folders, making it very difficult to determine the owner of specific content. As a result, it was not possible to ask the right owners what data should or should not be migrated to the cloud. What’s more, these authorizations could not be copied to the new environment, as they were no longer up to date; an entirely new authorization concept and structure was required. We helped this client carry out the migration by utilizing technology to simplify the process. We classified existing data around cases that made sense to the organization’s operations: projects, clients and departments. For each project, client and department, a new environment was created, and the relevant files were migrated to that environment. Files with sensitive information were automatically classified using regular expressions for personal data. The sensitive information within a file was automatically recognized and redacted upon migration: a new version of the document was created in which the sensitive information was blacked out, no longer readable nor retrievable by the end user. The original was stored in a secure location for a predefined period of time, to make sure no valuable information would be lost.
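The redaction step in such a migration can be sketched along these lines. The pattern and replacement below are hypothetical simplifications, with a naive nine-digit pattern standing in for a proper social security number rule:

```python
import re

# Illustrative pattern only: a real rule set would validate check digits
# and cover many more categories of personal data.
NINE_DIGITS = re.compile(r"\b\d{9}\b")

def redact(text: str, pattern: re.Pattern = NINE_DIGITS) -> str:
    """Return the migrated copy with sensitive matches blacked out; the
    unredacted original is archived separately for a retention period."""
    return pattern.sub("XXXXXXXXX", text)
```

Running this during migration yields a cloud copy safe for broad access, while the archived original preserves the information for the predefined retention window.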


In the current era of ubiquitous data, organizations face a new dilemma. On the one hand, information wants to be free to explore the (new) opportunities of this era. On the other hand, the (messy) information within an organization needs to be controlled. Laws and regulations have raised the bar in recent years. It is a complex challenge. The good news is that there are a number of promising techniques and concepts that help organizations deal with this complexity. Organizations that start with defining the benefits of content management – having a controlled vocabulary, better insights for decisions, improved knowledge management – are best prepared to deal with this dilemma. Our model offers them a guiding hand.


[Carr20] Carr, N. (2020). The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton.

[CMMI14] CMMI Institute (2014). Data Management Maturity (DMM) Model (1.0 ed.).

[Earl17] Earley, S. & Henderson, D. (2017). DAMA-DMBOK: Data Management Body of Knowledge. Bradley Beach, NJ: Technics Publications.

[EDRM] EDRM Model. (n.d.). Retrieved on December 15, 2019, from:

[Eijk18] Eijken, T.A., Molenaar, C., Dashorst, I.M., & Özer, P. (2018). eDiscovery Spierballentest. Compact 2018/2. Retrieved from:

[KPMG16] KPMG (2016, February). Acht basis soft controls. Retrieved on January 1, 2020, from:

[OMal18] O’Malley, J. (2018, January 12). Captcha if you can: how you’ve been training AI for years without realising it. Retrieved on December 12, 2019, from:

[Mart17] Martijn, N.L. & Tegelaar, J.A.C. (2017). It’s nothing personal, or is it? Compact 2017/1. Retrieved from:

[Webw12] Webwereld Redactie (2012, March 26). Eén grote vrijwillige privacyschending (opinie). Retrieved on December 12, 2019, from:

Privacy pitfalls and challenges in assessing complex data breach incidents

In the past few years, we have seen how the introduced data breach notification requirements have affected organizations dealing with large-scale data breaches. Looking at some practical example cases, we have seen that organizations struggle to gather the right information about the nature and scale of a breach, especially about the specific individuals affected who may need to be notified. Most challenges arise from a data and a legal perspective. From a data perspective, organizations struggle to get a complete and accurate overview of all data and individuals affected by the breach. From a legal perspective, organizations need to assess to what degree the breached data can pose a high risk to the affected individuals; a complex risk assessment needs to be performed and documented. This article outlines these challenges in detail and concludes with recommendations on how organizations can properly prepare themselves.


In May 2018, the General Data Protection Regulation (GDPR) came into effect, forcing organizations to comply with a set of legal requirements regarding data breaches. Some countries, such as the Netherlands, had already implemented a similar regulation prior to this.1

The likelihood of a data breach, and of having to go through these data breach proceedings, seems to differ per country. In a recent report from DLA Piper, we read that about 25% (40k) of all reported data breaches within the European Union came from the Netherlands ([DLA20]). With only about 3.3% of the total EU population, this seems out of proportion in relation to the breaches reported in other EU member states. The most sensible explanation is that the Dutch data protection authority is more active and that Dutch organizations have already had a reporting obligation since 2016. Germany is the runner-up with 37k reported data breaches; the UK, Ireland and Finland are in 3rd, 4th and 5th place respectively.

When we zoom in on the data breaches reported in the Netherlands, we see that most (around 73%) concern relatively straightforward cases: personal information sent to the wrong recipient, either via e-mail or via physical post, or the loss of a laptop or other data carrier (5%). The more complex data breach cases, however, are the ones that showcase that the legal requirements set forth in Articles 33 and 34 of the GDPR are a big challenge. These breaches relate to hacking, malware, phishing or other data theft (4%), or to personal data that was accidentally published or exposed through a leak in a system allowing unauthorized third-party access (7%).

The legal playing field

There are two main articles in the GDPR that cover the data breach notification: Articles 33 and 34. According to GDPR Article 33, in case of a personal data breach the data controller should report the incident to the supervisory authority within 72 hours, describing the nature of the breach, assessing the impact of the breach and describing measures taken.

Article 34 states that in case of a high risk to the rights and freedoms of natural persons, the controller will communicate the personal data breach to the data subject without undue delay. The information to the data subject should clearly contain the nature of the breach and at least information about the likely consequences of the data breach and the measures taken or proposed to mitigate possible negative effects. Failure to comply with these regulatory requirements may cause high monetary and reputational damage to the organization.

The next section will further dive into the legal requirements in relation to the more complex data breach cases. These more complex cases will show the financial burden that organizations need to undergo by putting in hours of investigation, legal analyses and decision making in order to comply with all legal requirements. The fact that the supervisory authority is closely monitoring these high-profile cases makes it more important that these requirements are met.

Challenges in assessing legal requirements

The introduction laid out that about 11% of reported data breaches (at least in the Netherlands) concern more complex cases. When a data breach has a malicious source, it quickly becomes evident that it will fall into this category. The same goes for cases involving a vulnerability or a leak in a database or server holding consumer data. These kinds of cases have taught us over the last few years that the legal requirements of the GDPR can become a heavy burden if an organization is ill-prepared. The next section will, per requirement, outline the challenges an organization may face and the impact these will have on resources, timelines and, ultimately, financial or reputational damage.

Reporting to the authorities

According to Article 33 of the GDPR, the data controller should notify the supervisory authority about the data breach within 72 hours. The notification should include the nature of the breach, the categories and approximate number of data subjects involved, and the categories and approximate number of personal data records involved. Next to that, the likely consequences for the individuals should be communicated and the measures to mitigate the negative consequences.

Nature of the breach

The nature of the data breach may in most cases be very clear. In cases where data has been published accidentally or a leak has (potentially) allowed unauthorized third parties to access the data, it is fairly straightforward to explain the nature of the breach. In most cases of malicious intent, the nature is also quite clear when communicated in generic terms (e.g. malware attack, hacking, theft of a physical hard drive). In some cases, however, where personal data has been compromised and the organization only learns of this through external sources, the nature of the breach may not be clear, and a thorough incident response investigation will be required to determine it.

Scale of the breach

More challenging than determining the nature of the breach is determining its scale. The GDPR fortunately asks data controllers to come up with approximate numbers only. Even this may be hard to estimate within 72 hours of discovering the data breach. We have seen cases where multiple systems were compromised during a malicious attack; these systems stored consumer data and often contained records of the same individual. The overlap of data subjects and categories of personal data records makes it hard to determine a set of unique data subjects and data records to report to the authorities, especially where master data management has not been up to par with industry best practices. Organizations with a large consumer base, such as banks, pension funds or insurance companies, will face this challenge. We will dive into this further in “Reporting to individual data subjects”.

Likelihood of consequences and mitigating measures

The likelihood of consequences is easy to identify on a generic level. When facing a breach with more generic data such as names, e-mail addresses, phone numbers, etc., the consequences can be found in the area of phishing and scamming activities. When more data is added, more threats and adverse consequences can become apparent, such as spear phishing, extortion, or targeted theft. When assessing the extent to which individual data subjects need to be notified, the analysis of these adverse consequences is critical in complying with the regulatory requirements of a data breach. The result of the analysis and the consequential decision making should be carefully documented. The challenges that come along with this assessment will also be laid out in “Reporting to individual data subjects”.

Reporting the mitigating measures may be difficult to communicate to the authority within 72 hours of discovering the breach, since it is probably still being investigated. This also applies of course to all the other reporting requirements as discussed above. In order to meet data controllers halfway, the supervisory authority can allow data controllers to send a provisional or initial notification which can be adjusted or revoked at a later stage.2 When submitting a provisional data breach notification with regard to a large scale data breach, it is advisable to seek contact with the data protection authority and keep close communication with them with regards to the data breach. A data breach in itself is not (necessarily) a violation of the GDPR. Not handling a data breach in line with the GDPR requirements or without following up instructions from the data protection authority, however, is.

Reporting to individual data subjects

To get a good understanding of the realistic challenges an organization faces when it comes to the reporting requirements of the GDPR, we must picture large-scale data breaches, mostly from a malicious external source (such as a hack, data theft or malware-related incident). The data breaches at British Airways3, T-Mobile4, Equifax5 and Marriott Starwood Hotels6 are key examples where these reporting requirements probably took a lot of effort. These cases have in common that a select part of the client data was compromised and that different categories of personal information were exposed per individual. Assessing such a breach and determining the impact for each individual can be a very extensive task, especially when these cases comprise hundreds of thousands or even millions of individuals.

The key question in determining whether or not an individual needs to be notified, is whether or not there is a high risk to the rights and freedoms of the individual. When this is the case, the data controller will communicate the data breach directly to the individual, including the potential adverse consequences and what precautionary measures can be taken by the individual. The European Data Protection Board has provided guidelines about the criteria that should be considered when assessing the likelihood of a high-risk impact for the individual (see Table 1, [WP2916]).


Table 1. Criteria to be considered when assessing the likelihood of a high-risk impact for the individual.

When assessing whether an individual is exposed to a high-risk adverse event regarding their rights and freedoms, a data controller needs to look at each single individual and determine whether they should be notified, as well as the content of the notification. From a data perspective, this can be an enormous challenge, especially when data management practices are not up to par. Determining the level of risk for each data point can also take significant effort, especially when the data has aged. These challenges are addressed in the next section.

Challenges from a data perspective

Looking into some specific data breach cases of the last two years, we have seen many challenges from a data perspective, a few examples of which are given in this section. Once the data controller knows which systems are compromised, the next step is to determine which personal data was stored in these information systems and what this data was about. When the maturity of data management practices within the compromised organization is poor, assessing the data will probably be the most challenging task in adhering to the GDPR data breach reporting requirements.

Let’s focus on the scenario in which multiple systems containing consumer data have been compromised and data quality and data management within the organization are not of the highest standard. The following examples illustrate the master data challenges in determining, for each impacted individual, whether they should be notified.

As an example, we will look into the fictional individual “Adam Smith”, who is a customer of an airline company. His personal information is in the customer master database, the booking database, the payment database and the event database, where troubleshooting tickets are stored. In these four systems, his personal data shows up as shown in Figure 1 (these are exaggerated examples).


Figure 1. Data challenge examples.

When we want to identify Adam Smith and check which of his personal data has been compromised during the breach, we see different data in different systems. Even data from the same system may indicate different information.

Challenge 1: Aggregate all the data of the same individual “Adam Smith” into a single overview of all his personal data

Since the individual “Adam Smith” is present in four different systems and is even found twice in the master database, we need to aggregate the data to understand which personal data is stored across the different systems and what risks he may be exposed to. Because Adam is registered differently across systems and each system contains different categories of data, there is no single unique identifier with which to identify one single Adam Smith. Ideally, each information system would reference the same unique customer number. In practice, we have seen that this is often not the case, which necessitates complex data analytics to create a full picture of every single individual and their corresponding personal data records.
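In practice, this aggregation amounts to a record linkage exercise. The sketch below is a minimal illustration, with entirely hypothetical records and field names, of how extracts from different systems could be clustered by matching on a normalized e-mail address or on fuzzy name similarity; real deduplication would need far more attributes and a validated matching rule.

```python
from difflib import SequenceMatcher

# Hypothetical extracts of the same customer from four systems;
# field names and values are illustrative only.
records = [
    {"system": "master",  "name": "Adam Smith",  "email": "a.smith@example.com"},
    {"system": "master",  "name": "A. Smith",    "email": "a.smith@example.com"},
    {"system": "booking", "name": "Smith, Adam", "email": "A.Smith@Example.com"},
    {"system": "payment", "name": "Adam Smit",   "email": None},
]

def normalize(name):
    """Lower-case the name and reorder 'Last, First' into 'First Last'."""
    name = name.lower().strip()
    if "," in name:
        last, first = [p.strip() for p in name.split(",", 1)]
        name = f"{first} {last}"
    return name

def same_person(a, b, threshold=0.8):
    """Match on an identical normalized e-mail, or on fuzzy name similarity."""
    if a["email"] and b["email"] and a["email"].lower() == b["email"].lower():
        return True
    ratio = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    return ratio >= threshold

# Greedily cluster records that appear to refer to the same individual.
clusters = []
for rec in records:
    for cluster in clusters:
        if any(same_person(rec, member) for member in cluster):
            cluster.append(rec)
            break
    else:
        clusters.append([rec])

print(len(clusters))  # prints 1: all four records collapse into one individual
```

The choice of similarity threshold is itself an assumption; set it too low and distinct customers merge, too high and the same person fragments into several "individuals" to notify.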

Challenge 2: Which “Adam Smith” data is the most recent data?

We may need to extract additional data from the systems to determine the registration date or last mutation date of each record. Amending the source data in this way to determine the relevant records adds a further layer of complexity to identifying the unique individuals who should be notified under the GDPR.

Challenge 3: What is the age of the data records that are being shown?

A lot of companies struggle with the implementation of proper data retention procedures and controls. As a consequence, a lot of old data is still being stored in the information systems. In order to determine the relevance of this data, one needs a timestamp of when the data entered the system or when it was last modified. Some data fields lose their relevance over time. For example, someone’s home address, telephone number or credit card may change in the course of 10 years. The same applies to someone’s license plate or IP address, which may already be irrelevant within 3 years. Someone’s social security number or medical history, however, rarely if ever changes.
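This reasoning can be captured as a simple relevance check per data category. The horizons below are assumed values purely for illustration; the actual horizons should come from your own risk analysis.

```python
from datetime import date

# Illustrative relevance horizons in years per data category;
# None means the category never loses relevance. Assumed values.
RELEVANCE_HORIZON = {
    "home_address":    10,
    "phone_number":    10,
    "credit_card":     10,
    "license_plate":    3,
    "ip_address":       3,
    "social_security": None,
    "medical_record":  None,
}

def still_relevant(category, last_modified, today=None):
    """A record is relevant if its category never expires, or if it is
    younger than the category's relevance horizon."""
    today = today or date.today()
    horizon = RELEVANCE_HORIZON[category]
    if horizon is None:
        return True
    age_years = (today - last_modified).days / 365.25
    return age_years < horizon

print(still_relevant("ip_address", date(2015, 1, 1), today=date(2021, 1, 1)))       # False
print(still_relevant("social_security", date(2005, 1, 1), today=date(2021, 1, 1)))  # True
```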

Challenges from a legal perspective

Once it has been identified which categories of data have been compromised, it is best to create an overview of all the different profiles, tie a risk classification to each profile, and determine whether or not these individuals will be notified personally (and, if so, by what medium). Such a matrix maps each combination of data categories and data age to a risk level and a notification decision.

To fill in the risk profiles and determine whether or not each individual should be notified, the following activities can be performed:

  • Assess the value of the data (considering the nature of the breach)
  • Assess the potential risks for the individuals
  • Determine the impact of the age of the data (also, ‘actuality’, ‘accuracy’, or ‘timeliness’ of data)
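The resulting matrix can be kept as a simple data structure that drives the notification decision. The profiles, risk levels and the mapping to the Article 34 ‘high risk’ threshold below are assumptions for illustration only, not legal advice:

```python
# Illustrative notification matrix: each profile combines the categories
# of breached data with the age of the records. Risk levels are assumed.
PROFILES = [
    {"profile": "name + e-mail only",     "data_age_years": 1,  "risk": "low"},
    {"profile": "name + credit card",     "data_age_years": 2,  "risk": "high"},
    {"profile": "name + medical records", "data_age_years": 12, "risk": "high"},
    {"profile": "name + license plate",   "data_age_years": 8,  "risk": "low"},
]

def must_notify(profile):
    """Article 34 GDPR requires notifying individuals when the breach is
    likely to result in a high risk to their rights and freedoms."""
    return profile["risk"] == "high"

for p in PROFILES:
    decision = "notify individually" if must_notify(p) else "no individual notification"
    print(f"{p['profile']:<24} -> {decision}")
```

Keeping the matrix explicit like this also produces the documentation trail needed to substantiate, per profile, why the threshold was or was not met.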
Assess the value of the data

The value of the breached data can play a pivotal role in assessing potential threats. For the purposes of this whitepaper, the value of data is defined as the profit that people with malicious intent could make by selling the information they have gained from your systems. Looking into the black market or “dark web” is a good starting point to assess this value. The value of personal information on the black market depends on the type of information and the combination of data available on the same individual. After doing some initial research, we see that the price of personal data varies quite a lot. Some basic information about individuals may yield a few dollars per record, but adding bank account or credit card data will increase the value significantly. In the Netherlands, too, we have seen cases where personal data combined with license plate information proved quite valuable. Combinations of personal and health information are worth two or three times as much as financial information alone, as there are many more opportunities for fraud or for blackmailing wealthy customers. For some example price ranges, please refer to Table 2.


Table 2. Market worth of privacy data.

Assess the potential risks for individuals

The second step in the assessment is to identify the potential risks for each individual. The potential risks of the breached information can be categorized under (at least) three threat scenarios: Identity Theft, Scamming and Leaking/Blackmailing. We will provide examples for each threat scenario below.

Identity theft

Customer information can be used to impersonate a customer or employee.

  • Acquiring funds or goods. Identity theft can be a tool to commit fraud to acquire funds or goods. The severity and impact for the individual are high, because customers can suffer financial losses unless they can prove they are the victim of identity theft. An attacker could, for example, order subscriptions or goods online based on the personal information of the victim.
  • Framing for (illegal) activities. Identity theft can also be used to ensure other illegal activities cannot be traced back to the person who committed them. An attacker could scam people on online platforms, such as “Marktplaats” (an online Dutch marketplace and subsidiary of eBay), while impersonating a victim of the data breach. The stolen personal data is used to convince the person being scammed of the scammer’s legitimacy. As a result, the victims of the identity theft may be harassed by the victims of the scam ([Appe18]).
  • Acquiring more personal information. An attacker can also contact organizations and impersonate a customer in order to obtain additional personal information about the customer. The attacker can directly request insight into the personal information kept by the organization based on the GDPR, or ask questions to deduce personal information that the organization has of the victim. The information from the breach is used by the attacker to initially identify themselves as a customer. Obtaining additional personal information is not the end goal, but a means of achieving another goal in one of the three categories. An attacker could for example attempt to obtain the document number of an ID card or driver’s license, which can then be used to create a fake digital copy of such a document. This can then be used in other identity theft schemes that require a copy of such a document, such as renting property ([Sama16]).

Scamming

The stolen information can be used in different ways to scam the customer whose data has been stolen.

  • Generic scams. General contact information can be used to send spam, perform phishing attempts and attempt other generic scams. The impact of such generic scams on the victims depends on the success rate of the scams.
  • Tailored scams. Personal information, such as age and medical information, can be used to perform more tailored scams or target weaker groups. For example, older people generally have less digital experience, making them an easier target and chronically ill people are generally more willing to try new things to improve their health. Again, these techniques can be used to obtain money or credentials from the victims.
  • “Spear” scamming. More personal information, such as a BSN (the Dutch citizen service number) and medical information, can be used to attempt to convince the customer that the attacker is from an organization where the victim is registered or from an authority such as the police. The attacker achieves this by providing the victim with information about them which should generally only be known to such organizations. Providing personal information about the victim increases the credibility of e-mails, letters and other interaction with the victim. Social engineering techniques can, for example, be used to trick people into transferring money or providing login credentials for online accounts.
Leaking or blackmailing

The stolen information can be leaked, or the victim can be blackmailed for money or other gains.

  • Sensitive information. When sensitive personal information is available, such as medical information of high-profile individuals like celebrities and politicians, the stolen information can be leaked, or the individual can be blackmailed with the threat of leaking it.
  • Threatened identity theft. Victims can also be blackmailed with the threat of identity theft, which would have a high impact on them. Leaking information can result in reputational damage for the victim, whereas blackmail can result in either financial or reputational damage.
Determining the impact of the age of the data

Regulators, legal cases and current black market prices unfortunately do not tell us anything about the relevance of the age of the data records. It is important to assess to what extent leaked personal data that is relatively old can still impact the individual, and to what extent a notification can help them take steps to protect themselves from the effects of the breach. Even when leaked personal data is relatively old, the risk remains that the breach may lead to physical, material or non-material damage for these individuals. This is especially applicable to cases where sensitive personal data is leaked, such as health data. The question that needs to be asked with regard to the impact of the breach for relatively old personal data is: could the breach still result in identity theft or fraud, physical harm, psychological distress, humiliation or damage to the reputation of the individual? ([WP2916])

To help make this assessment, some general statistics can provide some guidance (see Table 3).


Table 3. Statistics on data age.

Document decisions and communication

When the assessment concerning the risks for the individuals subject to the data breach has been completed, a communication matrix can be created to determine the risk level for each case (personal data types and data age) and whether this risk level meets the threshold of ‘high risk’ as stated in Article 34 of the GDPR. It is very important to document and substantiate the assigned risk level and why it is, according to your analysis, below or above the threshold of Article 34 of the GDPR. This will be your core rationale for notifying an individual or not.

When the data is prepared and the legal analysis and risk analysis have been completed, a communication scheme can be set up. There are different methods for reaching out to individual data subjects. This can be done by physical mail, e-mail, SMS or even by telephone. Depending on the available contact information and the efficiency, a decision can be made.

When sending out large-scale communications to individual data subjects, one can expect to receive some sort of response from those individuals. They may have questions about the data loss, they may want to exercise their privacy rights, or they may want to have their data deleted. It is highly recommended to anticipate these scenarios by setting up a call center to answer questions and gather subject requests, setting up a dedicated e-mail address for subject requests and complaints, and, more importantly, reserving resources to follow up on an expected peak of access and deletion requests under the GDPR.

Conclusion and how to prepare

When a more complex and/or large-scale data breach occurs, an organization is under heavy stress and pressure, regardless of the legal requirements set forth by the GDPR. Acknowledging this helps organizations understand why it is critical to thoroughly assess their internal procedures, data management maturity and incident response capabilities. Only then will they have a decent understanding of the degree to which these challenges can be resolved effectively, efficiently and in a timely manner. Looking at the root causes of the delays and challenges in adhering to the legal requirements that come with a data breach, it is recommended to assess how prepared you are on the following topics:

  • (Master) Data Management: What is the quality of your master data, and is your organization able to create insight at the level of the individual (rather than at a product or process level) into which customer information is stored?
  • Data Retention: Which records of your customers are you keeping, and how are your data retention policies carried out in practice? Do you have insight into the age of your data and whether or not you should still have this data about your customers?
  • Data Minimization: Which records are you keeping of your customers? Are there additional records being kept (for example in open text fields or document upload features) that, according to your policies, should not be stored and retained?
  • Do you have proper contact details for your customers? Are you able to contact them in an efficient manner, and is this contact information up to date?
  • Do you have a data breach procedure? Are you testing or evaluating this procedure, and is it robust enough to handle more complex data breach incidents? This may include a crisis management plan and follow-up communication plans.
  • Is your data encrypted at rest and in transit? Are you using encryption techniques that are robust enough to prevent unauthorized access to (potentially) leaked or stolen data?
  • Do you have insight into which data you are processing for third parties, and can you isolate this data from the data for which you are the data controller? What are the liabilities in the data processing agreement between you and the third party whose data you are processing?
  • Can you offer a credit monitoring service to victims of a data breach, to monitor whether or not identity theft has taken place?
  • Do you have cyber insurance to cover incidents like these?

The above topics are certainly not an exhaustive set of steps that need to be taken, but merely a guiding set of questions you might want to ask yourself when preparing for a data breach.

It can be concluded that no data breach is the same and that every case has its unique characteristics; but when handling large sets of data under the same legal requirements, the challenges will be of a similar nature and can be properly prepared for.


  1. Already pre-GDPR, the Netherlands had added Articles to the Personal Data Protection Act regarding the reporting of data breaches to the authority: as per January 1, 2012 for telecom and internet service providers, and as per January 1, 2016 for all organizations processing personal data.
  2. The Dutch Data Protection Authority allows data controllers to submit an initial data breach notification which can be revised afterwards.
  3. British Airways was fined GBP 183 million, because credit card information, names, e-mail addresses were stolen by hackers, who diverted users of the British Airways website to a fraudulent website to gather personal information of the data subjects.
  4. In March of 2020, the e-mail vendor of T-Mobile was hacked, giving unauthorized access to e-mail data and therefore personal information of T-Mobile customers. In 2018, unauthorized users also hacked into the systems of T-Mobile to steal personal data.
  5. Equifax systems were compromised through a hack in the consumer web portal in 2017. Personal data of over one hundred million people were stolen. Personal data containing names, addresses, date of birth and social security numbers.
  6. In 2018 and again in 2020, Marriott reported that its reservation system had been compromised. Passport and credit card numbers of 500 million and 5 million customers, respectively, were stolen.


[Appe18] Appels, D. (2018, August 10). Gerard zou zwembaden en loungesets verkopen, maar wist van niks. De Gelderlander.

[Armo18] Armor (2018). The Black Market Report: A look inside the Dark Web.

[CyRe18] Cynerio Research (2018). A deeper dive into healthcare hacking and medical record fraud.

[DHHS19] Department of Health & Human Services USA (HHS) (2019). HC3 Intelligence Briefing Update Dark Web PHI Marketplace. HHS Cyber Security Program.

[DLA20] DLA Piper (2020). GDPR Data Breach Survey 2020. Retrieved from:

[Hofm19] Hofmans, T. (2019, July 23). Naw-gegevens uit RDW-database worden te koop aangeboden op internet. Retrieved from:

[Hume14] Humer, C. & Finkle, J. (2014). Your medical record is worth more to hackers than your credit card. Retrieved from:

[Sama16] Samani, R. (2016). Health Warning: Cyberattacks are targeting the health care industry. McAfee Labs.

[Secu16] Secureworks (2016). 2016 Underground Hacker Marketplace Report.

[Stac17] Stack, B. (2017). Here’s How Much Your Personal Information Is Selling for on the Dark Web. Experian Blog.

[TrMi15] Trend Micro (2015). A Global Black Market for Stolen Personal Data.

[WP2916] Working Party 29 (2016). Guidelines on Personal data breach notification under Regulation 2016/679. European Commission.