
A smart contract taxonomy

This study posits four distinct variations of smart contract technology and proposes a taxonomy to organize and categorize these types. We will describe three practical applications of this technology, showing how these examples illustrate the categories outlined in the proposed taxonomy. This taxonomy can serve as the foundation for a risk analysis framework.


This contribution focuses on smart contracts and explores one central question: which types of smart contracts must be distinguished? While the views presented here are based on academic research done in a legal context ([Vers23]), the definition of ‘smart contracts’ that this contribution maintains is purely technical: smart contracts are immutable computer programs that run deterministically in the context of a blockchain platform (see [Anto19]). The legal aspects of such technology are nonetheless relevant, as smart contracts might be used in a manner that creates a considerable legal impact. Proposed practical applications of this technology concern transactions, transfers, or administrations of rights, interests, or entitlements that users rely on. To create an environment in which such reliance is justified and protected, potential users of this technology ought to evaluate whether blockchain and smart contract technology can indeed produce the legal effect essential for their specific business case. Even for business cases in which the technology does not perform a legal function, there might still be a legal risk. If the technology is used by an organization, it replaces software applications that might fulfill the same function but operate in a fundamentally different manner. This could, as is often touted, be cheaper, faster, or more reliable, but it might also expose the organization to novel legal risks. An understanding of how blockchain and smart contract technology functions, how it is used in a specific organization, how it differs from the more traditional solutions it replaces, and how it interacts with the relevant organizational context will help mitigate such future risks. A framework that outlines the effects, impact, and risks of this technology provides the necessary guidance: a smart contract taxonomy could form the basis of such a framework.

One important preliminary observation must be made: smart contracts are, despite their rather unfortunate name, not legal concepts. They are technological concepts. Therefore, any analysis of such concepts must, at the very least, pay due attention to their technological underpinnings and practical applications. Considering the above, this contribution will take four steps. First, a general overview of blockchains and smart contracts will be given. Secondly, the different types of smart contracts will be outlined. In this section, we will pay attention to types of smart contracts that might enjoy legal relevance. This is pivotal for those wishing to use this technology in a context where transactions are made in a manner that is enforceable and provides legal certainty for themselves, their partners, or their clients. Subsequently, in the third part, the practical impact of this taxonomy will be illustrated. This illustration will provide insights into the extent to which this technology is sufficiently mature and provides sufficient added value for organizations. Lastly, in the final section, we will present evolutions and applications of this technology in the context of which this taxonomy might be used. The overarching purpose of this contribution is to provide an overview of types and uses of smart contracts and to provide guidance on how a taxonomy based on those types could be used by those considering this technology.

Background and technology

Smart contract technology is rooted in a rather radical context. The initial proposal for smart contracts was published in Extropy, a journal that describes itself as a ‘Journal for Transhumanist Thought’ ([Szab96]). The decision to publish in this journal suggests a particular ideology: transhumanism. Central values of this ideology are ‘boundless expansion, self-transformation, dynamic optimism, intelligent technology, and spontaneous order’ ([More93]). These values suggest that the underlying ideology is effectively a rather extreme variation of techno-liberalism. The principle of ‘spontaneous order’ in particular makes this clear. Some have described this as ‘[an idea] distilled from the work of Friedrich Hayek and Ayn Rand, that an anarchistic market creates free and dynamic order whilst the state and its life-stealing authoritarianism is entropic’ ([Thwe20]). Such concepts were popular in the community that laid the groundwork for the technology in focus here. Known also as ‘crypto-anarchists’ or ‘cypherpunks’, this community aimed to develop technology that would enable economic and social conduct in a privacy-conscious manner and outside the reach of governmental authorities ([Ande22]). The efforts of this community have played a pivotal role in the technological developments that have ultimately culminated in blockchain-based smart contract platforms. As a result, the principles adhered to by this community are ingrained in the technology to this very day.

The extent to which this is the case becomes clear when blockchain-based smart contract platforms are compared to the more classic technological solutions that might be supplanted or supplemented by this technology. Such technology might include, for example, online marketplaces, supply chain management tools, and payment solutions (see [Thol19] and [Reve19]). Blockchain and smart contract technology distinguishes itself from these classic solutions through five key aspects: the first three are a result of blockchain technology, whilst the last two stem from the smart contract capability that some platforms offer.

First, blockchain platforms are, in principle and up to a certain extent, immutable. This means that no single party or group of parties can alter the state of information on the platform. This immutability applies at both the transaction and the recordation level. The former is a result of the public-key cryptography that is foundational to the platform, whilst the latter results from the way distributed consensus regarding the state of information is reached among the parties on the platform ([Anto17]). Secondly, the platform is transparent. A certain degree of transparency is necessary because the state of information on the platform is maintained by the parties collectively. Rather than relying on a single centralized party charged with maintaining the state of information, the parties do so collectively. To perform the tasks necessary for this, certain information contained within the transactions, and certain information regarding the transactions, needs to be available to the parties. A certain degree of transparency is therefore inherent to the system. This transparency, however, is not absolute. These platforms are built on public-key cryptography, which means that parties operate on these platforms using their public keys. The public key therefore functions as a pseudonym. Examining the transparent platform can yield a wealth of information regarding transactions, including details such as the sender, recipient, value, and time. However, the cryptographic foundations of the platform do shield the identity of the natural persons behind the public key. The third key aspect is therefore pseudonymity.
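To make recordation-level immutability concrete, consider a minimal sketch of a hash-chained ledger. This is purely illustrative Python, not how any actual platform is implemented (real platforms combine such hash links with digital signatures and a distributed consensus protocol), but it shows why a retroactive edit to an earlier record is immediately detectable:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    """Each block commits to the hash of its predecessor."""
    return {"prev_hash": prev_hash, "transactions": transactions}

def verify_chain(chain: list) -> bool:
    """Recomputing each link exposes any retroactive modification."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("0" * 64, [{"from": "alice", "to": "bob", "value": 5}])
chain = [genesis, make_block(block_hash(genesis), [{"from": "bob", "to": "carol", "value": 2}])]

assert verify_chain(chain)                   # untampered chain verifies
chain[0]["transactions"][0]["value"] = 500   # retroactive edit...
assert not verify_chain(chain)               # ...breaks every later link
```

Because every block commits to the hash of the previous one, changing any historical record would require rewriting all subsequent blocks, which the consensus mechanism among the parties prevents in practice.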

Some blockchain platforms provide features that go beyond merely maintaining a record of past transactions. Such platforms provide the option for persons to program on the platform. If the programming that such a platform enables is sufficiently flexible and allows for sufficient complexity, it becomes possible to create entire software applications on that platform. Compare, for example, the Bitcoin blockchain with the Ethereum blockchain: where the Bitcoin blockchain is designed to transact with a cryptocurrency and, in light of this purpose, enjoys very limited programming capabilities on the platform itself, the Ethereum platform is designed from the ground up to enable the creation of decentralized applications. The Ethereum platform therefore incorporates a Turing-complete programming language that enables the creation of full software applications ([Bute13]). The term ‘smart contracts’ precisely denotes these software applications. This illustrates why smart contracts are technical concepts and not legal concepts (see on technology also [Weer19]).


Figure 1. Technology overview.

Smart contracts are, in other words, code that exists on a blockchain platform: if the platform allows for sufficient complexity and flexibility, it becomes possible to program that smart contract code into software applications, also referred to as smart contracts (see Figure 1). Smart contracts are therefore pieces of software rather than legal agreements. As a result of their software-character, the conditions contained within their code are executed automatically and independently of any human action. Moreover, smart contracts exist on the same platform as the assets that are being transacted with, and the records being modified through, the smart contract. This means that the smart contract can interact directly and immediately with those assets or records. No (third) party is required to give effect to the predefined consequences stipulated in the smart contract. Consequences as stipulated in the smart contract are, in other words, automatically enforced when the conditions are fulfilled. Hence, automatic execution and automatic enforcement are the final two characteristics introduced by smart contract technology.
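Automatic execution and automatic enforcement can be illustrated with a deliberately simplified sketch. The Python below is not platform code (real smart contracts are written in on-chain languages such as Solidity), and all names and terms are hypothetical; the point is that the contract holds the asset itself, so settlement follows mechanically once the coded condition is met, with no third party involved:

```python
class EscrowContract:
    """Toy illustration of automatic enforcement: the 'contract' itself
    holds the deposited funds, so no third party is needed to give
    effect to its terms. All names and terms are hypothetical."""

    def __init__(self, seller: str, buyer: str, price: int):
        self.seller, self.buyer, self.price = seller, buyer, price
        self.balance = 0
        self.delivered = False

    def deposit(self, party: str, amount: int) -> None:
        # Only the buyer's deposit of the agreed price is accepted.
        if party == self.buyer and amount == self.price:
            self.balance += amount

    def confirm_delivery(self):
        """Recording fulfilment of the condition triggers settlement."""
        self.delivered = True
        return self._settle()

    def _settle(self):
        # Enforcement is unconditional once the condition is met.
        if self.delivered and self.balance >= self.price:
            payout, self.balance = self.balance, 0
            return {"to": self.seller, "amount": payout}
        return None
```

In a real deployment the deposit, the delivery confirmation, and the payout would all be on-chain operations on the same platform as the assets; the sketch only shows that the predefined consequence follows automatically from the fulfilled condition.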

A smart contract taxonomy

The purpose of a smart contract taxonomy is to organize the different variations of the technology that are currently being developed. Doing so provides a structure that can be used as the foundation of a more elaborate framework on the basis of which the legal risks created by this technology can be mapped out. The taxonomy distinguishes four types of smart contracts. It should be noted that a very similar taxonomy has been adopted by the European Law Institute as well ([ELI23]).

Type-1 smart contracts: software as a self-executory agreement

The first variation of smart contracts describes a piece of software in which the offer and acceptance coalesce. The relationship between the parties who transact by way of the smart contract is therefore governed by the smart contract ([Werb21]). It has been suggested that, in such a situation, the code might effectively ‘be’ the legal agreement as it constitutes the externalization of the parties’ consensus and proof of the content of the rights and obligations between the parties ([Tjon22]). Situations where this might be the case could, for example, be found in the context of decentralized finance (or ‘DeFi’). Think of, for example, platforms that enable parties to provide digital assets as security for a loan. Such platforms require smart contracts that stipulate and enforce the rights and obligations that the loans and securities require. If that smart contract is the sole instantiation of the agreement between the parties, that smart contract must be treated as defining the legal relationship. In such a case, the smart contract could be equated to the legal agreement.

Type-2 smart contracts: mere code

At their very core, smart contracts are nothing more than software. They are technological concepts rather than legal concepts. The great majority of smart contracts are just that: mere code. If such smart contracts do not fulfill any function that has legal relevance, they are just software. This could be the case, for example, when a smart contract determines when a container leaves a ship that has entered a certain port. Such smart contracts might fulfill a pivotal function in a software suite but are of no legal relevance. These smart contracts are referred to as the second type of smart contract. Most smart contracts fall into this category.

Type-3 smart contracts: executory tools

The third variation in the taxonomy describes a situation in which a smart contract is distinct from a legal agreement, yet remains potentially legally relevant. In these situations, the smart contract exists on-chain and parallel to a legal agreement that exists off-chain. In this case, the smart contract is used to give effect to the rights and obligations outlined in the legal agreement. Such a smart contract is therefore a tool that executes (part of) the legal agreement. Allen shows that smart contracts are ideally suited to be used as such executory tools ([Alle22]). If, for example, a soda machine, by way of a smart contract, orders a new batch of soda cans from the manufacturer, this smart contract is used to execute part of the overarching framework agreement that exists between the operator of the soda machine and the manufacturer of the soda cans ([Nave18]).
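The executory role of a type-3 smart contract can be sketched as a single automated obligation taken from an off-chain agreement. The threshold, quantity, and supplier name below are hypothetical stand-ins for terms of the framework agreement, not features of any real system:

```python
# Hypothetical term of the off-chain framework agreement: reorder
# whenever stock falls below 20 cans.
REORDER_THRESHOLD = 20

def check_inventory(cans_left: int):
    """Executes one obligation from the framework agreement: when stock
    falls below the agreed threshold, place a reorder automatically."""
    if cans_left < REORDER_THRESHOLD:
        return {"action": "order", "quantity": 100, "supplier": "soda-co"}
    return None
```

The code does not contain the agreement; it merely carries out one of its obligations, which is exactly the subservient, executory relationship the type-3 category describes.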

The smart contract stands in a hierarchical relationship with the legal agreement, in which the smart contract is subservient to the legal agreement. That subservience, however, does not make it irrelevant or unimportant. After all, the content and validity of a legal agreement are determined by assessing all relevant facts and circumstances, and the meaning that the parties could reasonably have attributed to the agreement in light of those facts and circumstances ([Kran20]). The technology is designed for contexts where parties transact remotely with minimal knowledge of each other’s identity. The more parties apply this technology in such a pseudonymous environment, and the more they rely on the smart contract as an executory mechanism, the fewer relevant facts and circumstances are available to determine the meaning and validity of the underlying legal agreement. In other words, the more parties rely on the smart contract as a tool to execute the separate legal agreement, the more important the smart contract becomes in giving meaning to that agreement and determining its validity.

Type-4 smart contracts: merger agreements

Lastly, there are smart contracts that exist in a form that is both machine-readable and human-readable. In the context of the taxonomy, this is the type-4 smart contract. An example is the Ricardian contract ([Grig22]). Because a single artifact is simultaneously machine-readable and human-readable, a legal agreement can be drafted and then transformed into a type-4 smart contract. Such a smart contract exists on a blockchain platform in code, and therefore enjoys the benefits offered by the platform, while remaining susceptible to human comprehension. This fourth variation of smart contracts therefore describes an amalgamation that consists of two parts but exists as a single entity and, provided it meets the legal requirements, might be capable of producing legal effect. It must be noted that this final variation of smart contract technology is, to this day, largely theoretical.

Practical application

The preceding sections of this contribution have detailed the differentiating elements of the technology and how such technology might be categorized in a taxonomy that could be used to clarify the legal risks caused by implementation of smart contracting technology. See Table 1 for an overview of the taxonomy and an overview of examples of potential legal risks that might surface in the context of the different types of smart contracts. This final section aims to showcase three groundbreaking applications of the technology – currently being explored, tested, or even deployed – and to apply the taxonomic framework to these examples. Applying the taxonomy to these real-world examples will provide a general overview of the legal risks that exist and insights into the severity of such risks. Three such applications will be considered.


Table 1. Overview of taxonomy including potential legal risks.

Applications of smart contracting technology and use of the taxonomy

Blockchain technology has been used as a foundation upon which different applications have been developed. The most well-known and most successful of these applications are cryptocurrencies. Revolutionary as they might have been, at their core cryptocurrencies offer relatively limited functionality. They use the underlying technology to enable the exchange of value in a distributed environment. This means that transactions between persons are possible without a centralized party performing the tasks commonly entrusted to one. Such tasks include, for example, determining whether a party has the right to make a transaction, whether the party is who they claim to be, or whether the units that the party is attempting to transfer have not been transferred previously. Solutions based on this technology are gradually being adopted by more established financial institutions. The Hong Kong Stock Exchange, for example, has been testing this technology since 2016 to enable more seamless trade between Hong Kong and Mainland China ([HKMA17]). Launched in October 2023, the final product is built through smart contracts, optionally available to users, and is presented as providing a more connected and more transparent settlement platform ([HKEX23]). The smart contracts used in the context of this example are predominantly type-2 smart contracts, meaning that they are mere code and have no legal relevance. The smart contracts employed in the context of settlement might have some legal relevance, but since the code is unavailable, it is impossible to determine whether and to what extent this is the case.

Additionally, an application of this technology that relies on smart contracts other than type-2 smart contracts can be found within supply chain operations ([Thol19]). According to an IBM survey, there is extensive experimentation with this technology in the realm of supply chains, especially concerning operational and supply chain management ([IBM20]). In such contexts, it becomes crucial to give due consideration to the legal risks involved. Smart contracts might be used in the context of a supply chain to confirm receipt of goods, record performance, and trigger payments. Such aspects are not only relevant from an operational perspective; they might also be pivotal from a legal point of view in case a disagreement arises between parties regarding events that happened in the context of the services provided. Some of the smart contracts used in the context of supply chains are likely to qualify as type-3 smart contracts: executory tools that are used to give effect to the legal agreement or part thereof. As such, there are considerable legal risks that must be taken into account. Consider a scenario where goods are lost in transit, yet the smart contract records the arrival of the container carrying the goods in the harbor, subsequently triggering a payment. Designers and operators should take such eventualities into account. Important questions in this context concern striking a balance between the relatively immutable nature of the platform and the automatic enforcement of the smart contracts. From a legal point of view, such questions could emerge in the context of, for example, mistake, fraud, or disagreements about the content of the legal agreement.

Finally, one particularly interesting application of this technology is the creation and transfer of tokens on blockchain-based tokenization platforms. Such tokens function as units on a platform that represent an asset ([Kona20]). The trade in non-fungible tokens in particular has garnered a great deal of attention over the last few years. Whilst for some it might be very exciting to hold a token that represents a cute picture of a cat or a monkey, the technology allows for much more relevant applications. It is, for example, technically possible to have a token represent a claim or a classic financial instrument (see for example [ABN23]).

ABN AMRO was the first bank in Europe to register a digital bond for a Midcorp client on the public blockchain ([ABN23]):

‘The entire process of preparing, placing and documenting the bond was digital. Ownership was recorded on the blockchain in the form of tokens that the investors acquired after they had paid for the bond. To ensure custody and security of the investors’ unique keys, ABN AMRO uses a wallet for accessing the digital bond.’

This final example of the implementation of the technology in question potentially introduces type-1 smart contracts in addition to type-2 and type-3 smart contracts. If a platform creates the option to effectively securitize claims or traditional financial instruments by way of a token, and any acquisition or trade of such tokens is limited to the platform alone, it is likely that the smart contract is the sole instantiation of the agreement between the parties. As such, the smart contract should be equated to the legal agreement. This means that all classic legal risks regarding formation, interpretation, and potential vitiation exist on-chain.
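A minimal sketch of a token register makes this concrete: if the ledger below were the sole instantiation of the parties' agreement, classic contract-law questions of formation, interpretation, and vitiation would attach directly to its code. Everything here is hypothetical and vastly simplified compared to a real tokenization platform such as the one described in [ABN23]:

```python
class BondToken:
    """Minimal, hypothetical sketch of a tokenized bond register."""

    def __init__(self, issuer: str, total_units: int):
        # The issuer initially holds the entire issuance.
        self.balances = {issuer: total_units}

    def transfer(self, sender: str, recipient: str, units: int) -> None:
        # The ledger entry *is* the record of ownership: if the token is
        # the sole instantiation of the parties' agreement, the legal
        # relationship is defined by what this code permits and records.
        if self.balances.get(sender, 0) < units:
            raise ValueError("insufficient holdings")
        self.balances[sender] -= units
        self.balances[recipient] = self.balances.get(recipient, 0) + units
```

Note that the code enforces only what it encodes: a transfer induced by fraud or mistake would be recorded just as faithfully as a valid one, which is precisely why the classic legal risks are said to exist on-chain for type-1 smart contracts.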


Smart contracts have been central to several hypes over the last few years. Those hypes have come and gone, but the development of smart contract technology and its potential applications has continued. Such developments are slowly giving rise to credible applications that are generating actual business opportunities. The technology at the root of such applications is fundamentally different from the technology it might supplant, and as such it will generate novel risks. Considering the way in which the technology is being applied, the legal risks should not be underestimated. Due to the highly technological nature of these risks, their integration with the organization, and their potential severity, businesses should prioritize preventing these risks proactively rather than mitigating them reactively after they have materialized. The taxonomy presented here provides a clear overview of the different types of smart contracts that exist. Such an overview could help an advisory practice do exactly that: the taxonomy can be used to leverage technological know-how and risk management expertise to assist businesses in navigating the novel risks that designing and implementing products based on this technology might create.


[ABN23] ABN. (2023). ABN AMRO registered first digital bond on public blockchain. Retrieved from:

[Alle22] Allen, J. G. (2022). ‘Smart Contracts’ and the Interaction of Natural and Formal Language. In J. G. Allen & P. Hunn (Eds.), Smart Legal Contracts: Computable law in theory and practice. Oxford University Press.

[Ande22] Anderson, P. D. (2022). Cypherpunk Ethics: Radical Ethics for the Digital Age (1st ed.). Routledge.

[Anto17] Antonopoulos, A. (2017). Mastering Bitcoin: Programming the open blockchain (2nd ed.). O’Reilly.

[Anto19] Antonopoulos, A. & Wood, G. (2019). Mastering Ethereum: Building Smart Contracts and Dapps. O’Reilly.

[Bute13] Buterin, V. (2013). Ethereum Whitepaper. Retrieved from:

[ELI23] European Law Institute. (2023) ELI Principles on Blockchain Technology, Smart Contracts and Consumer Protection. Retrieved from:

[Grig22] Grigg, I. (2022). Why the Ricardian Contract Came About: A Retrospective Dialogue with Lawyers. In J. Allen, Smart Legal Contracts (pp. 88-106). Oxford University Press.

[HKEX23] Hong Kong Securities Clearing Company Limited. (2023). Synapse Platform Launch. Retrieved from:

[HKMA17] Hong Kong Monetary Authority. (2017). Whitepaper 2.0 on Distributed Ledger Technology. Retrieved from:

[IBM20] IBM. (2020). Advancing global trade with blockchain. Retrieved from:

[Kona20] Konashevych, O. (2020). Constraints and benefits of the blockchain use for real estate and property rights. Journal of Property, Planning, and Environmental Law, 12(2), 109-127.

[Kran20] Van Kranenburg-Hanspians, K. & Derk, M. T. (2020). De kansen van blockchain technologie voor het contractenrecht. Overeenkomst in de rechtspraktijk, 1, 16-21.

[More93] More, M. (1993). Technological self-transformation: Expanding personal extropy. Extropy: Journal of Transhumanist Thought, 4(2), 15-24.

[Nave18] Naves, J. (2018). Smart contracts: voer voor juristen? Onderneming en Financiering, 26(4), 57-67.

[Reve19] Revet, K. & Simons, E. (2019). Start small, think big: Blockchain technology – the business case using the SAP Cloud Platform. Compact, 2019(1), Retrieved from:

[Szab96] Szabo, N. (1996). Smart Contracts: Building Blocks for Digital Free Markets. Extropy: Journal of Transhumanist Thought, 8(16), 50-53.

[Thol19] Tholen, J., De Vries, D., Van Brug, W., Daluz, A. & Antonovici, C. (2019). Enhancing due diligence in supply chain management: is there a role for blockchain in supply chain due diligence? Compact, 2019(4). Retrieved from:

[Thwe20] Thweatt-Bates, J. (2020). Cyborg selves: A theological anthropology of the posthuman (3rd ed.). Routledge.

[Tjon22] Tjong Tjin Tai, E. (2022). Smart Contracts as Execution Instead of Expression. In J. Allen, Smart Legal Contracts (pp. 205-224). Oxford University Press.

[Vers23] Verstappen, J. (2023). Legal Agreements on Smart Contract Platforms in European Systems of Private Law. Springer.

[Weer19] Van der Weerd, S. (2019). How will blockchain impact an information risk management approach? Compact, 2019(4). Retrieved from:

[Werb21] Werbach, K. & Cornell, N. (2021). Contracts: Ex Machina. In M. Corrales Compagnucci, M. Fenwick, & S. Wrbka (Eds.), Smart contracts: Technological, business and legal perspectives. Hart.

Securing the quality of digital applications: challenges for the IT auditor

Ever since the advent of digital solutions, their reliability and security have been a central concern. An increasing number of individuals and companies are asking for assurance, and there is a need for standards to report on the quality of the digital solutions in use. Primary responsibility rests with an organization’s management; however, involving an independent IT auditor can provide additional value.


Digital developments are happening at lightning speed. We are all aware of the many digital applications and possibilities in both our business and personal lives. Often, however, we know and use only 10 to 20 percent of the capabilities of current solutions, and yet we are constantly looking for something new. Or is this all happening to us through an ever-accelerating “technology push”? The COVID pandemic that started in 2020 showed us once again that digital tools are indispensable: they enabled us to remain connected, operational, and in ongoing communication with one another.

How do we know if digital applications and solutions are sufficiently secure? Do the answers generated by algorithms, for example, reflect integrity and fairness? Are we sufficiently resilient to cyber-attacks, and are we spending our money on the right digital solutions? These questions are highly relevant for directors and supervisors of organizations, as they must be able to account for their choices. Externally, the board report provides the basis for policy accountability. It is primarily retrospective in nature and follows an annual cycle. The board report could explicitly discuss the digital agenda, and the professional association of IT auditors (NOREA) is investigating whether an (external) IT audit ‘statement’ ([NORE21]) could be added as well.

Accountability for the quality of digital applications, and for whether everything is done securely, with integrity, and effectively, takes on new dimensions now that developments are happening at lightning speed and everyone is connected to everyone else. Administrators and regulators, as well as end users and consumers, are looking for assurance that digital applications and the resulting data are correct. Validation through assurance by an IT auditor is an effective tool for this purpose. A confirmation of quality on the digital highway must and can be found.

These issues are at play not only within organizations, but also in broader society. The protection of privacy is firmly under pressure, as numerous digital solutions continuously build personal profiles. There are also painful examples of the use of algorithms in the public domain ([AR21]) that have seriously harmed a number of citizens. Responsible development toward more complex automated applications requires better oversight and quality control, according to the Court of Audit in its 2021 report on algorithms ([AR21]). Issues of digital integrity, fairness, reasonableness, and security have taken on social significance.

With the introduction of the Computer Crime Act (WCC I) in the 1980s, an explicit link to accountability for computerized data processing emerged for the first time. Meanwhile, the Computer Crime Act III (WCC III) ([Rijk19]) has been in force since 2019, taking into account many developments in the field of the Internet and privacy. As the final piece in the chain of control and accountability that began with the WCC I, the auditor must explicitly express an opinion on the reliability and continuity of automated data processing, insofar as relevant for financial reporting, pursuant to Book 2, Article 393(4) of the Dutch Civil Code. Over four decades have passed, and we now grapple with an expanding array of legislation governing the control of digital solutions. These solutions extend beyond administrative processes to impact all core business functions, bringing with them a shift in the perspective on associated risks.

In short, it’s time to consider how quality on the digital highway (such as security, integrity, honesty, efficiency, effectiveness) can be assured. How can accountabilities be formed, what role do managers and supervisors play in this, and how can IT auditing add value? As indicated, these questions play a role not only at the individual organizational level, but also at the societal level. For example, how can the government restore or regain the trust of citizens by explicitly accounting for the deployment of its digital solutions?

IT auditing concerns the independent assessment of the quality of information technology (processes, governance, infrastructure). Quality has many facets: not only integrity, availability, and security, but also fairness and honesty. The degree of effectiveness and efficiency can also be assessed. To date, IT auditing has mostly focused on individual digital applications and remains too limited when it comes to the overall coherence of digital applications within an organization’s IT governance. IT auditing can be an important tool in confirming quality or identifying risks in the development and application of digital solutions if it is used more integrally. This establishes a harmonious interplay between the organization’s responsibility for its IT governance and the validation of its quality by an IT auditor.

Technology developments

The COVID crisis has undeniably brought remote work to the forefront and has heightened the significance of adaptable IT. Several emerging trends underscore the landscape of digital solutions and advancements.

What’s noteworthy is that a considerable number of organizations run an intricate blend of technology solutions, incorporating both legacy systems and contemporary online (front-office) solutions. Ensuring data integrity, keeping all solutions running continuously, making the right investments while paying for maintenance of legacy solutions, and planning for all of that is certainly not an easy task.

Let’s briefly highlight a few trends commonly cited by multiple authors ([KPMG20]; [Wilr20]):

  • Flexible work is becoming the norm. Last year, the cloud workplace – more than predicted – grew in popularity. Employees had to work from home, which requires a flexible and secure IT workplace.
  • Distributed cloud offers new opportunities for automation. The cloud will also continue to evolve, continuously creating new opportunities that support business growth. According to Gartner analysts ([Gart20]), one of these is the distributed cloud. It can speed up data transfer and reduce its costs. Storing data within specific geographic boundaries – often required by law or for compliance reasons – is also an important reason for choosing the distributed cloud. The provider of the cloud services remains responsible for monitoring and managing it.
  • The business use of artificial intelligence (AI) is increasing. Consider, for example, the use of chatbots and navigation apps. This technology will be increasingly prominent in business in the near future. The reason? Computer power and software are becoming cheaper and more widely available. AI will increasingly be used to analyze patterns from all kinds of data.
  • Internet of Behaviors. Data is now the linchpin of many business processes. Data provides insight and therefore plays an increasingly important role in strategic decision-making. This data-driven approach is also applied to changing human behavior, which is referred to as the Internet of Behaviors. Based on these analyses, suggestions or autonomous actions can be developed that contribute to issues such as human safety and health. An example is the smartwatch that tracks blood pressure and oxygen levels and provides health tips based on those data.
  • Maturity of 5G in practice. In 2020, providers in the Netherlands rolled out their first 5G networks. With 5G, you can seamlessly stay connected on the move or in any location without relying on Wi-Fi. Apart from higher data upload and download speeds, the big changes are mainly in new applications, especially in the field of the Internet of Things. Examples include self-driving cars and a surgeon operating on his patient a thousand kilometers away via an operating robot. Such applications are promising.

Management responsibilities

Driving and overseeing digital solutions cannot be taken for granted. The adage “unknown makes unloved” still applies here. The complexity of the technology deters, the mix of legacy systems and new digital solutions does not make things very transparent, many parties each manage part of the technology chain, and the quality requirements are not always explicit.

Still, some form of “good governance” is needed. Fellow Antwerp professor Steven de Haes ([DeHa20]) has gained many insights in his studies on IT governance. In his view, governance needs to address two issues concerning digital solutions. The first is whether digital risks are managed, which requires a standard to test against. In line with the COSO framework (COSO: Committee of Sponsoring Organizations) often used in governance issues, (parts of) the international COBIT framework (COBIT: Control Objectives for Information and Related Technologies) ([ISAC19]) can be chosen. Management explicitly identifies the applicable management standards for digital solutions, ensuring the clear establishment of both their design and operational processes.

The second question is strategic in nature: is the organization pursuing the right digital developments? Is the strategy concerning the deployment of digital solutions sound, and are the required investments justified? Answering this requires a good analysis of the organizational objectives and the digital solutions needed to achieve them. As indicated earlier, the main issues here are effectiveness and efficiency.

Establishing a robust organizational foundation begins with a well-structured organizational setup. This often involves using a “layer model” to arrange the various responsibilities. The primary responsibility for ensuring the proper use of digital solutions rests squarely on the shoulders of first-line management. This can be assisted by a “risk & control” function that can act as a “second line” to help set up the right controls and perform risk assessments. The second line can also set up forms of monitoring on the correct implementation and use of the digital solutions. Then, as a third line, an internal audit function can assess whether the controls in and around the digital solutions are set up and working properly; if desired, the external audit function can confirm this as well. In short, a layered model emerges to collectively ensure the quality of digital solutions.

Given the tremendous speed of digital change, knowledge of technology must be continuously refreshed. Effectively coordinating this effort while maintaining a focus on the quality of solutions and acknowledging their inherent limitations is the key to successful governance. Governance is not a static entity: changes in the chain have to be evaluated continuously and adjusted where necessary. Conceivably, the IT function (the CIO or IT management) could organize a structural technology dialogue that starts with knowledge sessions addressing the quality of digital applications. End users and management share the responsibility of clearly defining quality requirements, overseeing them through change processes, and ensuring the ongoing monitoring, or delegation of monitoring, to guarantee the quality of digital applications and data.

The suppliers of the digital solutions also play an important role. They have to be good stewards and provide better and safer solutions. This does not happen automatically; as is regularly the case, the focus is more on functional innovation than on good management and security. The buyers of the solutions also still question the providers too little about a “secure by design” offering. Proper controls can, and in fact should, already be built in during solution design.

Are the new digital solutions becoming so complex that no one can determine the correctness of the content? From a management perspective, we cannot take such a “black box” approach. We cannot accept, for example, deploying a digital application without knowing whether it works safely. Management should pause and prioritize organizing knowledge or acquiring information about the quality before justifying further deployment.

Challenges for the IT auditor

These quality issues can be answered by IT auditors. In the Netherlands, this field has been organized for more than thirty years, partly through the professional organization NOREA (Dutch Association of EDP Auditors)1 and university IT audit programs.

The IT auditor has a toolbox to assess digital solutions on various quality aspects. An increasing number of auditing and reporting standards have been developed to provide clients with assurance or a correct picture of the risks.

On the positive side, current IT auditing standards can already answer many questions from clients about digital solutions. The key is for IT auditors to adequately disclose what they can do and to work with regulators to enrich the tools. The IT auditor has to use simpler language to clarify what is really going on. Clients can and should sharpen their questioning and take responsibility themselves, such as establishing the right level of control.

IT auditors are currently still mainly looking for technically correct answers and methodologies, while a dialogue is needed about the relevant management questions concerning IT governance. What dilemmas do managers and regulators experience when determining the quality level of digital applications and what uncertainties exist? This is what the IT auditor should focus on. Starting from a clear management question, the IT auditor’s already available tools listed below can be used in a much more focused way.

From an auditing perspective, when outsourcing, the standard ISAE 3402 (ISAE: International Standards on Assurance Engagements)2 was developed to keep both the auditor and the client organization informed about the quality of the controls operated by the service organization. The emphasis lies on ensuring the reliability and continuity of financial data processing. The resulting report is called a SOC 1 report (SOC: Service Organization Control).

An ISAE 3402 audit requires proper coordination on the scope of work and the controls to be tested (both in design and in operating effectiveness). The performing IT auditor consults with both the service organization and the receiving customer organization to arrange everything properly. This also involves specific attention to both the “Complementary User Entity Controls” (CUECs), the additional internal control measures that the customer organization must implement, and the “Complementary Subservice Organization Controls” (CSOCs), the control measures that any IT service providers it deploys must implement. Frequent consultations occur with the client organization’s auditor, who incorporates the ISAE 3402 report as an integral part of the audit process.

The scope of an ISAE 3402 audit can be significant and already provide a solid basis for quality assurance of digital applications. An example from IT audit practice involves a sold division of a company that is now part of another international group. The sold division has plants in over 30 countries, all of which still use the original group’s IT services. A test plan has been set up to test the relevant general computer controls (such as logical access security, change control and operations management, also known as “general IT controls”), and all relevant programmed financial controls in the selected financial systems. In this example, this yields a testing of over eighty general computer controls and over two hundred programmed controls by a central group audit team and audit teams in the various countries.

Another assurance report is an ISAE 3000 report, which is prepared to demonstrate that the internal management processes an organization has in place are actually being carried out as described. Basically, this standard was developed for assurances about non-financial information. This may take the form of an ISAE 3000 attestation (3000A), wherein the organization internally defines and reviews standards and controls, with the IT auditor subsequently confirming their effectiveness. Alternatively, it can manifest as a 3000D (“direct reporting”), involving collaborative definition of review standards and controls by both the organization and the IT auditor.

The ISAE 3000 report (also referred to as SOC 23) can focus on many issues and also has multiple quality aspects as angles, such as confidentiality and privacy. Standard frameworks have since been established for conducting privacy audits, for example ([NORE23])4 based on ISAE 3000. The North American accounting organizations, including AICPA, CPA Canada, and CIMA5, have collaboratively developed comprehensive standard frameworks, such as SOC 2 modules on Security, Availability, Processing Integrity, and Confidentiality6. These are readily applicable to IT and SaaS services and are increasingly being embraced by IT service providers in Europe. For specific IT audit objects, such as specifically delivered online services/functionalities, these can be further focused or expanded with IT (application) controls relevant to the customer organization.

As a final variant, agreed-upon specific procedures can be chosen, resulting in what is referred to as an ISRS 4400 report (ISRS: International Standards on Related Services). Users of the report then have to form their own opinion about the activities and (factual) findings that the IT auditor presents in the report.

In recent years, there has been plenty of innovation within the field of IT auditing to also assess algorithms, for example, and make a statement about them. Consider the issue of fairness and non-biased data. An interplay between multiple disciplines unfolds to comprehend the risk landscape of intricate digital solutions and offer assurances. IT auditors are partnering with data specialists and legal experts to ensure the reliability of algorithms.

Over the past 18 months, there has been growing discourse about the potential inclusion of an IT audit statement in, or as an addition to, a company’s annual report. Specifically, the company would need to articulate its stance on digital solutions, their management, and, for instance, the associated change agenda. An IT auditor could then issue a statement in this regard. The professional association of IT auditors has developed a plan of action to actively develop this IT report and the communication about it in the coming year. There is ongoing consideration of the level of assurance achievable through such an opinion; the current assurance framework recognizes reasonable and limited assurance. Clients naturally seek maximum or, perhaps better, optimal assurance. In other words, the assurance they seek is not always found in an IT audit statement. Even better would be if the communication also provided assurance into the future, an area still untrodden by IT auditors.


As indicated earlier, tools already exist for the IT auditor to confirm the quality of digital applications. Clients must take responsibility to better understand digital applications and set up the corresponding IT governance. IT auditors can improve their communication, can empathize even more with management’s (their clients’) questions, and also provide understandable reports.

Addressing pertinent social concerns related to the implementation of digital solutions involves conducting a comprehensive risk inventory and evaluating the effectiveness of the existing controls. In addition to the traditional concerns focused on reliability and security, issues of effectiveness, efficiency, privacy and fairness come into play. The resilience of digital solutions is also an urgent issue. In the EU, the Network and Information Security Directive (NIS2 Directive)7 and the Digital Operational Resilience Act (DORA)8 for financial institutions have been established to strengthen digital resilience. The regulator of publicly traded companies in the United States (the SEC) has also issued guidelines for annual reporting on cybersecurity (risk management, governance) and interim reporting of serious incidents ([SEC23]).

The concept of secure by design is anticipated to become increasingly prevalent, as technology vendors recognize the necessity of implementing robust controls during solution deployment. Some suppliers also provide mechanisms to set up continuous monitoring, where the controls put in place are assessed for continuous correct operation and exceptions are reported. Management also plays an important role here: it should embrace the principles described above. Remember that it is more effective and efficient to design controls during the change of digital solutions than to fix them afterwards.
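As an illustration of the kind of check such continuous monitoring can run, here is a minimal Python sketch that flags segregation-of-duties exceptions in a payment event log. The event format, user names and action labels are assumptions made up for this example; real monitoring tooling works on far richer data.

```python
from collections import defaultdict

def sod_violations(events):
    """Flag users who both created and approved the same payment,
    a classic segregation-of-duties control exception."""
    actions = defaultdict(set)  # (user, payment_id) -> set of actions seen
    for user, payment_id, action in events:
        actions[(user, payment_id)].add(action)
    return sorted({(u, p) for (u, p), acts in actions.items()
                   if {"create", "approve"} <= acts})

log = [
    ("alice", "P-100", "create"),
    ("bob",   "P-100", "approve"),   # ok: four-eyes principle respected
    ("carol", "P-200", "create"),
    ("carol", "P-200", "approve"),   # exception: same user did both
]
print(sod_violations(log))  # [('carol', 'P-200')]
```

A check like this can run on every new batch of events, reporting only the exceptions, which is exactly the continuous-monitoring pattern described above.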

If more and more continuous monitoring is provided, the IT auditor can move toward a form of continuous auditing, providing assurance about the deployment of the digital solution at any time. The “anytime, anyplace, anywhere” principle then becomes a reality in IT auditing. A reassuring prospect amid all this digital velocity.


  1. See
  2. See, ‘Standards and resources’.
  3. SOC 2 deals primarily with security (mandatory), availability, integrity, confidentiality and/or privacy, as outlined in the SOC 2 guidelines issued by the Assurance Services Executive Committee (ASEC) of the AICPA.
  4. There is a Dutch and an English version of the Privacy Control Framework.
  5. AICPA: American Institute of Chartered Professional Accountants; CIMA: Chartered Institute of Management Accountants.
  6. See [Zwin21] for an article on SOC 2 and [AICP23] for AICPA and CIMA standards.
  7. See [NCSC23].
  8. See [Alam22] for an article on DORA.


[AICP23] AICPA & CIMA (2023). SOC 2® – SOC for Service Organizations: Trust Services Criteria. Consulted at:

[Alam22] Alam, A., Kroese, A., Fakirou, M., & Chandra, I. (2022). DORA: an impact assessment. Compact 2022/3. Consulted at:

[AR21] Algemene Rekenkamer (2021, 26 januari). Aandacht voor algoritmes. Consulted at:

[DeHa20] De Haes, S., Van Grembergen, W., Joshi, A., & Huygh, T. (2020). Enterprise Governance of Information Technology (3rd ed.). Springer.

[Gart20] Gartner (2020, 12 August). The CIO’s Guide to Distributed Cloud. Consulted at:

[ISAC19] ISACA (2019). COBIT 2019 or COBIT 5. Consulted at:

[KPMG20] KPMG (2020). Harvey Nash / KPMG CIO Survey 2020: Everything changed. Or did it? Consulted at:

[NCSC23] Nationaal Cyber Security Centrum (2023). Summary of the NIS2 guideline. Consulted at:

[NORE21] NOREA (2021). Nieuwe IT check: NOREA ontwikkelt IT-verslag en -verklaring als basis voor verantwoording. Consulted at:

[NORE23] NOREA (2023). Kennisgroep Privacy. Consulted at:

[Rijk19] Rijksoverheid (2019, 28 February). Nieuwe wet versterkt bestrijding computercriminaliteit. Consulted at:

[SEC23] SEC (2023, 26 July). SEC Adopts Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure by Public Companies [Press Release]. Consulted at:

[Wilr20] WilroffReitsma (2020). ICT Trends 2021: dit zijn de 10 belangrijkste.

[Zwin21] Zwinkels, S. & Koorn, R. (2021). SOC 2 assurance becomes critical for cloud & IT service providers. Compact 2021/1. Consulted at:

Fifty years of IT auditing

About fifty years ago, IT audit made its appearance in financial auditing, which was also the occasion for launching a new journal, Compact, to exchange professional and technical developments. Of course, a lot has changed since then, but certain activities – albeit in a new guise – have not changed all that much. As has been said so often over those fifty years, quite a lot is going to change, not only because the approach to auditing itself is constantly changing, but also because IT and audit techniques are constantly evolving, such as the emerging AI. What does this mean for the profession? Reason enough to take you on a journey fifty years back in time and twenty years ahead in the development of IT auditing.

For the Dutch version of this article, see: Vijftig jaar IT-audit in de accountantscontrole

It started with the substantive audit

The history of IT audit (then called EDP audit [EDP: Electronic Data Processing]) begins some fifty years ago, hence the 50th anniversary of Compact. IT audit is immediately and entirely dominated by financial auditing, because IT audit is developed by the major accounting firms. At that time, the approach to auditing was still almost entirely centered on substantive1 auditing.


Figure 1. Data-oriented (substantive) prevails.

The quality of the audited company’s IT is not relevant at all, because the auditor takes extensive samples and performs a lot of detailed checking. System-oriented auditing is not yet an option. The samples obviously have to be mathematically justified, and determining the sample and the items to be examined still turns out to be quite difficult, given the various types of sampling routines and the choices involved: stratification, whether or not to include negative items, periods, sorting and, of course, randomness. This is where the IT auditor first appears on the scene. The IT auditor is then primarily a programmer, because with some theoretical sampling knowledge and knowledge of the client’s files, the IT auditor can provide excellent support. The IT auditor can now provide advance insight into the items in the file, allowing the auditor’s selection to be more effective and efficient. However, good knowledge of and experience with programming is important, because standard audit software does not yet exist. Often programming is still done in languages such as COBOL (Common Business Oriented Language, a language conceptually almost incomparable with today’s programming languages), at that time the standard language for administrative applications. In addition, you had to be good at (re)typing, because each program line had to go individually onto a punch card. The financial auditor/IT auditor does not yet have a computer, so the processing has to be done on the client’s computer or, in exceptional cases, on the computer of a friendly business relation, for example an insurance company, because service agencies are still rare. Everything is mainframe-oriented! The IT auditor does already learn something new, however: the role of system software, as well as the risks of that system software, for example if access security and logging are not properly set up. Finally, the IT auditor must of course ensure that their programs and data have not been tampered with. The first generation of IT auditors is relatively technically savvy.
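The kind of stratified selection routine those early IT auditors programmed in COBOL can be sketched in a few lines of modern Python. The stratum boundaries, sample sizes and the generated ledger below are purely illustrative; a real routine would add monetary-unit weighting and documented seed handling.

```python
import random

def stratified_sample(items, boundaries, per_stratum, seed=42):
    """Draw a reproducible random audit sample per amount stratum.

    items: list of (item_id, amount); boundaries: ascending stratum
    upper bounds; per_stratum: sample size for each resulting stratum.
    """
    rng = random.Random(seed)  # fixed seed: the selection must be justifiable
    strata = [[] for _ in range(len(boundaries) + 1)]
    for item in items:
        _, amount = item
        idx = next((i for i, b in enumerate(boundaries) if amount <= b),
                   len(boundaries))
        strata[idx].append(item)
    sample = []
    for stratum, n in zip(strata, per_stratum):
        sample.extend(rng.sample(stratum, min(n, len(stratum))))
    return sample

# Illustrative ledger of 500 items with pseudo-random amounts.
ledger = [(i, random.Random(i).uniform(1, 10_000)) for i in range(500)]
picked = stratified_sample(ledger, boundaries=[100, 1_000], per_stratum=[5, 5, 10])
print(len(picked))  # at most 20 items across the three strata
```

With a fixed seed the same selection can be reproduced later, which matters when the sample has to be defended to the financial auditor.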

Fortunately, the IT market is also beginning to see the financial auditor (IT auditor) as a target audience, and the first standard audit software packages are cautiously appearing on the market. Best known from that time is CARS, a large COBOL program with sampling routines and counts in which the IT auditor can add their own COBOL rules to make it organization specific. Since the laptop hasn’t been invented yet, the average IT auditor walks around with hefty suitcases with all those punch cards (having them mixed up would be a total drag …). But it’s still relatively easy as the file structure is sequential.

Introduction of the database

Shortly after, the database phenomenon makes its appearance. COBOL is not that suitable, and databases get their own supporting software. The IT auditor soon learns that using that database software is easier than CARS, although in the beginning that database software is not audit software, but mainly query software. Integration obviously does not take long and there is audit software for several database technologies, such as the independent package Culprit for IBM mainframes, for example. The problems at that time mainly involve accessing the file carriers (usually large tapes or very sensitive disks), which again are very specific to a certain type of machine. In short, they are only applicable to large known computer systems and large customers and therefore quite specialized. In the sixties and seventies, a large accounting firm like (now) KPMG had as many as thirty programmers who only programmed in the context of the annual audit.

PC becomes widely available

The big break came in the early 80s. The PC made its appearance and so did the floppy drive (still 8-inch format). This brought medium-sized organizations into the picture to support the financial auditor on duty. Again, audit software lags behind, because the existing mainframe packages do not run on those PCs and the floppies do not fit into a mainframe. KPMG even creates its own software package, again of course focused on determining and drawing samples, and on making all kinds of calculations to match the client’s financial records. There is even an additional module that allows multiple files to be compared, quite a feat in those early days of the PC. When computerized financial accounting becomes commonplace, standard packages also become widely available. ACL and, a little later, IDEA are a few examples.

Need for greater understanding of security

In the 80s and certainly in the years that followed, the realization dawned that all this IT carried risks, first in terms of security and later in terms of reliability. The financial auditor’s clients also felt a greater need to understand this. As a result, IT auditors increasingly take on the role of specialists who also go to the client on behalf of the financial auditor, not just to retrieve data files, but to assess the quality of the IT and advise on it. At first the survey covers the physical security of the IT environment, later logical access security as well. Tooling for this is still virtually unavailable on the market, which means that the IT auditor has to examine many specific operating systems and databases and learn how their security is organized.

Although PC use increases the number of data analyses to be performed, the number of programmers decreases quite a bit, because the creation of the analyses takes much less time due to standard applications and the relatively small-scale environment in which the software can operate.

An end to file analysis?

In the 90s, system-oriented auditing is strongly on the rise and the “traditional” use of audit software declines rapidly. The previously mentioned group of as many as thirty programmers at a large accounting firm has disappeared entirely, although some of them are able to move on as ‘regular IT auditors’. Yet this does not mean the end of file analysis. There are quite a few ‘standardization’ attempts, especially around the widely used SAP package. However, because of its many configuration options, SAP turns out not to be as standard as perhaps thought. The idea arises to create a front-end part for the extraction of data from the SAP databases that can be made customer-specific or generation/version-specific. The data is collected in a “meta database” for analysis and for producing reports that closely match the auditor’s needs. This back-end part had to be highly standardized. Of course, practice turns out to be unruly, because the front-end part always needs adjustment after new SAP versions or implementations, while the financial auditor’s demands also keep changing as more information and exceptions are obtained from the data, which in turn need to be explained. The financial auditor has their hands full because the cost-benefit picture is constantly under scrutiny. The benefits for the insight and assurance of the auditor’s audit approach do not always outweigh the effort of constantly adapting the SAP analyses.

Nevertheless, the seed has been planted and attempts are made to revive data analysis in more industries, such as finance, where mostly self-developed systems predominate at institutions. The front-end part (data extraction) will always be variable here, but the back-end part (analysis and reporting) can then fit well with the auditor’s audit approach. Because of the cost of developing such solutions, the approach is primarily international. However, this runs into a much wider range of financial systems among auditees (a front-end complication) and a wider range of auditors’ requirements (a back-end complication). Partly for this reason, only a few solutions were developed, and they were not long-lived either.

The transition to system-based auditing

The IT auditor is already involved in examining processes and systems in the 80s. KPMG’s IT auditors develop the CASA method (Course Approach to System Audits), which is adopted by the NBA (the professional body for financial auditors in the Netherlands, ed.) (then NIVRA) in the publication FASA (Factual Approach to System Audits; see [Koed85] and [NIVR88]). The objective is still mainly ‘understanding the business’. In the 90s, system-based auditing emerges and there is more need for concrete insight into processes and control measures. The IT auditor adapts the FASA method and Business Process Analysis (BPA) is born, in which automated and manual internal control measures are explicitly recognized separately and per process step/risk. This distinction is important because the controls differ. For the IT auditor, this approach means a serious new object of investigation: assessing the automated controls in conjunction with the general IT controls, especially change and test management and logical access security. So again, evidence of the proper functioning of the automated (application) controls must come from a system-based audit approach, i.e. entirely in line with the financial auditor’s audit approach.


Figure 2. The balance has tipped toward system-based.

With the introduction of the Sarbanes-Oxley (SOx) Act in 2002, much emphasis is placed on internal controls at companies. Pressured by the PCAOB regulator and the requirements of SOx 404, the field of system-based auditing develops rapidly. The question from regulators, “How can I be sure that no one has been able to manipulate the data in question or modify application controls?”, has caused headaches for many an auditor in PCAOB inspections and internal quality reviews. In recent years, more guidance has emerged on how to deal with IPEs (Information Provided by the Entity; in other words, how does the auditor determine that the auditee’s information is reliable?), the various layers in IT environments, interfaces, assurance reports in the audit, and cybersecurity. So, what has all this yielded in recent years?

Financial auditors and IT auditors are working better together and have a better understanding of each other’s fields. The audit methodologies of the various firms are making the role of the IT auditor increasingly clear. The new ISA 315 standard (“Identifying and Assessing the Risks of Material Misstatement”) has also contributed to this. This standard includes extensive guidelines for gaining insight into information technology and general IT controls. Consultation on any deficiencies in the system of internal control, on the risk assessment of those deficiencies and on any compensating controls has improved. It also seems that the work to gain assurance on the effective operation of IT controls is increasing. This makes sense in our view, because IT is becoming more complex and because there is always someone in the IT environment who can (or sometimes should be able to) circumvent controls anyway. Although the probability of occurrence is low, the impact can be significant. The challenge is to be able to assess these risks and determine their impact. Not many organizations are mature enough in terms of risk management to adequately mitigate these risks, nor are they able or willing to make the investments to do so. Only a select number of organizations remain where the IT auditor or financial auditor can perform an exclusively system-based audit within the IT domain. This realization leads in some cases back to substantive procedures by the IT auditor or financial auditor, which completes the circle between substantive and system-based auditing.

What can we expect in the (near) future?


More data analysis is taking place right now, and this will develop much further with all the data that is now relatively easy to access. Consider the developments around centralized “data lakes”, for example. These contain much of the organization’s data (operational, financial, etc.), making analysis relatively easy. For large organizations, these data lakes are becoming too large and complex, and there is a trend towards “data meshes”, a form of decentralized, small(er) data lakes, reducing complexity (also in management and responsibility). Of course, there are tools that can link and analyze multiple of these data meshes. In short, a great field for the data analyst (commonly called a data scientist these days), both within an organization and with the financial auditor. The financial auditor’s wish to use data analysis to gain insight into the flow of money and goods, and to analyze (automatically) the peculiarities in that flow, could finally become a reality.
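One classic routine from this data-analysis toolbox is the first-digit (Benford’s law) test on ledger amounts, which flags accounts whose digit distribution deviates from what naturally occurring amounts tend to show. The sketch below is a minimal illustration; the sample amounts and the 0.15 alert threshold are invented for the example.

```python
import math
from collections import Counter

def first_digit(x):
    """Leading significant digit of a non-zero amount."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_deviation(amounts):
    """Per-digit gap between the observed first-digit frequencies of
    ledger amounts and the frequencies Benford's law predicts."""
    digits = [first_digit(a) for a in amounts if a]
    n = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / n - math.log10(1 + 1 / d)
            for d in range(1, 10)}

sample = [12.5, 19, 230, 4.5, 0.071, 880, 1100, 56, 3400, 9.9]
gaps = benford_deviation(sample)
# Digits clearly over-represented compared with Benford's expectation:
suspicious = [d for d, gap in gaps.items() if gap > 0.15]
```

On a real general ledger, digits that are strongly over-represented would be a starting point for follow-up work, not evidence of fraud in themselves.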

The question naturally arises whether and when the complexity becomes so great that the financial auditor and IT auditor will need other tools to gain insight into the large amount of available data, both within the audited organization and beyond. In other words, how long will it be before the financial auditor and IT auditor together start using AI applications themselves? It would be ideal if AI software could perform the analyses, especially the aforementioned analyses of anomalies in the money and goods flow. We expect that AI software can be of great help in particular for gaining a good understanding of the nature and cause of deviations and their impact on the financial flow. This is especially true in the current situation, where data analytics produces quite a bit of “fallout” and the financial auditor and/or IT auditor still has to incur significant costs to study that fallout and determine its impact. A current example is MindBridge Ai Auditor, with which KPMG has an alliance. MindBridge Ai Auditor supports data analytics through modern technologies and – using statistical analysis and machine learning on a wide variety of data sets – identifies the risks per individual general ledger account or income statement item. This is needed to identify potential anomalies and deficiencies in financial records.
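To make the idea of statistical anomaly scoring concrete, here is a minimal sketch (this is emphatically not how MindBridge Ai Auditor works internally; the accounts, amounts, and threshold are invented): journal-entry amounts are scored against their own account’s history using a z-score, and entries far from the account’s mean are flagged for follow-up.

```python
from statistics import mean, stdev

# Hypothetical journal-entry amounts per general ledger account.
entries = {
    "4000-travel": [120, 95, 110, 130, 105, 2400],  # contains one outlier
    "4100-office": [300, 310, 290, 305, 295, 315],
}

def flag_outliers(amounts, z_threshold=2.0):
    """Return the amounts whose z-score against the series exceeds
    the threshold. With small samples the sample standard deviation
    is inflated by the outlier itself, so a modest threshold is used."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

for account, amounts in entries.items():
    print(account, flag_outliers(amounts))
```

The “fallout” problem mentioned above is visible even here: a flagged amount is only a statistical peculiarity, and an auditor (or, in time, AI software) still has to determine its nature, cause, and impact.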


As indicated above, we see a bright future for substantive auditing. The question is whether there will still be a need to assess the system of internal control. The balance between the substantive audit with extensive data analyses on the one hand and the system-based audit with (limited) partial observations on the other may well shift. We believe that a certain degree of system-based auditing will still be necessary to determine whether the organization has implemented a certain (minimum) level of control measures. A purely substantive audit approach, without the organization maintaining a minimum level of internal control, will yield greater uncertainty, in particular regarding the quality (including completeness) of the data. This is something substantive audits cannot determine, or can determine only to a limited extent. Consider, for example, whether all data are actually in the records, or the completeness of the “chiffre d’affaires,” as financial auditors so eloquently call it.

In addition, regulators want to maintain continuous pressure on organizations and their auditors to ensure that the system of internal control at organizations is and remains adequate and that the risk of discontinuity and fraud remains limited. The SOx legislation and the (mandatory) role of the financial auditor in this regard is a good example and is not expected to disappear any time soon. For the IT auditor, this means that system-based audits of generic IT (support) processes and specific information systems will still have to take place at least at a select number of large organizations, although this also applies in a less formal way to smaller organizations.

By now, we see organizations using AI applications in practice (e.g., insurance companies). The internal control of AI applications will require the IT auditor to have a better understanding of the design and operation of such AI applications. The fact that things are moving fast is evident from the various audit frameworks that have been published, including by the Association of Insurers, as well as IIA (Institute of Internal Auditors Netherlands) and various other organizations. NOREA’s (Dutch professional association for IT auditors) Algorithm & Assurance Knowledge Group has already published several frameworks.


Figure 3. Approach to the financial statement audit.

Broader role of the IT auditor

Recent years have shown that more is expected from the financial auditor than a “bare” financial statement audit. In particular, laws and regulations beyond those governing the financial statement audit are forcing organizations to include other information in the annual report, for example on the establishment and enforcement of privacy, information security/cybersecurity, and ESG.

Although the European AI Act is not yet in place, it is already clear that audits of products and services equipped with AI will fit into the existing quality management systems of sectors such as logistics and healthcare. The Corporate Sustainability Reporting Directive (CSRD) will also broaden the role of the IT auditor: starting in 2024, the first organizations have to comply with its requirements. For now, only “limited assurance” is required, but it is expected that by the end of this decade “reasonable assurance” will also need to be provided for sustainability figures. Organizations are investing heavily to generate these figures, and reliability requirements play a role here. The challenges may not differ from those in normal financial reporting chains, but there are specific areas of focus for ESG, partly because of the specialized areas of knowledge involved, but also because the reporting chains are still new and have never been subject to audit before. Moreover, employees in non-financial departments are less accustomed to the strict regulatory compliance needed to be “in control,” with the attendant risk of incomplete information and limited auditability.


The IT audit profession grew up doing data file reviews and data analysis, during the period when auditors primarily followed the substantive audit approach. When system-based auditing emerged in the 90s, later reinforced by SOx regulations, the focus of the IT auditor became less data-oriented and concentrated primarily on assessing programmed controls in financial reporting processes and the underlying generic IT management processes, such as change management and logical access.

Although the audit orientation is still system-based, there is clearly a revival of file reviews and data analysis. Data are more accessible and data analysis tools are more powerful. System-based auditing is no longer seen as the holy grail.

We expect the balance to tip slightly back toward data analysis, with more attention being paid, on the one hand, to encompassing overall controls (think of the overall movement of cash and goods) and, on the other hand, to the (automated) analysis and risk assessment of the anomalies. AI-supported tools are already appearing as a small dot on the horizon.

System-based auditing will not disappear because, on the one hand, it provides a good understanding of the organization and its processes and, on the other, it ensures the quality of the data captured during those processes. Where quality of IT processes is essential for internal controls in financial processes, quality in processes is essential for data analysis. This means that it’s not either system-based or substantive, but the best of both worlds. Those worlds are expanding as more and more topics other than purely financial statements are included in the annual report and the scope of the auditor. Most notable is ESG reporting, bringing new processes and data into scope.

In the 80s, a presentation by the Canadian Institute of Chartered Accountants (CICA) was frequently shown in the Netherlands. The gist was that in the magical year 2000, financial auditor Gene would perform the annual audit by linking his “audit” computer with that of the auditee, and the audit program would do the rest. Miss Jane brought coffee (that was the way things were done in those days) and in the afternoon the results were discussed with the director of the audited organization.

In short, the IT auditor of the future still needs a solid toolbox for the “financial statement” audit, though hopefully no longer the punch card boxes and first luggable desktop computers that once processed the data analyses. In the bottom two layers of the approach to the financial statement audit described earlier, a dual role combining financial audit knowledge and IT knowledge seems desirable, perhaps in an integrated profile of financial auditor and IT auditor. Although, given the example of Gene above, that will take longer than desired.


  1. Substantive versus system-based: in a substantive audit approach, the auditor obtains as much audit evidence as possible by selecting data and comparing it with external sources or by comparing it with other data already audited. This is often done on a sample basis. In a system-based audit approach, the auditor obtains audit evidence by assessing the adequacy of the system of internal controls in the processes and systems (design) and testing the operation of internal controls.



Celebrating fifty years of Compact and Digital Trust

On 7 June 2023, KPMG hosted an event in Amstelveen to celebrate 50 years of Compact. Over 120 participants gathered to explore the challenges and opportunities surrounding Digital Trust. Together with Alexander Klöpping, journalist and tech entrepreneur, the event offered four interactive workshops, one on each of the topics ESG, AI Algorithms, Digital Trust, and the upcoming EU Data Acts, giving participants from various industries and organizations insights and take-aways for dealing with their digital challenges.


As Compact celebrated its fiftieth anniversary, the technology environment had undergone evolutions that people could never have imagined fifty years ago. Despite countless possibilities, the question of trust and data privacy has become more critical than ever. As ChatGPT represents a significant advancement in “understanding” and generating human-like text and programming code, no one can predict what AI algorithms will make possible in the next fifty years. We need to act on the ethical considerations and controversies. With rapidly advancing technologies, how can people or organizations expect to protect their own interests or privacy in terms of Digital Trust?

Together with Alexander Klöpping, journalist and tech entrepreneur, the participants had an opportunity to embark on a journey to evaluate the past, improve the present and learn how to embrace the future of Digital Trust.

In this event recap, we will guide you through the event and workshop topics to share important take-aways from ESG, AI Algorithms, Digital Trust, and upcoming EU Data Acts workshops.


Foreseeing the Future of Digital Trust

Soon, a personally written article like this one could become a rarity, as most texts might be AI-generated. That is one of the predictions about AI development shared by Alexander Klöpping during his session “Future of Digital Trust”. Over the past few years, generative AI has seen significant advancements, leading to revolutionary opportunities in creating and processing text, images, code, and other types of data. However, besides all kinds of innovative opportunities, such rapid development is also associated with high risks when it comes to the reliability of AI-generated outputs and the security of sensitive data. Although many guardrails around Digital Trust need to be put in place before we can adopt AI-generated outputs, Alexander’s talk suggested a possible advanced future of Artificial General Intelligence (AGI), which can learn, think, and produce output with human-level intelligence.

Digital Trust is a crucial topic for the short-term future, becoming a recurring theme in all areas from sustainability to upcoming EU regulations on data, platforms, and AI. Anticipated challenges and best practices were discussed during the interactive workshops with more than a hundred participants, including C-level management, board members, and senior management.


Workshop “Are you already in control of your ESG data?”

Together with the KPMG speakers, guest speaker Jurian Duijvestijn, Finance Director Sustainability at FrieslandCampina, shared the company’s ESG journey in preparation for the Corporate Sustainability Reporting Directive (CSRD).

Sustainability reporting is moving from a scattered EU landscape to new mandatory European reporting standards. As shown in Figure 1, the European Sustainability Reporting Standards (ESRS) consist of twelve standards, including ten topical standards covering the Environment, Social, and Governance areas.


Figure 1. CSRD Standards.

CSRD requires companies to report on the impact of corporate activities on the environment and society, as well as on the financial impact of sustainability matters on the company, resulting in an extensive set of financial and non-financial metrics. The CSRD implementation will take place in phases, starting with the large companies already covered by the Non-Financial Reporting Directive and continuing with other large companies (FY25), SMEs (FY26), and non-EU parent companies (FY28). The required changes to corporate reporting must be implemented rapidly to ensure timely compliance, as companies in scope of the first phase must publish their reports in 2025 based on 2024 data. The integration of sustainability at all levels of the organization is essential for a smooth transition. As pointed out by the KPMG speakers, Vera Moll, Maurice op het Veld and Eelco Lambers, a sustainability framework should be incorporated in all critical business decisions, going beyond corporate reporting and transforming business operations.

The interactive breakout activities confirmed that sustainability reporting adoption is a challenging task for many organizations due to the new KPIs, changes to calculation methodologies, low ESG data quality and tooling not fit for purpose. In line with the topic of the Compact celebration, the development of the required data flows depends on a trustworthy network of suppliers and development of strategic partnerships at the early stage of adoption.


CSRD is a reporting framework that could be used by companies to shape their strategy to become sustainable at all organizational and process levels. Most companies have already started to prepare for CSRD reporting, but anticipate a challenging project internally (data accessibility & quality) and externally (supply chains). While a lot of effort is required to ensure the timely readiness, the transition period also provides a unique opportunity to measure organizational performance from an ESG perspective and to transform in order to ensure that sustainability becomes an integral part of their brand story.

Workshop “Can your organization apply data analytics and AI safely and ethically?”

The quick rise of ChatGPT has sparked a major change. Every organization now needs to figure out how AI fits in, where it is useful, and how to use it well. But using AI also raises some major questions, for example in the field of AI ethics: how much should you tell your customers if you used ChatGPT to help write a contract?

During the Responsible AI workshop, facilitators Marc van Meel and Frank van Praat, both from KPMG’s Responsible AI unit, presented real-life examples that illustrate the challenges encountered when implementing AI. They introduced five important principles in which ethical dilemmas can surface: the Reliability, Resilience, Explainability, Accountability, and Fairness of AI systems (see Figure 2). Following the introduction and elaboration of these principles, the workshop participants engaged in animated discussions, exploring a number of benefits and drawbacks associated with AI.


Figure 2. Unique challenges of AI.

To quantify those challenges of AI, organizations can use three axes: Complexity, Autonomy, and Impact (see Figure 3).


Figure 3. Three axes of quantifying AI risks.
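The workshop did not prescribe a formula for combining the three axes; as a purely hypothetical illustration (the 1-5 rating scale and the multiplicative combination are invented here, not taken from the workshop), they could be folded into a single indicative score like this:

```python
def ai_risk_score(complexity, autonomy, impact):
    """Combine the three axes, each rated 1-5, into an indicative 0-1 score.
    The multiplicative form is an assumption for illustration: it reflects
    that high impact combined with high autonomy compounds risk."""
    for axis in (complexity, autonomy, impact):
        if not 1 <= axis <= 5:
            raise ValueError("each axis is rated on a 1-5 scale")
    return (complexity * autonomy * impact) / 125  # 125 = 5 ** 3

# Example: a chatbot drafting customer contracts could be rated as
# fairly complex (4), semi-autonomous (3), and high-impact (5).
print(ai_risk_score(4, 3, 5))
```

Whatever the exact weighting an organization chooses, the value of such a score lies less in the number itself than in forcing an explicit, comparable assessment per AI application.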

Because ChatGPT was quite new when the workshop took place (and still is relatively new today), it was top of mind for everyone in the session. One issue that received substantial attention was how ChatGPT might affect privacy and company-sensitive information. It is like being caught between two sides: on the one hand, you want to use this powerful technology and give your staff the freedom to use it too; on the other hand, you have to adhere to privacy rules and make sure your important company data remains confidential.

The discussion concluded by stressing the importance of the so-called “human in the loop”: it is crucial that employees understand the risks of AI systems such as ChatGPT when using them, and some level of human intervention should be mandatory. This naturally led to another dilemma, namely how to find the right balance between humans and machines (e.g., AI). Everyone agreed that how humans and AI should work together depends on the specific AI context. One thing was clear: the challenges with AI are not just about the technology itself. The rules (e.g., privacy laws) and practical aspects (what is the AI actually doing?) also matter significantly when we talk about AI and ethics.

There are upsides as well as downsides to working with AI. How do you deal with privacy-related documents that are uploaded to a (public) cloud platform with a Large Language Model? What if you create a PowerPoint presentation with ChatGPT and decide not to tell your recipients or audience? There are many ethical dilemmas, such as the lack of transparency of AI tools, discrimination due to misuse of AI, or Generative-AI-specific concerns such as intellectual property infringement.

However, ethical dilemmas are not the sole considerations. As shown in Figure 4, practical and legal considerations can also give rise to dilemmas in various ways.


Figure 4. Dilemmas in AI: balancing efficiency, compliance, and ethics.

The KPMG experts and participants agreed that it would be impossible simply to block the use of this type of technology; it is better to prepare employees, for instance by providing privacy training and encouraging the critical thinking needed to use Generative AI responsibly. The key is to consider what type of AI provides added value, as well as the associated cost of control.

After addressing the dilemmas, the workshop leaders concluded with some final questions and thoughts about responsible AI. Participants were interested in the biggest risks tied to AI, which match the five principles discussed earlier (see Figure 2). But the key lesson from the workshop was slightly different: using AI indeed involves balancing achievements and challenges, but opportunities should take priority over risks.

Workshop “How to achieve Digital Trust in practice?”

This workshop was based on KPMG’s recent work with the World Economic Forum (WEF) on Digital Trust and was presented by Professor Lam Kwok Yan (Executive Director, National Centre for Research in Digital Trust of the Nanyang Technological University, Singapore), Caroline Louveaux (Chief Privacy Officer of Mastercard) and Augustinus Mohn (KPMG). The workshop provided the background and elements of Digital Trust, trust technologies, and digital trust in practice followed by group discussions.


Figure 5. Framework for Digital Trust ([WEF22]).

The WEF Digital Trust decision-making framework can boost trust in the digital economy by enabling decision-makers to apply so-called Trust Technologies in practice. Organizations are expected to consider security, reliability, accountability, oversight, and the ethical and responsible use of technology. A group of major private and public sector organizations around the WEF (incl. Mastercard) is planning to operationalize the framework in order to achieve Digital Trust (see also [Mohn23]).

Professor Lam described how Singapore has been working to advance scientific research capabilities in Trust Technology. The Singapore government recognized the importance of Digital Trust and provided $50 million in funding for the Digital Trust Centre, the national centre for research in trust technology. While digitalization of the economy is important, data protection is an immediate concern, and concerns about distrust are creating opportunities for developing Trust Technologies. The aim is not only to identify which technologies can be used to enhance people’s trust, but also to define concrete, implementable functionality for the areas shown in Figures 6 and 7, as presented during the workshop.


Figure 6. Areas of opportunity in Trust Technology (source: Professor Lam Kwok Yan).


Figure 7. Examples of types of Trust Technologies (source: Professor Lam Kwok Yan).


Presentation by Professor Lam Kwok Yan (Nanyang Technological University), Helena Koning (Mastercard) and Augustinus Mohn (KPMG).

Helena Koning from Mastercard shared how Digital Trust is put into practice at Mastercard. One example was data analytics for fraud prevention. While designing this AI-based technology, Mastercard needed to consider several aspects of Digital Trust: it applied privacy guidelines, performed bias testing for data accuracy, and addressed the auditability and transparency of the AI tools. Another example was helping society with anonymized data while complying with data protection rules. When many refugees arrived from Ukraine, Poland needed to analyze how many Ukrainians were currently in Warsaw; Mastercard supported this by anonymizing and analyzing the data. Neither example could have been achieved without suitable Trust Technologies.

In the discussion at the end of the workshop, further use cases for Trust Technology were explored. Many participants had questions on how to utilize (personal) data while securing privacy. In many cases, technology alone cannot solve such a problem entirely; policies and/or processes also need to be reviewed and addressed. For example, in a pandemic-modeling case for healthcare organizations, the modeling was enabled without using actual data in order to comply with privacy legislation. In an advertising case, cross-platform data analysis was enabled to satisfy customers, while the solution ensured that data was not shared among competitors. The workshop also addressed the importance of content labeling to detect original data and prevent fake information from spreading.

For organizations, it is important to build Digital Trust by identifying suitable technologies and ensuring good governance of the chosen technologies to realize their potential for themselves and society.

Workshop “How to anticipate upcoming EU Data regulations?”

KPMG specialists Manon van Rietschoten (IT Assurance & Advisory), Peter Kits (Tech Law) and Alette Horjus (Tech Law) discussed the upcoming data-related EU regulations. An interactive workshop explored the impact of upcoming EU Digital Single Market regulations on business processes, systems and controls.

The EU Data Strategy was introduced in 2020 to unlock the potential of data and establish a single European data-driven society. Using the principles of the Treaty on the Functioning of the European Union (TFEU), the Charter of Fundamental Rights of the EU (CFREU) and the General Data Protection Regulation (GDPR), the EU Data Strategy encompasses several key initiatives that collectively work towards achieving its overarching goals. Such initiatives include entering into partnerships, investing in infrastructure and education and increased regulatory oversight resulting in new EU laws and regulations pertaining to data. During the workshop, there was focus on the latter and the following regulations were highlighted:

  • The Data Act
  • The Data Governance Act
  • The ePrivacy Regulation
  • The Digital Market Act
  • The Digital Services Act and
  • The AI Act.


Figure 8. Formation of the EU Data Economy.

During the workshop participants also explored the innovative concept of EU data spaces. A data space, in the context of the EU Data Strategy, refers to a virtual environment or ecosystem that is designed to facilitate the sharing, exchange, and utilization of data within a specific industry such as healthcare, mobility, finance and agriculture. It is essentially a framework that brings together various stakeholders, including businesses, research institutions, governments, and other relevant entities, to collaborate and share data for mutual benefit while ensuring compliance with key regulations such as the GDPR.

The first EU Data Space – European Health Data Space (EHDS) – is expected to be operable in 2025. The impact of the introduction of the EU Data Spaces is significant and should not be underestimated – each Data Space has a separate regulation for sharing and using data.


Figure 9. European Data Spaces.

The changes required by organizations to ensure compliance with the new regulations pose a great challenge, but will create data-driven opportunities and stimulate data sharing. This workshop provided a platform for stakeholders to delve into the intricacies of newly introduced regulations and discuss the potential impact on data sharing, cross-sector collaboration, and innovation. There was ample discussion scrutinizing how the EU Data Strategy and the resulting regulations could and will reshape the data landscape, foster responsible AI, and bolster international data partnerships while safeguarding individual privacy and security.

Key questions posed by the workshop participants concerned the necessity of trust and the availability of technical standards to substantiate the requirements of the Data Act. Combined with the regulatory pressure, the anticipated challenges create a risk that companies become compliant on paper only. The discussions confirmed that trust is essential, as security and privacy concerns were also voiced by the participants: “If data is out in the open, how do we inspire trust? Companies are already looking into ways not to have to share their data.”

In conclusion, the adoption of new digital EU Acts is an inevitable but interesting endeavor; however, companies should also focus on the opportunities. The new regulations require a change in vision, a strong partnership between organizations and a solid Risk & Control program.

In the next Compact edition, the workshop facilitators will dive deeper into the upcoming EU Acts.


The workshop sessions were followed by a panel discussion between the workshop leaders. The audience united in the view that adopting the latest developments in the area of Digital Trust requires a significant effort from organizations. To embrace the opportunities, they need to keep an open mind while proactively mitigating the risks that may arise with technological advancements.

The successful event concluded with a warm “thank you” to the three previous Editors-in-Chief of Compact, who oversaw the magazine for half a century, highlighting how far Compact has come. Starting as an internal publication in the early seventies, Compact has become a leading magazine covering IT strategy, innovation, auditing, security/privacy/compliance, and (digital) transformation topics, with the ambition to continue for another fifty years.


Maurice op het Veld (ESG), Marc van Meel (AI), Augustinus Mohn (Digital Trust) and Manon van Rietschoten (EU Data Acts).


Editors-in-Chief (from left to right): Hans Donkers, Ronald Koorn and Dries Neisingh (Dick Steeman not included).


[Mohn23] Mohn, A. & Zielstra, A. (2023). A global framework for digital trust: KPMG and World Economic Forum team up to strengthen digital trust globally. Compact, 2023(1).

[WEF22] World Economic Forum (2022). Earning Digital Trust: Decision-Making for Trustworthy Technologies.

How does new ESG regulation impact your control framework?

Clear and transparent disclosure of companies’ ESG commitments is becoming ever more important. Asset managers are increasingly aware of ESG, and there is an opportunity to show how practices and policies are implemented that lead to a better environment and society. Furthermore, stakeholders (e.g., pension funds) are looking for accurate information in order to make meaningful decisions and to comply with relevant laws and regulations themselves. Reporting on ESG is no longer voluntary, as new and upcoming laws and regulations demand that asset managers report more extensively and in more depth on ESG. Based on our yearly KPMG benchmark of Service Organization Control (hereinafter: “SOC”) reports of asset managers, we are surprised that, given the growing interest in and importance of ESG, only 7 out of 12 Dutch asset managers report on ESG, and even then on a limited scope and scale.


Before we get into the benchmark, we provide some background on the upcoming ESG reporting requirements for the asset management sector. These reporting requirements mainly relate to the financial statement. However, we are convinced that clear policies and procedures, as well as a functioning ESG control framework, are needed to achieve compliance with these new regulations. We therefore benchmark to what extent asset managers are (already) reporting on ESG as part of their annual SOC reports (i.e., ISAE 3402 or Standard 3402). We end with a conclusion and a future outlook.

Reporting on ESG

In this section we provide an overview of the most important and relevant ESG regulations for the asset management sector. Most ESG regulation is initiated by the European Parliament and Commission. We therefore start with the basis, the EU Taxonomy, which we discuss at a high level, followed by more detailed regulations such as the Sustainable Finance Disclosure Regulation (hereinafter: “SFDR”) and the Corporate Sustainability Reporting Directive (hereinafter: “CSRD”).

EU Taxonomy

In order to meet the EU’s overall climate and energy targets and the objectives of the European Green Deal by 2030, there is an increasing need for a common language among EU countries and a clear definition of “sustainable” ([EC23]). The European Commission has recognized this need and has taken a significant step by introducing the EU Taxonomy. This classification system, operational since 12 July 2022, is designed to address six environmental objectives and plays a crucial role in advancing the EU’s sustainability agenda:

  1. Climate change mitigation
  2. Climate change adaptation
  3. The sustainable use and protection of water and marine resources
  4. The transition to a circular economy
  5. Pollution prevention and control
  6. The protection and restoration of biodiversity and ecosystems

The EU Taxonomy is a tool that helps companies disclose their sustainable economic activities and helps (potential) investors understand whether those economic activities are environmentally sustainable or not.

According to EU regulations, companies operating within the EU with over 500 employees during the financial year are required to file an annual report on their compliance with the six environmental objectives on 1 January of each year, starting from 1 January 2023. The EU Taxonomy report serves as a tool for companies to demonstrate their commitment to sustainable practices and to provide transparency on their environmental and social impact. The annual filing deadline is intended to ensure that companies regularly assess and update their sustainable practices in order to meet the criteria outlined in the EU Taxonomy. Failure to file the report in a timely manner may result in penalties and non-compliance with EU regulations. It is important for companies to stay informed and up to date on the EU Taxonomy requirements to ensure compliance and maintain their commitment to sustainability.


SFDR

The SFDR was introduced by the European Commission alongside the EU Taxonomy and requires asset managers to disclose how sustainability risks are assessed as part of the investment process. The EU’s SFDR regulatory technical standards (RTS) came into effect on 1 January 2023. These standards aim to promote transparency and accountability in sustainable finance by requiring companies to disclose information on the sustainability risks and opportunities associated with their products and services. The SFDR RTS also establish criteria for determining which products and services can be considered sustainable investments.

There are several key dates that companies operating within the EU need to be aware of in relation to the SFDR RTS. Firstly, the RTS officially apply as of 1 January 2023. Secondly, companies are required to disclose information on their products and services in accordance with the RTS as of 30 June 2023. Lastly, companies are required to disclose this information in their annual financial reports as of 30 June 2024.

Compliance with the SFDR RTS and adherence to the specified deadlines is crucial; failure to comply may again result in penalties and non-compliance with EU regulations. Companies should also keep up with the SFDR RTS requirements to ensure that they provide accurate and relevant information on the sustainability of their products and services to investors and other stakeholders, as these stakeholders are required to disclose part of this information as well.


CSRD

The CSRD entered into force on 5 January 2023. This new directive strengthens the rules and guidelines regarding the social and environmental information that companies have to disclose. In time, these rules will ensure that stakeholders and (potential) investors have access to validated (complete and accurate) ESG information across the entire chain (see Figure 1). In addition, the new rules will positively influence companies’ environmental activities and drive competitive advantage.


Figure 1. Data flow aggregation. [Click on the image for a larger image]

Most of the EU’s largest (listed) companies have to apply the new CSRD rules in FY2024, for reports published in 2025. The CSRD makes it mandatory for companies to have their non-financial (sustainability) information audited. The European Commission has proposed to start with limited assurance on the CSRD requirements in 2024. This represents a significant advantage for companies, as limited assurance is less time-consuming and costly and will give good insight into current maturity levels. In addition, the Type I assurance report (i.e., design and implementation of controls) can be used as a guideline to improve and extend current measures in order to ultimately comply with the CSRD rules. We expect that the European Commission will demand a reasonable assurance report as of 2026. Currently, the European Commission is assessing which audit standard will be used as the reporting guideline.

Specific requirements for the asset management sector

In 2023, the European Sustainability Reporting Standards (ESRS) will be published in draft by the European Financial Reporting Advisory Group (hereinafter: “EFRAG”) Project Task Force for the sectors Coal and Mining; Oil and Gas; Listed Small and Medium Enterprises; Agriculture, Farming and Fishing; and Road Transport ([KPMG23]). The classification of the different sectors is based on the European Classification of Economic Activities. The sector-specific standards for financial institutions, which will be applicable to asset managers, are expected to be released in 2024, although the European Central Bank and the European Banking Authority both argue that the specific standards for financial institutions are a matter of top priority, given the sector’s driving role in the transition of other sectors to a sustainable economy ([ICAE23]). We therefore propose that financial institutions start analyzing the mandatory and voluntary CSRD reporting requirements, determine – based on a gap analysis – which information they already have and what is missing, and start working on the gaps.

Reporting on internal controls

European ESG regulation focuses on ESG information in external reporting. However, no formal requirements are (yet) set for the underlying ESG information and data processes themselves. In order to achieve high-quality external reporting, control over internal processes is required. Furthermore, asset managers are also responsible for the processes performed by third parties, e.g., the data input received from third parties. It is therefore important for an asset manager to gain insight into the maturity level of the controls on these processes as well.

Controls should cover the main risks of an asset manager, which can be categorized as follows:

  • Inaccurate data
  • Incomplete data
  • Fraud (greenwashing)
  • Subjective/inaccurate information
  • Different/unaligned definitions for KPIs

In order to comply with the regulations outlined in Figure 1, it is recommended to include the full scope of ESG processes in the current SOC reports of asset managers. Originally, the SOC report was designed to provide assurance on processes related to financial reporting over historical data. Nowadays, more and more attention is paid to non-financial processes, and users of SOC reports increasingly request and require assurance over non-financial reporting processes as well. We observe that some asset managers are including processes such as Compliance (more relevant for ISAE 3000A), Complaints, and ESG in their SOC reports. KPMG performed a benchmark on which processes are currently included in the SOC reports of asset managers. We discuss the results in the next section.


By comparing 12 asset management SOC reports for 2022, KPMG observed that 6 out of 12 asset managers include ESG in their system descriptions (description of the organization), and 7 out of 12 asset managers have implemented some ESG controls in the following processes:

  • Trade restrictions (7 out of 12 asset managers)
  • Voting policy (4 out of 12 asset managers)
  • Explicit control on external managers (4 out of 12 asset managers)
  • Emission goals / ESG scores (1 out of 12 asset managers)
  • Outsourcing (0 out of 12 asset managers)

We observe that reporting is currently mostly related to governance components. There is little to no reporting on environmental and social components. In addition, we observe that none of the twelve asset managers report on or mention third party ESG data in their SOC reports.
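The benchmark counts above can be tallied with a short sketch. The data below is a hypothetical reconstruction consistent with the reported counts, not KPMG's underlying dataset:

```python
from collections import Counter

# Hypothetical reconstruction of the benchmark: which ESG-related processes
# each of the 12 asset managers covers in its SOC report (illustrative only).
soc_reports = [
    {"trade restrictions", "voting policy"},
    {"trade restrictions", "voting policy"},
    {"trade restrictions", "voting policy", "external managers"},
    {"trade restrictions", "voting policy", "external managers"},
    {"trade restrictions", "external managers"},
    {"trade restrictions", "external managers"},
    {"trade restrictions", "emission goals / ESG scores"},
    set(), set(), set(), set(), set(),
]

coverage = Counter()
for processes in soc_reports:
    coverage.update(processes)

for process in ["trade restrictions", "voting policy", "external managers",
                "emission goals / ESG scores", "outsourcing"]:
    print(f"{process}: {coverage[process]} out of {len(soc_reports)} asset managers")
```

A tally like this makes it easy to see at a glance that coverage is concentrated in governance-related processes.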

We conclude that ESG information is not (yet) structurally included in assurance reports. This does not mean that ESG processes are not controlled; companies can have internal controls in place that are not part of a SOC report. In our discussions with users of assurance reports (e.g., pension funds), we received feedback that external reporting on ESG-related controls is perceived as valuable, given the importance of sustainable investing and upcoming (EU) regulations. Based on our combined insights from both the ESG assurance and the advisory perspective, we share our vision on how to report on ESG in the next section.

Conclusion and future outlook

In this article we conclude that only 7 out of 12 asset managers currently report on ESG-related controls in their SOC reports, and then only on a limited scope and scale. This is not in line with the risks and opportunities associated with ESG data, nor with active and upcoming laws and regulations. We therefore recommend that asset managers enhance control over ESG by:

  • implementing ESG controls as part of their internal control framework (internal reporting);
  • implementing ESG controls as part of their SOC framework (external reporting);
  • assessing and analyzing, together with their external (data) service providers and relevant third parties, which controls on ESG are missing.

The design of a proper ESG control framework starts with a risk assessment and the identification of opportunities. Secondly, policies, procedures, and controls should be put in place to cover the identified material risks. These risks need to be mitigated across the entire chain, which means that transparency within the chain and frequent contact among the stakeholders is required. The COSO model (commonly used within the financial sector) can be used as a starting point for a first risk assessment, in which we identify inaccurate data, incomplete data, fraud, inaccurate information, and unaligned definitions of KPIs as key risks. Lastly, the risks and controls should be incorporated into the organization’s annual risk cycle to ensure quality, relevance, and completeness. Please refer to Figure 2 for an example.
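As a sketch of how such a framework might be recorded, the snippet below links the five key risks named above to a COSO component and a mitigating control. All control names and mappings are invented examples for illustration, not a prescribed framework:

```python
# Minimal sketch of an ESG risk register linking each key risk to a COSO
# component and an illustrative mitigating control. All control names and
# mappings are hypothetical examples, not a prescribed framework.
risk_register = {
    "inaccurate data": {
        "coso": "Control Activities",
        "control": "reconciliation of ESG data against source systems",
    },
    "incomplete data": {
        "coso": "Control Activities",
        "control": "completeness checks on third-party data feeds",
    },
    "fraud (greenwashing)": {
        "coso": "Control Environment",
        "control": "independent review of sustainability claims",
    },
    "subjective/inaccurate information": {
        "coso": "Information & Communication",
        "control": "four-eyes review of external ESG disclosures",
    },
    "different/unaligned definitions for KPIs": {
        "coso": "Risk Assessment",
        "control": "central KPI definition catalogue, reviewed annually",
    },
}

def risks_without_controls(register):
    """Return the risks for which no mitigating control is documented yet."""
    return [risk for risk, entry in register.items() if not entry.get("control")]

print(risks_without_controls(risk_register))  # empty once all risks are covered
```

A simple completeness check like `risks_without_controls` is the kind of query the annual risk cycle would run to verify that every identified risk remains covered.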


Figure 2. Example: top risks x COSO x stakeholder data chain [Click on the image for a larger image]


[EC23] European Commission (2023, January 23). EU taxonomy for sustainable activities. Retrieved from:

[ICAE23] ICAEW Insights (2023, May 3). ECB urges priority introduction of ESRS for financial sector. Retrieved from:

[KPMG23] KPMG (2023, April). Get ready for the Corporate Sustainability Reporting Directive. Retrieved from:

Automation and IT Audit

The introduction of IT in organizations was rather turbulent. Under the motto “First there was nothing, then came the bicycle”, everyone had to learn to deal with the effects of IT on people and processes. This article describes the development of IT and the role of auditors in investigating the quality of internal control for the purpose of the financial audit. IT-related laws and regulations are discussed, as well as KPMG’s involvement with professional organizations and university study programs.

For the Dutch version of this article, see: Automatisering en IT-audit

The beginning of IT

From 1965 onwards, computers were introduced in organizations, mostly for simple administrative applications. In this period the accountancy profession had a slight wake-up call when, during audit work at clients, (assistant) auditors would report in despair: “They bought a computer.” At that time, there was no IT organization whatsoever.

Start automation with batch processing

Simple processes were automated and processed by computers (introduced in the meantime by IBM, BULL, and UNIVAC) that could only perform one process at a time. The responsibility for automation lay almost always with the administrative function of the organization.

The required programs were written in programming languages such as assembler or COBOL. The required functionality was elaborated on pre-printed forms, after which the programmers themselves had to record the instructions on punch cards. These large quantities of punch cards were read in the computer center and recorded on magnetic tapes, after which processing took place. The output was printed on paper. The same process was used for the processing of mostly administrative data. The computer was controlled through the so-called Job Control Language (JCL); computer operators could initiate operations with the aid of these JCL procedures.

In time, complexity grew and an expert in the area of computer control programs – the systems programmer – entered the scene. Both the quality and the effectiveness of the internal control measures within organizations came under pressure, as this new systems programming function could manipulate the results of processing out of sight of the internal organization.

The accountancy profession quickly acknowledged that automation could influence the quality of the system of internal control measures within organizations. As early as 1970, the Netherlands Institute of Register Accountants (NIVRA1) issued Publication No. 1, titled Influence of the administrative automation on the internal control. That same year, the Canadian Institute of Chartered Accountants issued the book Computer Control Guidelines, followed in 1974 by Computer Audit Guidelines. In 1975, NIVRA Publication 13 followed: The influence of automated data processing on the audit.

Use of the computer in the audit

It was a logical step for auditors to use the client’s computer or an in-house computer to obtain the information required for the audit. Standard packages such as AudiTape were marketed. Within KPMG, a department called the Automation & Control Group was created in 1971, with programmers who ensured that the audit practice was fully equipped. Next to the much-used statistical “currency ranking” sampling method, a better method was developed, called the sieve method.
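The “currency ranking” method mentioned here appears to correspond to what is now commonly called monetary unit sampling. As a rough illustration of the idea (not of KPMG's proprietary sieve method), the sketch below implements the fixed-interval variant, in which every currency unit in the population has an equal chance of selection, so high-value items are more likely to be drawn:

```python
import random

def monetary_unit_sample(items, sample_size, seed=0):
    """Fixed-interval monetary unit sampling: every currency unit in the
    population has an equal selection chance, so high-value items are more
    likely to be drawn. `items` is a list of (identifier, amount) pairs."""
    total = sum(amount for _, amount in items)
    interval = total / sample_size
    # Random starting point within the first interval, then fixed steps.
    start = random.Random(seed).uniform(0, interval)
    points = [start + i * interval for i in range(sample_size)]

    selected, cumulative, i = [], 0.0, 0
    for identifier, amount in items:
        cumulative += amount
        while i < sample_size and points[i] <= cumulative:
            if not selected or selected[-1][0] != identifier:
                selected.append((identifier, amount))
            i += 1
    return selected

# Illustrative ledger of invoice amounts (hypothetical data).
ledger = [("inv-001", 120.0), ("inv-002", 40.0), ("inv-003", 900.0),
          ("inv-004", 15.0), ("inv-005", 300.0)]
sample = monetary_unit_sample(ledger, sample_size=2)
# inv-003 exceeds the sampling interval, so it is always selected.
```

Any item whose amount exceeds the sampling interval is certain to be drawn, which is precisely why auditors favored this family of methods: material items cannot escape the sample.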

Needless to say, it was stressed that the audit client needed to be present at the processing runs of the software developed by the auditors, or of the standard audit software used.

The development of the COMBI tool (Cobol Oriented Missing Branch Indicator) within KPMG offered the possibility of using test cases to identify the “untouched branches” in a program, which could be applied efficiently during the development phase of programs.
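COMBI's idea — run test cases and flag the parts of a program they never reach — survives in modern coverage tooling. The sketch below is a line-level analogue in Python (not a reconstruction of the COBOL tool itself), using the standard tracing hook to record which lines of a function the test cases exercise:

```python
import dis
import sys

def untouched_lines(func, test_cases):
    """Run `func` on each test case while tracing, then report the source
    lines of `func` that no test case reached. A line-level analogue of
    COMBI's "untouched branches" idea, not the original tool."""
    code = func.__code__
    all_lines = {lineno for _, lineno in dis.findlinestarts(code)
                 if lineno is not None and lineno != code.co_firstlineno}
    hit = set()

    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            hit.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_cases:
            func(*args)
    finally:
        sys.settrace(None)
    return sorted(all_lines - hit)

# Example under test: these test cases never exercise the negative branch.
def classify(x):
    if x >= 0:
        return "non-negative"
    return "negative"

missing = untouched_lines(classify, [(0,), (5,)])
```

Here `missing` flags the line of `return "negative"`, telling the developer that a test case with a negative input is still needed — the same feedback loop COMBI provided during program development.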

Foundation of KPMG EDP Auditors

After a short start-up phase of a few years, in which specialized accountants used audit software on computers at audit clients, the Automation & Control (AC) group was established in the period 1971-1973. This group consisted of financial auditors with IT affinity (who were trained and rotated every three years) and programmers for the development of queries and audit software, such as the abovementioned COMBI.

In 1974, it was decided to establish a separate organizational unit entitled KPMG EDP Auditors (KEA, hereinafter KPMG). The attention of the auditor moved to engaging (IT audit) experts, who had to establish whether the system of internal control measures embedded in the organization was also anchored in the configuration of the information system. The same applied to acquiring certainty that application programs developed by or under the responsibility of the user organization would be processed unchanged and in continuity.

Specialized knowledge within the auditor’s organization was required due to the complexity arising from the introduction of large computer systems with online/real-time facilities, database management systems, and standard access control software (COTS, Commercial-Off-The-Shelf). After all, the organization had to be able to identify the impact that this new technology would have on internal controls and to recognize the implications for the auditor’s work.

It is in that context that in 1974 it was decided to issue a professional journal entitled Compact (COMPuter and ACcounTant). Initially, it was primarily intended to inform the (financial) audit practice, but it became increasingly appreciated by other organizations, mainly audit clients.

Introduction of complex computer systems

From 1974 onwards, the application and usage of computers accelerated, as the new computers could perform multiple tasks simultaneously. In addition, an infrastructure was created that allowed the user organization to collect data directly. The IT organization thus became an independent organizational unit, usually positioned within the Finance hierarchy.

IBM introduced the operating system MVS (Multiple Virtual Systems) and shortly after (1975) software under the collective name of Data Base Management Systems (DBMS) was marketed. The emphasis of IT applications was placed on on-line/real-time functionality. Other computer suppliers introduced similar systems.

The efforts of auditors in assessing the quality aspects of automation initially focused mainly on the physical security measures of the computer center and the availability of a tested emergency plan.

As the development from batch environments to online/real-time environments continued, the importance of logical security, as well as the quality of procedures, directives, and measures in the automation organization, came to the fore. Think of the arrangement of access control; back-up and recovery procedures; test, acceptance, and transfer procedures for application software; library management, etc.

The introduction of complex computer systems not only meant a migration of classically organized data to a new IT environment, but also a migration of control measures to higher software layers (access control systems, subschemas within DBMSs). The entire data conversion from the classical IT environment to on-line/real-time necessitated sound planning: defining phases, setting up procedures for data cleansing, determining the completeness and correctness of the data collection, and securing the data during the entire project.

Many Compact issues have discussed the complexity of these profound changes and their impact on internal and financial audits.

Minicomputer systems

More or less simultaneously with the introduction of large, complex mainframe computer systems, smaller computer systems were introduced. As these became bigger, they were called mid-range computers.

For the KPMG organization this meant further specialization, as the introduction of minicomputer systems in SME organizations usually had different consequences for the design of the system of internal controls and for the security measures to be taken in these organizations.

KPMG authors successively published articles in Compact on subjects such as the reliability of administrative data processing with minicomputers; the decentralized use of small computers: a new problem; and the influence of the implementation of small-scale automation on the audit.

Newer versions of mid-range computer systems had a more complex architecture, which offered better possibilities for realizing a securely operating IT organization. The security options for the popular IBM AS/400 system in particular were extensively covered in publications.

In addition, the security of PC systems and end-user computing was addressed. A Compact Special was entirely devoted to the manageability and security due to the increasing use of PC networks (including risks, methods, audit programs and tooling).

Auditing of information systems

Characteristic of automation is that, for data processing, the development of hardware and supporting software occurs (almost) simultaneously. Since the beginning of the seventies, auditors, and more specifically the experts specialized in IT auditing, also focused on auditing information systems to verify whether internal control measures were correctly configured not only in application programs, but also in the underlying operating system software.

A research methodology entitled “System Assessment approach & financial audit” was developed early on in the Netherlands and periodically updated on the basis of frequent usage. Later – in 1998 – this methodology was succeeded by the internationally adopted Business Process Analysis (BPA) method.

The rapid increase of electronic payments, and the mapping of its consequences for manageability, should be mentioned as well, along with the discussions about the use of encryption and the possible consequences of legislation.

The quality of the system development organization was also investigated. This development ultimately led to Enterprise Resource Planning (ERP) systems as applications became further integrated; subsequently, Quality Assurance measures were introduced to improve the control of ERP projects. The literature discussed both the underexposed management aspects of ERP implementations and the complexity of defining and implementing authorizations.

E-business advanced rapidly too. Electronic payments on the Internet became more or less self-evident, and e-mail conventions were developed. Assessing critical infrastructural security mechanisms, such as Public Key Infrastructure (PKI), for which a national framework of audit standards and procedures needed to be developed, became important to IT auditors. The KPMG PKI standards framework was later adopted internationally in the WebTrust standard. Above all, KPMG focused on the assessment of risk management in e-business environments.

Information Security Policy

Information Security has been in the spotlight ever since the start of the automation of data processing. Since the early eighties, the subjects of organizational security, logical security, and physical security (see Figure 1), as well as back-up, restart, recovery, and fallback options, have been considered in conjunction.


Figure 1. Class about Information Security (Dries Neisingh at the University of Groningen). [Click on the image for a larger image]

In the 90s, the Information Security policy was highlighted as a sound foundation for information protection. Since then, many KPMG authors have shared their knowledge and experience of Information Security in almost all Compact volumes.

Artificial Intelligence and Knowledge Based Systems

At the beginning of the eighties, an investigation was started within KPMG into the possibilities of using Artificial Intelligence (AI) in the audit. In 1985, “Knowledge-Based Systems (KBS): a step forward into the controllability of administrative processes” was introduced as a result of, amongst others, the developments in AI, higher processing speeds, and larger memories. The KBS software itself did not contain human-readable knowledge, but merely algorithms that performed processes based upon knowledge stored externally (in the rule base).
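That separation — a generic inference algorithm operating on externally stored knowledge — can be illustrated with a minimal forward-chaining sketch. The engine, rules, and facts below are invented examples for illustration, not KBS content from that era:

```python
# Minimal forward-chaining sketch: the inference engine itself contains no
# domain knowledge; it merely applies whatever rules it is given. The rules
# and facts below are invented examples for illustration.
def forward_chain(rules, facts):
    """Apply rules of the form (premises, conclusion) until no new facts
    can be derived, and return the complete set of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# "Externally stored knowledge": the rule base lives outside the engine
# and can be replaced without touching the inference algorithm.
rules = [
    ({"invoice unapproved"}, "payment blocked"),
    ({"payment blocked", "due date passed"}, "exception report"),
]
derived = forward_chain(rules, {"invoice unapproved", "due date passed"})
```

Swapping in a different rule base changes the system's behavior without changing a line of the engine, which is exactly the controllability argument the KBS work made for administrative processes.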

In the following years, there were new developments, as evidenced by publications on Structured Knowledge Engineering (SKE), developed by Bolesian Systems. Further to the above, KPMG published on “Control software and numerical analysis” and on “Testing as a control technique”.

Microcomputer in the audit

After the successful growth of the use of computers in the (financial) audit, attention partly shifted to the use of the microcomputer in the audit. In 1983, an automated planning system became operational. Subsequently, a self-developed audit package was demonstrated with which file examinations could be executed.

The use of the micro in organizations to support administrative information processing was covered extensively in publications, as was its use by the auditor as a control tool. The micro was used both stand-alone and as part of a network.

Within KPMG, two projects were started: the development of software for connecting the audit client’s computer with the auditor’s micro, and the development of control programs for processing on the auditor’s micro. KPMG’s Software Engineering department researched software engineering, operating systems (e.g., UNIX), computer viruses, electronic payment, and the debit card.

IT outsourcing

Organizational scale and/or financial capacity sometimes meant that automated data processing was outsourced to computer/IT service organizations, which usually made use of standard packages available on the market. IT outsourcing in particular grew rapidly in the nineties and the early years of this century.

Jointly founded IT organizations – as shared service centers – arose as well. An example is the founding of six computer centers spread across the country on behalf of healthcare insurers’ administrations. Each healthcare insurer used the same functionality and was connected on-line/real-time to one of the regional computer centers. From the start of this special cooperation, KPMG was involved as IT auditor for overall quality assurance. Several opinions were issued on the quality aspects of the newly established IT organization, on its effective operation in continuity, and on the automated business rules and controls of the software. After all, the health insurance funds themselves carried out the user controls.

NBA Publication 26, entitled Communications by the auditor related to the reliability and continuity of automated data processing, paid attention to these problems. Later, Publication 53 was issued regarding Quality opinions on information services. In practice these were named Third Party Statements (TPMs).

IT-related laws and regulations

Inspection and certification of IT

Since the beginning of the 80s, the subject of IT inspections and certifications regularly popped up on the agenda. The foundation “Institute to Promote Inspection and Certification in the Information Technology”2 was established. The Netherlands Standardization Institute (NNI3) was already working on a standard for quality systems for software. Within KPMG, much attention was paid to the possibility of issuing opinions on software quality systems, but also to the certification of software and development systems. Compact, for instance, published widely on the issues at hand.

Finally, the foundation KPMG Certification was established. In January 1998, it officially received the charter “Accreditation of BS 7799 Certification”4, after ICS (International Card Services, now part of ABN AMRO Bank) had received the first Dutch certificate for this international Information Security standard by the end of 1997.

In November 2002, the above accreditation of KPMG Certification was followed by the first accreditation and certification of PinkRoccade Megaplex (now part of telecommunications company KPN) for the certification scheme “Framework for certification of Certification Authorities against ETSI TS 101456”. This refers to the servicing of digital certificates for making use of digital IDs and signatures. Today, this is comparable to the eIDAS certification.

Memorandum DNB

The Memorandum “Reliability and continuity of automated data processing in banking”, published by the Dutch Central Bank (DNB) in 1988, was in itself no revelation. Since the start of KPMG’s IT Audit department, specialized IT auditors had been deployed in the audit practice of financial institutions for the assessment of internal controls in and around critical application software and of the measures taken in the IT organization.

Various Compact issues show that the IT audit involvement was profound and varied. My oration at the University of Groningen in 1991, entitled “There are banks and banks”, critically contemplated this Memorandum.

It is worth noting that in April 2001, the DNB presented the Regulation Organization and Control (ROB), in which previous memoranda such as the one regarding outsourcing of automated data processing were incorporated.

Computer crime

Since the mid 70s, publications under the heading “Computer Abuse” increasingly appeared. Several “abuse types” were subsequently described in Compact. The subject remains current.

In November 1985, the Minister of Justice installed the Commission “Information Technology and Criminal Law” under the presidency of Prof. H. Franken. KPMG was assigned by this commission to set up and perform a national survey among business and government to acquire (anonymous) insight into the quality and adequacy of internal controls and of security controls in IT organizations.

In the report that appeared in 1987, the image sketched of the quality and effectiveness of the measures taken was far from reassuring, both in small and in large IT environments. The committee therefore concluded (all things considered) that computer-related crime should be criminalized, taking into account the findings presented, where there were “breaches in secured work”.


Privacy

The creation of laws and regulations regarding privacy (the protection of personal data) has a long history. At the end of 1979, the public domain was informed in a symposium called “Information and automation”, which focused on the imminent national and international legislation regarding data protection, privacy protection, and international information transport.

Subsequently, Compact was used as an effective medium to inform employees and especially clients and relations about developments. In cooperation with the then Data Protection Authority5, a “New brochure on privacy protection” was issued by KPMG further to the Data Protection Act (Wpr) enacted in 1988. Especially since 1991, there have been many publications on privacy authored by KPMG employees. KPMG also conducted the first formal privacy audit in the Netherlands together with this privacy regulator.

In 2001, the new Dutch Data Protection Act (Wbp) replaced the Wpr – due to the EU Data Protection Directive 95/46/EC. At that time, an updated Privacy Audit Framework was also introduced by a partnership of the privacy regulator with representatives from some public and private IT auditing practices, including KPMG.

Compact 2002/4 published an interview with the chairman of the Board, entitled “Auditor as logical performer of Privacy audits. Personal Data Protection Board drives Certification”.

Transborder data flow

As early as 1984, an investigation was performed into the nature and scope of cross-border data traffic, and especially into the related problems and legislation in several countries.

In 1987, KPMG and the Free University of Brussels published the book entitled Transborder flow of personal data; a survey of some legal restrictions on the free flow of data across national borders. The document consisted of an extensive description per country of the relevant national legislation, from Australia to Switzerland and the OECD Guideline. It discussed the legal protection of personal data, the need for privacy principles and the impact of national and international data protection laws on private organizations.


Encryption

The use of encryption increased rapidly, partly because of the introduction of (international) payment systems. Other applications of encryption also arose, such as the external storage of data carriers. In 1984, the Ministry of Justice considered initiating a licensing system for the use of encryption in data communication. Such licenses would also have applied to the use of PCs, whether or not connected to a network.

Partly as a result of the outcome of KPMG’s investigation “Business Impact Assessment cryptography” and pressure from the business community, any form of encryption regulation was ultimately refrained from.

Legal services

Expanding the KPMG product portfolio with legal IT services was a logical consequence of the above developments; this occurred from 1990 onwards with the recruitment of lawyers specialized in IT and information law. The regulatory developments not only concerned the above-mentioned legal subjects, but also the assessment of, and advice on, contracts for the purchase of hardware and software packages and for the purchase of software development, as well as escrow (source code deposit), dispute resolution, the probative value of computer materials, copyright, etc.

Compact Special 1990/4 was already entirely devoted to the legal aspects of automation. In 1993, due to the developments in IT and law, KPMG published a book entitled 20 on Information Technology and law, to which KPMG authors and leading external authors contributed articles. In 1998, one of the KPMG IT lawyers obtained her doctorate with her PhD thesis Rightfully a TTP! An investigation into legal models for a Trusted Third Party. The legal issues had, and still have, many faces and remain an important part of servicing clients.

IT Audit and Financial Audit

The relationship between the IT audit and the financial audit practice has been strengthened over the years. As organizations started using IT more intensively in (all) business processes, the importance of anchoring internal control and security measures in the IT environment became inescapable. Determining that the system of internal controls in organizations was anchored in continuity in the IT environment required IT audit expertise. Initially, IT audit played a supporting role in the audit; however, the influence and significance of the use of IT in organizations became so immense that, seemingly, only employees with both an RE and an RA qualification would be capable of performing such an audit.

The publication of the Compact article 2001/3 entitled “Undivided responsibility RA for discussion: IT-auditor (finally) recognized” dropped a bombshell within the accountancy profession. Many KPMG authors published leading articles on the problem. The subject had already been considered in 1977 with the publication of the article “Management, Electronic information processing and EIV – auditor”. “Auditor – Continuity – Automation and risk analysis” was covered extensively in 1981. From 1983 onwards, articles on audit and ICT (Information and Communication Technology) were published quite regularly.

In recent years, Compact has explored this decades-long relationship between IT auditing and financial auditing in several articles, such as in Compact Special 2019/4 “Digital Auditing & Beyond”, in which the article by Peter van Toledo and Herman van Gils addresses this relationship.

The broadened field of IT Audit(or)

Over the years, it became clear that the quality of the general internal, security, and continuity controls significantly affected the ability to control the IT organization, and with it the entire organization and its financial audit. Subsequently, the effectiveness and efficiency of the general IT controls system attracted in-depth attention.

From the 80s onwards, KPMG’s Board decided to broaden the service offering by employing, next to auditors with IT expertise, (system) programmers as well as administrative information experts, computer engineers and the like, and finally even (IT) lawyers. Consequently, a wide range of services arose. The KPMG organization’s pioneering role within the industry also served as a model for the creation of the professional body NOREA.

As the integration of ICT continued to take shape, a further expansion of knowledge and services in that direction took place. Some employees obtained a PhD (promotion to dr.) or an additional legal degree (LL.M), and some even became (associate) professors.

The Chartered Accountants associated with KPMG were all members of NIVRA, now the NBA. The activities undertaken in this organization on behalf of the audit practice were mentioned before. It took quite some time before, in addition to NIVRA, a professional association of EDP Auditors was established (1992). The admission requirement for membership of the Netherlands Order of Chartered EDP Auditors (NOREA) was completion of a two- or three-year EDP Audit study at one of the three universities that offered this new degree. Of course, there were transitional arrangements for those with proven knowledge, expertise, and experience. As at NIVRA, a Board of Discipline was installed at NOREA.

Within NIVRA there was much interest in the development of IT and its consequences for the financial audit. However, IT expertise was initially concentrated mostly in the Netherlands Society for Computer Science of IT professionals (NGI6), in which KPMG played an important role in various working groups such as “Policy and risk analysis”, “Physical security and fall-back”, “Security supported by application software”, “Architecture”, and “Privacy protection and EDP Audit”.

University studies

A prominent practice like KPMG has a mission to also provide a stimulus to research and education in the field. Therefore, KPMG has made an important contribution over the years to university education in the area of both EDP Auditing and on the influence of the use of ICT on the control of organizations and on the financial audit.

This meant, on the one hand, the development of an EDP Audit study program and, on the other, the establishment of new university chairs/professorships in the area of IT Audit and administrative organization.

  • Already in 1977, Dick Steeman was appointed at the Erasmus University Rotterdam. Steeman took office as extraordinary professor by delivering the public lecture “Management, Electronic information processing and EIV-auditor”.
  • In 1990, Dries Neisingh was appointed professor at the University of Groningen, department of Accountancy, holding the chair “Reliability aspects of automated information systems”. His inaugural address was entitled “The Memorandum regarding reliability and continuity of automated data processing in banking (Memorandum DNB): there are banks and banks”.
  • At the beginning of 1991, the appointments followed of Cor Kocks as professor of EDP Auditing at the Erasmus University and Hans Moonen at Tilburg University.
  • In 1994, professor Ronald Paans joined KPMG. He was already professor of EDP Auditing at the VU Amsterdam (Free University).
  • In 2002, dr. Edo Roos Lindgreen was appointed professor “IT in Auditing” at the University of Amsterdam. In 2017 he was appointed professor “Data Science in Auditing”.
  • In 2004, dr. Rob Fijneman became professor IT Auditing at Tilburg University.

Figure 2 shows the management of KPMG’s IT Audit practice in 1987 with some of the above-mentioned people.


Figure 2. Management of KPMG’s IT Audit practice upon retirement of Han Urbanus (in 1986). From left to right Dick Steeman, Dries Neisingh, Hans Moonen, Tony Leach, Han Urbanus and his wife, Jaap Hekkelman (chairman of NGI Security), Cor Kocks and Herman Roos. Han Urbanus and Dick Steeman jointly founded KPMG EDP Auditors and started Compact magazine. [Click on the image for a larger image]


The introduction of Compact in April 1974 was an important initiative of KPMG’s IT Audit Board. The intention was to publish an internal publication on IT subjects on a regular basis. The standard layout came to comprise one or a few technical publications, IT developments, ABC News (Automation, Security, and Control), new books and articles added to the library and, finally, “readers’ comments”. In the first years, ABC News articles were frequently drawn from EDPACS7 magazine and other international publications.

The first issue started with the article “The organization of testing” and a contemplative article on “Software for the benefit of the financial audit: an evaluation”. In the second issue, the first article was continued with subjects such as test monitoring, acceptance testing and system implementation.

Over the years, Compact became increasingly widespread: both clients and potential clients appeared highly satisfied with the quality of the articles and the variety of subjects. Compact developed into a professional technical magazine! The authors were KPMG employees, with occasional contributions from external authors.

Since 1983, articles regularly addressed the relationship between Audit and IT. In Compact Specials, the editorial staff renders the meaning of such a special issue: “as usual every year a Special appears on audit and IT Audit. In the meantime, it has become habitual to confront CPAs and Register EDP Auditors (RAs, REs and RE RAs) with the often-necessary deployment of EDP Auditors in the financial audit practice after the completion of the audit of the financial statements and after the cleaning of files and prior to the layout of files for the new year”.

On the occasion of 12.5 years of Compact on Automation & Control, the book 24 about EDP Auditing was published in 1986. The book contained a bundle of updated articles from Compact, written by 24 authors. The preface started with a quote by Benjamin Disraeli: “the best way to become familiar with a subject is to write a book about it”.

Increasingly, Compact Special issues were published. In 1989, a Special appeared on “Security” and in 1990 on “The meaning of EDP Auditing for the Financial auditor”. Five external authors from the business community contributed to this Special as well, alongside a public prosecutor and Prof. mr. H. Franken.

In the run-up to the 21st century, it rapidly became clear to many organizations, and especially to the IT sector and EDP Auditors, that problems would definitely arise in the processing of data by application software as a result of the millennium change. Compact became an important medium for drawing attention to this, both internally and externally. Compact 2000/1 looked back with the article “Across the threshold of the year 2000, beyond the millennium problem?”.

The anniversary issue 25 years of Compact appeared in 1999/2000. Of the 57 authors, 50 were employed by KPMG in various functions and seven were external authors (among them a few former employees). It was a dashing exploit: a publication of 336 pages with 44 articles. The introductory article was called “From automation and control to IT Audit”. The article “Essential assurance over IT” largely walked through the clusters of articles.

Barely recovered from the millennium problem, the introduction of the euro presented itself. In Compact 2000/1, attention was paid to the introduction of the euro with the article entitled “And now (again) it is time for the euro”. The Compact issues 2000/5 and 2000/6 were entirely devoted to all aspects of the conversion to the euro. Articles were published under the headers “Euro conversion: a sound preparation is not stopping at half measures” and “Implement the euro step by step: drawing up a roadmap for the transition”, as well as “Validating the euro conversion”, “Euro emergency scenarios” and “Review of euro projects”.


In the thirty years briefly reflected on in this article, a lot has happened in the area of the development and application of IT in business and government. For (financial) auditors, it was not easy to operate in this rapidly changing environment. Training courses were not available, and knowledge was sparsely present within or outside the organization.

KPMG took the lead in making these problems accessible to accountants through the creation of KPMG EDP Auditors and the simultaneous start of Compact magazine. In addition, different types of IT professionals were recruited alongside auditors. Many are to be thanked (the promoters and the successive generations) for the fact that, with the start of KPMG EDP Auditors and the broadening of knowledge areas, the emerging market demand could be served adequately. KPMG ensured in good time that sufficient time and investment could be devoted to education and product development; this is why KPMG EDP Auditors could lead the way in the market.

The thirty years (1971-2002) have flown by. A period in which many have contributed and can look back with satisfaction. This is especially true for the author of this article who has summarized an earlier article of almost sixty pages.


  1. Currently, the Royal Netherlands Institute of Chartered Accountants (NBA).
  2. Original name: “Stichting Instituut ter bevordering van de keuring en Certificatie in de Informatie Technologie (ICIT)”.
  3. Currently the Netherlands Standardization Institute is named NEN: NEtherlands Norm.
  4. Currently known as ISO 27001 accreditation.
  5. The Data Protection Authority has had different names, aligned with the prevailing Privacy Act. Currently it is named the Authority for Personal Data (in Dutch: “Autoriteit Persoonsgegevens”); before that, the Personal Data Protection Board (in Dutch: “College Bescherming Persoonsgegevens”); and initially the Registration Office (in Dutch: “Registratiekamer”).
  6. Currently the KNVI, the Royal Netherlands Society of Information Professionals.
  7. EDPACS was issued by the EDPAA (EDP Auditors Association); currently, the ISACA Journal is published by ISACA, the Information Systems Audit and Control Association.

Spanning fifty years of IT & IT Audit with only four Editors-in-Chief

To commemorate the fifty-year milestone of Compact, the acting Editor-in-Chief interviewed his three predecessors. The early years and history of fifty years of Compact are covered, as well as their expectations for the future of Compact as disseminator of knowledge and good practices.

Editors-in-Chief of Compact magazine


Dick Steeman, retired, Editor-in-Chief 1974 – 1994

Dries Neisingh, retired, Editor-in-Chief 1994 – 2002

Hans Donkers, ex-partner KPMG, founder WeDoTrust, Editor-in-Chief 2002 – 2015

Ronald Koorn, partner KPMG, Editor-in-Chief 2015 – current

What were remarkable developments in your Compact era?

We started Compact when punch cards were still around, while financial institutions and multinationals began to use new IBM systems with keypunch and programming capabilities (S/360, S/3) that were far more efficient in automating their massive administrative processes. Initially, accountants used their own computer for “auditing around the computer”. In the early days, the audit focus was on data centers and the segregation of duties within IT organizations.

Programming knowledge was lacking at accounting firms in the seventies, so we first wrote articles on programming, testing and data analytics for our Financial Audit colleagues. Clients such as Heineken, KLM and ABN AMRO were keen on obtaining Compact as well. That’s how the magazine expanded. Due to the influence of Herman Roos and KPMG’s Software Engineering unit, Compact articles also addressed more technical subjects. So, the target group broadened beyond financial/IT auditors to IT specialists, IT directors and CFOs/COOs.

A nice anecdote is that when we issued Compact editions internally within KPMG during the first few years, we were proactively approached by the publishing company Samsom (now Wolters Kluwer), which offered its services for publication and distribution. We were contractually required to issue four editions annually, which in some years was challenging to accomplish – especially on top of all regular work. In other years, we completed four regular editions as well as a Compact Special on upcoming IT-related developments, such as the euro migration, Y2K (Millennium), ERP systems or new legislation (e.g., privacy and computer criminality).

In 2001, we issued our first international Compact edition (coordinated by the interviewer), as we wanted to address international variations and best practices. It was distributed to 25 major KPMG countries for their clients – although several non-native English authors overestimated their English writing proficiency.

Compact has always focused on exchanging good practices, and organizations are quite keen on learning from leading companies and their peers. Therefore, we changed the model from a partly paid subscription model (in which authors were paid as well), via a controlled-circulation model, to a publicly available magazine. Writing articles was also an excellent way for less experienced staff to dive into a subject and structure it well for (novice) readers to understand. Of course, we have also been in situations where we had to hunt for (original) copy and actively entice colleagues to showcase their knowledge and experience in order to adhere to our quarterly publishing schedule. Several authors never completed their epistle, but luckily we always managed to publish a full edition.

We’re all pleasantly surprised by the current depth and quality and that Compact survived this long!

The name Compact was derived from COMPuter & ACcounTant. What do you see as the future target audience?

Besides the traditional target groups of IT auditors, IT consultants and IT-savvy financial auditors, it is also very useful for students. They can supplement their theoretical knowledge with practical examples of how technology can be applied in a well-controlled manner in a business context. There still are very few magazines highlighting the subjects that Compact addresses, such as IT Governance and IT Quality.

At the very least, accountants (CPAs) need to know about IT, given its criticality to their financial audits; they cannot entirely outsource that to IT auditors. They should also address in their Management Letter whether “IT is in Control”. Of course, Compact is and should remain a good medium for communicating good practices to CFOs, CIOs and CEOs. Sometimes this knowledge sharing can be achieved indirectly via an IT-savvy accountant.

A brief history of IT & IT Auditing

As the past fifty years have been addressed in multiple articles in this edition, we have tried to consolidate the main trends in a summary table. We have aligned this summary with the model in the article “Those who know their past, understand their future: Fifty years of information technology: from the basement to the board” elsewhere in this Compact edition.

Several developments spanned multiple decades; we have only indicated the phase in which the main genesis took place.


How can the Editorial Board further improve Compact?

Compact has survived where other magazines were terminated or simply faded out. For commercial IT magazines it is challenging to sustain a viable revenue model. It is therefore recommended to keep Compact free of charge and objective, and to emphasize the thoroughness of IT Audit and IT Advisory work based on a foundation of factual findings. That is a real asset in this ever-changing IT domain, where several suppliers promise you a “cloud cuckoo land” and where ISO certifications are skin-deep. Furthermore, it is recommended to include articles written together with clients, as well as photographs, to make the magazine more personal.

More authors could showcase their deep expertise with articles, which also guarantees the inflow of articles and the continuity of Compact. Furthermore, you can leverage the network of all internal and external authors and their constituents to market the expertise of authors. For instance, besides informing C Level, accountants, IT consultants and IT auditors of relevant IT themes, you could also inform a broader group in society. In the past, Compact authors were interviewed for newspapers, TV, industry associations, etc.

About the Editors-in-Chief

Dick Steeman is a retired KPMG IT Audit partner in the Netherlands. Together with Han Urbanus, he established KPMG EDP Auditors and launched Compact. He was the Editor-in-Chief of Compact from 1974 until 1994.

Dries Neisingh is a retired KPMG IT Audit partner in the Netherlands. During his working life he was a Chartered Accountant, a chartered EDP Auditor and professor of reliability and security aspects of IT at the University of Groningen. He was involved with Compact right from the first issue in 1974 and was the Editor-in-Chief from 1994 until 2002.

Hans Donkers used to be a partner at KPMG and is one of the founders of WeDoTrust. He was the Editor-in-Chief of Compact from 2002 until 2015.

Ronald Koorn is an active partner at KPMG in the Netherlands and has been the Editor-in-Chief of Compact since 2015.

Compact editors

Besides the Editors-in-Chief, we also wish to specifically thank the following editors, each with an Editorial Board tenure of at least ten years:

  • Aad Koedijk
  • Piet Veltman
  • Rob Fijneman
  • Brigitte Beugelaar
  • Deborah Hofland
  • Pieter de Meijer
  • Peter Paul Brouwers
  • Maurice op het Veld
  • Jaap van Beek

And the Compact support staff over the decades: Henk Schaaf (editor), Sylvia Kruk, Gemma van Diemen, Marloes Jansen, Peter ten Hoor (publisher at Uitgeverij kleine Uil and owner of LINE UP boek en media), Annelies Gallagher (editor/translator), Minke Sikkema (editor), Mirjam Kroondijk and Riëtte van Zwol (desktop publishers).

Five years of GDPR supervision at a glance

Ever since the General Data Protection Regulation (GDPR) came into effect, privacy has become a prominent issue. Apart from the ongoing debates on the precise interpretation of legal provisions, there have been notable developments in the enforcement actions undertaken by the Dutch Data Protection Authority. In this article, we reflect upon the fines that have been imposed by the Dutch Data Protection Authority in recent years, which have drawn significant attention. As an organization, what measures should you take to avoid being subjected to similar enforcement actions?


The General Data Protection Regulation (hereinafter referred to as “GDPR”) entered into force in May 2016, and organizations were granted a two-year transition period, until May 2018, to align their business operations with the GDPR. After this period, the Data Protection Authorities were authorized to enforce the GDPR, including the imposition of a maximum fine of 20 million euros or 4% of an organization’s annual global turnover, whichever is higher. Despite this, the Dutch Data Protection Authority (hereinafter referred to as “the Dutch DPA”) was hesitant to impose fines, even after the expiration of the transition period. Only a few fines were issued in the initial years following 2018, as per the annual reports of the Dutch DPA. The reasons cited for this were the authority’s limited capacity and the decision to allocate that capacity primarily towards significant, high-impact investigations, such as the childcare benefits scandal (“toeslagenaffaire”) or issues related to the coronavirus. It was not until 2021 that the Dutch DPA began to expedite its enforcement efforts, resulting in a greater number of organizations being fined, and for larger amounts. This trend was also observed among other European Data Protection Authorities – see Figure 1.1


Figure 1. Overview of the number and sum of fines from European privacy regulators ([CMS23]). [Click on the image for a larger image]

Given the Dutch Data Protection Authority’s recent implementation of regular fines, it is essential to reflect on the measures that organizations must undertake to ensure GDPR compliance and avoid facing a fine. This article examines one or more administrative fine decisions for each fine category as defined by the Dutch DPA.2 We provide a comprehensive discussion of the following categories for which fines have been imposed by the Dutch DPA:

  • inadequate basis for data processing;
  • insufficient fulfilment of information obligations;
  • insufficient implementation of data subjects’ rights;
  • non-compliance with general data processing principles;
  • inadequate technical and organizational measures;
  • insufficient compliance with data breach notification requirements.


Figure 2. Overview of the number and sum of fines by fine category ([CMS23]). [Click on the image for a larger image]

Fine guidelines from the DPA

The Dutch DPA’s fine system is divided into various categories, which reflect the severity of the infringement. Each category is linked to a particular fine range, within which the Dutch DPA determines the final sum of the fine, taking into account the circumstances of the infringement. These factors include the nature, duration and gravity of the breach, the extent of damage incurred, and the number of data subjects affected. Furthermore, if the Dutch DPA deems the fine range inappropriate for the breach, it may impose a fine beyond the set limit, subject to an absolute maximum of 20 million euros or 4% of annual global turnover. For a comprehensive overview of the classification by category, refer to the Dutch DPA’s published policy rules ([AP19a]).
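The “whichever is higher” ceiling described above is simple arithmetic. The sketch below illustrates how the statutory maximum scales with turnover; the function name and the example turnover figures are ours for illustration and are not part of the DPA’s policy rules, which set the actual fine within a category range case by case:

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Statutory ceiling for the most serious infringements:
    EUR 20 million or 4% of annual global turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# Turnover of EUR 400 million: 4% is EUR 16 million, so the
# EUR 20 million floor applies.
print(gdpr_max_fine(400_000_000))    # 20000000.0

# Turnover of EUR 2 billion: 4% is EUR 80 million, which exceeds
# EUR 20 million and therefore becomes the ceiling.
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
```

Note that this computes only the legal maximum; the fines discussed in this article (e.g. €725,000, €600,000) fall well below it, as they are set within the category ranges of the DPA’s policy rules.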

Administrative fine decisions by the DPA

Inadequate basis for data processing

Using fingerprints for employee time clocks based on consent

Personal data can be divided into two categories: regular and special categories of personal data. Regular personal data includes information such as name, address, and telephone number, whereas special categories of personal data comprise sensitive information such as health data or political views. Due to the sensitive nature of the latter, the processing of special categories of personal data is generally prohibited.

In April 2020, the Dutch DPA imposed a fine on a company for the unlawful processing of special categories of personal data ([AP19d]). The company used fingerprint scanners for employee timekeeping purposes. Fingerprints are classified as biometric data and fall under the special categories of personal data. While Article 29 of the Dutch GDPR Implementation Act (UAVG) permits the processing of such data for security purposes, in this case the fingerprints were only used for attendance and timekeeping, which does not fall under this exception. Employee consent could also be an exception, but consent is generally not presumed to be freely given in a dependent relationship such as that between employer and employee. Furthermore, obtaining consent is not enough; the company must also be able to prove it. In this case, the company was unable to prove consent and, as a result, was found to be in violation of the processing prohibition of Article 9 of the GDPR. The Dutch DPA imposed a fine of €725,000.

The DPA’s investigation re-emphasizes the conditions attached to the data subject’s consent. Consent is legally valid only when it is given freely and unambiguously and the data subject is sufficiently informed. Refusing consent must not have adverse consequences of any kind. Consent must also be demonstrable.

WIFI tracking on a general legal basis

The processing of personal data, including regular personal data, must be based on one of the legal bases provided in Article 6 of the GDPR. The municipality of Enschede claimed that it was allowed to process personal data for the purpose of measuring crowds in the city center on the basis of performing a public task. To achieve this, eleven sensors were used to continuously capture WIFI signals from passing citizens, which were then stored under a pseudonym. However, the public task that serves as the basis for the processing of personal data must be set out in a statutory provision. The municipality relied on Article 160 of the Municipalities Act, but the Dutch DPA deemed this provision too broadly formulated, and stated that citizens could not infer from it that their personal data was being processed. Moreover, the basis of legitimate interest did not apply in this situation either. As a rule, a public body cannot rely on legitimate interest as a basis, as its tasks must be defined in a statutory provision. An exception exists where a public body acts as a private party, but that exception did not apply in this situation.

In addition to the absence of a specific legal basis for WIFI tracking, the necessity requirement was not met as measuring crowds can be done in a much less intrusive way. Furthermore, the data was stored for a long period, which could allow citizens to be tracked and patterns of living to be identified. For instance, it was possible to determine where someone worked. Due to these multiple violations, the processing by the municipality of Enschede can be considered unlawful, and the Dutch DPA imposed a fine of €600,000 ([AP21a]).

The DPA’s investigation emphasizes that government organizations should be careful not to base processing operations on overly general provisions. In addition, a thorough assessment of the necessity requirement should also be made.

Using legitimate interest for purely commercial purposes

Article 6 of the GDPR mentions pursuit of a legitimate interest as the last possible basis for processing personal data. It is generally known that a public authority cannot rely on this, but there is still uncertainty as to whether a private party with exclusively commercial interests can do so.

In this regard, the Dutch tennis association “De Koninklijke Nederlandse Lawn Tennis Bond” (hereinafter referred to as KNLTB) provided personal data of its members to two sponsors for promotional purposes. One sponsor used members’ addresses to distribute discount flyers, and the other approached members by phone with an offer. The KNLTB argued that the data was provided on the basis of a legitimate interest. However, according to the Dutch DPA, its reasoning cannot be considered a legitimate interest. For a successful appeal to a legitimate interest, the processing must be necessary to serve the interest, the interest of the data subject must not outweigh the legitimate interest, and the interest must itself be a legitimate one. According to the Dutch DPA, the latter requirement means that the interest must be named as a legitimate interest in (general) legislation or elsewhere in the law: it must be an interest that is protected and enforceable in law, and the (written or unwritten) rule of law must be sufficiently clear and precise. The rule of law to which the KNLTB attached the processing was the freedom of enterprise. The Dutch DPA deemed this interest insufficiently concrete to qualify as a legitimate interest. Consequently, a fine of €525,000 was imposed on the tennis association ([AP19e]).

The KNLTB contested the fine imposed by the Dutch DPA and appealed the decision. The national court, facing uncertainties about the interpretation of the concept of “legitimate interest,” referred preliminary questions to the European Court of Justice (hereinafter referred to as the ECJ). A preliminary question is a query that a national court can ask the ECJ to interpret European law. The position taken by the Dutch DPA has been previously contradicted by the European Commission and by the court in the VoetbalTV case, where the Dutch DPA took a similar stance on legitimate interest. It remains to be seen whether the Court of Justice will concur with the Dutch DPA’s interpretation.

Whether a private party can process personal data based on a legitimate interest with exclusively commercial interests is not sufficiently clear from the DPA’s fine decision. It is advisable to use this basis as restrictively as possible.

Insufficient fulfilment of information obligations

A privacy statement that does not match the target audience

In 2021, the widely used social media platform TikTok was fined €750,000 by the Dutch DPA for violating the requirements of the first paragraph of Article 12 of the GDPR ([AP21b]). This article stipulates that organizations must provide data subjects with information about the processing of their personal data in a concise, transparent, easily accessible, and understandable form using clear and simple language. Typically, this information is presented in the form of a privacy statement. However, TikTok’s privacy statement was only available in English to its Dutch users, who primarily consist of young people under the age of 16. Given this demographic, TikTok could not assume that their users were proficient in English.

It is therefore important for organizations to determine the target audience in advance. Based on this, a comprehensible privacy statement can be drafted using an average member of the intended target group as a benchmark. It is also important that a translation of the privacy statement is available if the target group speaks a different language. If there is a target group consisting of young people, who enjoy specific protection under the GDPR, a privacy statement that is also understandable for younger target audiences will have to be drafted.

Insufficient implementation of data subjects’ rights

An access request in line with Article 12 GDPR

Article 12 of the GDPR sets out specific regulations regarding the exercise of data subjects’ rights, including the right to access. This right requires that the provision of data be free of charge, unless the requests made by the data subject are unfounded or excessive, particularly in cases of repetitiveness. The assessment of what constitutes repetitiveness must be done on an individual basis. The Bureau Krediet Registratie (hereinafter referred to as BKR) found this out first-hand. The BKR provided two options for submitting a right of access request: either electronically (which required payment) or once a year by post, free of charge. The Dutch DPA deemed the default requirement of electronic payment for a right of access request to be incompatible with Article 12 of the GDPR and penalized the BKR with a fine of €830,000 ([AP19c]).

According to the Dutch DPA, the option of a free annual request for access by post did not alter BKR’s violation of Article 12 of the GDPR. Similarly, limiting free access to personal data to once per year via post was also found to be in violation of this provision. Whether a request for access is excessive or unfounded should be determined on a case-by-case basis, and the fact that a data subject requests access more than once per year does not necessarily make the request excessive.

It is important to establish the identity of the data subject when responding to a request for access. However, DPG Media was fined by the Dutch DPA for requesting a copy of proof of identity from data subjects in order to establish their identity ([AP22a]). The DPA considered this too intrusive, especially because of the sensitive nature of identification documents. The DPA stated that the least intrusive way to identify data subjects should be used, for example by combining information already held by the controller. This could include a customer number combined with an address.

It is therefore important to ensure that requests for access are free of charge and that any seemingly excessive request is assessed on an individual basis. For the identification process, the least intrusive means of identification should be chosen; in any case, requesting a copy of an identification document is considered too intrusive.

Non-compliance with general data processing principles

A European representative for organizations outside Europe

The GDPR applies both to organizations based in the European Union and to those based outside the EU if they focus on processing personal data of EU citizens. One website operator outside the EU found this out the hard way: it did not comply with the requirement of Article 27 of the GDPR to appoint an EU representative in writing. The operator was under the impression that, because it was not based in the EU, it did not have to comply with the GDPR. This was not the case, and it resulted in a fine of €525,000 ([AP20d]).

Due to the international nature of the internet, organizations will more than likely process personal data of EU citizens at some point. If this is the case, for example because your website is available in the EU and the euro can be used as currency for transactions, you will probably have to comply with the obligations of the GDPR. In that case, you also need to appoint an EU representative.

Inadequate technical and organizational measures

Inadequate security of internal systems

One of the first fines imposed by the Dutch DPA after the GDPR came into effect was against the HagaZiekenhuis. The hospital was fined because its medical patient records were not adequately secured, as a result of which numerous employees accessed the files of a Dutch celebrity without any legitimate reason to do so. According to the Dutch DPA, the hospital was obligated to monitor access to these records. Moreover, the security measures were found to be inadequate because multi-factor authentication was not implemented. As a result of the insufficient security measures, the HagaZiekenhuis was fined €460,000 ([AP19b]).

Two years later, a similar situation occurred at another hospital, Amsterdam’s OLVG. Inadequate monitoring of accessed records and insufficient security resulted in a fine of €440,000 imposed by the Dutch DPA ([AP20c]). Inadequate security of internal systems has been seen in several organizations. For example, maintenance company CP&A was fined €15,000 for inadequately securing its absence registration system ([AP20a]), the Ministry of Foreign Affairs was fined €565,000 for inadequate security of the National Visa Information System (NVIS) ([AP22b]), and the UWV had taken insufficient technical measures to secure the process for sending group messages, which resulted in a fine of €450,000 ([AP21c]).

Just like hospitals, health insurers deal with medical data of data subjects; access to such sensitive personal data should therefore be restricted, through authorizations, to only those employees who need it to perform their duties. However, an investigation by the Dutch DPA found that marketing staff at health insurer Menzis had access to sensitive personal data. It is important to note that accessing personal data also qualifies as processing under the GDPR. Apart from the inadequate access rights, Menzis also failed to maintain log files. Although there was no evidence that the marketing staff actually accessed this personal data, the mere possibility of such access was enough for the Dutch DPA to impose an order subject to fines for noncompliance on Menzis ([AP18]).

Viewing personal data also qualifies as processing under the GDPR. It is advisable to grant access to such data only to employees for whom it is necessary. It is also important to ensure that systems log who has viewed personal data, so that unauthorized access can be monitored.

Insufficient password requirements

In addition to multi-factor authentication, it is important to establish password requirements to prevent data breaches. In September 2019, Transavia's systems were hacked through two accounts belonging to the company's IT department. The hackers were able to access these accounts easily, as they did not require multi-factor authentication and the passwords were easily crackable, such as "12345" or "Welcome." Additionally, these accounts provided sufficient access for the hackers to breach the larger systems without further security thresholds in place. Despite Transavia's timely reporting of the data breach, the Dutch DPA imposed a fine of €400,000 due to the seriousness of the breach ([AP21d]).

The level of security that must be strived for under Article 32 GDPR depends on the risk associated with the processing. An adequate security level is determined on the basis of various factors, such as the nature and scope of the personal data being processed.

Insufficient compliance with data breach notification requirements

Failure to report data breaches (on time)

The final category of fines pertains to data breaches, which unfortunately are a common occurrence in many organizations. Unauthorized persons may gain access to personal data, or such data may be inadvertently released or destroyed. Such an occurrence is referred to as a data leak, which must be reported to the Dutch DPA within 72 hours if there is a potential risk to the data subject(s). For instance, PVV Overijssel experienced a data leak when an email was sent to 101 recipients, making all the email addresses visible to everyone. As a result of its failure to comply with the notification requirement, PVV Overijssel was fined €7,500 ([AP20b]). Another organization was also fined for a data breach in which an unknown third party gained access to the personal data of data subjects. Because it did not report the data breach to the Dutch DPA within 72 hours of discovery, it ultimately received a fine of €475,000 ([AP20e]).

Ideally, of course, you would prevent a data leak altogether, for instance by taking appropriate technical and organizational measures, but no system is one hundred percent watertight. In the event of a data leak, it is essential to report it in good time in order to limit the damage to data subjects and to your organization as much as possible. Swift action should be taken to plug the leak, and by subsequently tightening up security, a similar leak can be prevented in the future.


Although the Dutch DPA has only issued 22 public fines in recent years, this should not lead organizations to believe that they are exempt from Dutch DPA investigations and potential fines. It is a misconception that only large organizations are targeted by the Dutch DPA, as was demonstrated by the fine imposed on PVV Overijssel.

It is important to note that the Dutch DPA has significant discretion in terms of the sanctions it can impose. The range of enforcement options includes fines, orders subject to fines for noncompliance, or a combination of both. The Dutch DPA can also issue reprimands or formal warnings, although the latter appears to be used less frequently. In fact, the last formal warning issued by the Dutch DPA was in 2020 ([AP20f]).

Organizations should strive to avoid sanctions by drawing lessons from the Dutch DPA’s overview of fines. One key takeaway is the importance of having a lawful basis for processing personal data. For example, a company was fined for unlawfully processing special personal data in the form of fingerprints, while a municipality was fined for collecting location data of citizens in a disproportionate manner. The Dutch DPA has also provided guidance on the meaning of “legitimate interest” in the context of the Dutch tennis association’s fining decision, although this should not be taken as the final word on the matter.

Another crucial aspect is complying with information obligations, ensuring that the target audience is taken into account. Organizations should also implement data subjects’ rights effectively and employ appropriate technical and organizational measures, such as access restrictions, logging and monitoring, multi-factor authentication, and password requirements. Lastly, organizations should comply with the notification obligation towards the Dutch DPA in the event of a data breach.

What’s next?

Historically, we have seen that (published) fines were often complaint-initiated. We expect this trend of the "beep system" to largely continue. It is therefore important for an organization to set up a good privacy complaints procedure, in order to resolve complaints itself as much as possible.

The preliminary questions raised as a result of the fining decision on the Dutch tennis association could have major implications. Currently, the Dutch DPA differs from other data protection authorities in holding that a mere profit motive cannot be considered a legitimate interest. If the Court confirms this position, it will have major implications for all organizations that rely on this basis.

Looking ahead, we also anticipate that the Dutch DPA will pay close attention to new developments in artificial intelligence (AI), algorithms, data trading and profiling in the coming years. These topics, while not as clearly reflected in the published fines, have been focal points of the DPA in recent years, and given their increasing significance in modern society and the rapid developments in these areas, we anticipate that they will remain so. For example, since January 2023 there has been a new organizational unit within the Dutch DPA, the Algorithms Coordination Directorate, which specifically oversees the use of algorithms.

Although the draft budget of the Ministry of Justice and Security includes a budget increase for the Dutch DPA, for instance for the establishment and work of an algorithm supervisor, the Dutch DPA mentions that its budget is insufficient to properly handle all supervisory tasks ([AP22c]). It must work with only a quarter of the budget of other Dutch supervisory authorities (such as the AFM or ACM, which have a budget of €100 million). We expect continued yet steady growth towards a sufficient budget over the next decade.


  1. Note that these numbers reflect only the fines disclosed and do not reflect the full number. In addition, these numbers reflect only actual fines and do not include cases where correct follow-up was given after a warning or order under fine. See also [DPA].
  2. Based on the different fine categories, a selection has been made from the published fines.


[AP] Autoriteit Persoonsgegevens (n.d.). Boetes en andere sancties. Retrieved from:

[AP18] Autoriteit Persoonsgegevens (2018, February 15). Last onder dwangsom en definitieve bevindingen. Retrieved from:

[AP19a] Autoriteit Persoonsgegevens (2019, February 19). Boetebeleidsregels Autoriteit Persoonsgegevens 2019. Retrieved from:

[AP19b] Autoriteit Persoonsgegevens (2019, June 18). Besluit tot het opleggen van een bestuurlijke boete en een last onder dwangsom. Retrieved from:

[AP19c] Autoriteit Persoonsgegevens (2019, July 30). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP19d] Autoriteit Persoonsgegevens (2019, December 4). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP19e] Autoriteit Persoonsgegevens (2019, December 20). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP20a] Autoriteit Persoonsgegevens (2020, March 24). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP20b] Autoriteit Persoonsgegevens (2020, June 16). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP20c] Autoriteit Persoonsgegevens (2020, November 26). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP20d] Autoriteit Persoonsgegevens (2020, December 10). Besluit tot het opleggen van een bestuurlijke boete en een last onder dwangsom. Retrieved from:

[AP20e] Autoriteit Persoonsgegevens (2020, December 10). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP20f] Autoriteit Persoonsgegevens (2020, December 15). Formele waarschuwing AP aan supermarkt om gezichtsherkenning. Retrieved from:

[AP21a] Autoriteit Persoonsgegevens (2021, March 11). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP21b] Autoriteit Persoonsgegevens (2021, April 9). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from:

[AP21c] Autoriteit Persoonsgegevens (2021, May 31). Besluit tot het opleggen van een boete. Retrieved from:

[AP21d] Autoriteit Persoonsgegevens (2021, September 23). Besluit tot het opleggen van een boete. Retrieved from:

[AP22a] Autoriteit Persoonsgegevens (2022, January 14). Besluit tot het opleggen van een boete. Retrieved from:

[AP22b] Autoriteit Persoonsgegevens (2022, February 24). Besluit tot het opleggen van een boete en een last onder dwangsom. Retrieved from:

[AP22c] Autoriteit Persoonsgegevens (2022, October 24). Informatievoorziening voor de beantwoording van feitelijke vragen door de minister voor Rechtsbescherming inzake de vaststelling van de begrotingsstaten van het Ministerie van Justitie en Veiligheid voor het jaar 2023 [Official message]. Retrieved from:

[CMS23] CMS.Law (2023). GDPR Enforcement Tracker – list of GDPR fines. Retrieved on February 17, 2023, from:

Data ethics and privacy: should, can, will?

When using technology such as artificial intelligence (AI), ethical considerations play a major role in our society. There is a reason for this: as we increasingly face public scandals related to the misuse of personal data, the call for responsible policies concerning ethics and privacy is growing. The trust that customers, employees and citizens have in both public and private organizations is at stake. The critical question for organizations is: how do we get the most out of what data and technology have to offer while simultaneously addressing ethical and privacy concerns?

This article takes a closer look at data ethics and the intersections with privacy, discusses the legal developments and provides practical tips on how to get started setting up and strengthening data ethics in organizations.


May 25, 2023, marks the fifth anniversary of the European privacy law, the General Data Protection Regulation (GDPR). For many organizations, privacy protection is now an integral part of their business operations. However, there is still more that can be done.

Even with the introduction of the GDPR, Dutch people’s confidence that companies and government organizations handle their privacy well remains low. A 2021 privacy survey of a sample of the Dutch population ([KPMG21]) showed that a quarter of Dutch people harbor considerable distrust of the government, and their trust in technology companies is even lower. This manifests itself in growing concerns about their own privacy.


Figure 1. The trust Dutch citizens have in their government is not very high, but the trust they have in large technology companies appears to be even lower ([KPMG21]). [Click on the image for a larger image]

Whereas trust in government agencies and companies is declining, interest in privacy is increasing. An overwhelming majority of the Dutch (86 percent) think it is good that there is a lot of focus on privacy ([KPMG21]). This is substantially more than at the beginning of 2018, when the KPMG survey “A little privacy please” showed that 69 percent considered privacy an important issue ([KPMG18]; see also [ICTM18]). In addition, this interest is confirmed by the fact that the Netherlands is one of the leaders within the European Union in terms of reporting data breaches ([DLAP21]). One explanation for this increasing attention to privacy is the continuing developments in the digital transformation that society is undergoing. As a result, this question is now at the forefront of privacy and data ethics debates: how responsible or ethical are all the technical developments that succeed one another in a relatively short period of time?

The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), concluded in its annual report for the year 2021 that society has reached the point where digitization can no longer take place without ethical value considerations ([AP22b]). In their private lives, as consumers and citizens, people are constantly confronted with new technologies, although they may not always realize it. Technologies such as algorithms have taken a structural place in our daily lives. Whether it is making a purchase in an online store and paying afterwards, taking out a loan from a bank, or applying for a grant from the government, there is a good chance that the application will be assessed using technology.

New technologies bring tremendous opportunities for developing new products and services, ensuring a better customer experience, and improving efficiency in the workplace. However, to ultimately continue the successful integration of new technologies, organizations must use them responsibly. Data ethics and privacy play an essential role in this regard. The GDPR provides guidance on the ethical use of personal data, but data ethics is broader than just the collection, use, retention and deletion of personal data.

What is data ethics?

Ethics is about right and wrong. About what society, citizens or consumers think is fair, just and acceptable ([Meel]). Viewed from a privacy perspective, data ethics is not so much about whether an organization may process personal data and whether the processing meets the requirements of the GDPR; it is about a more fundamental question. Even if organizations could or want to do something (e.g., from a legal or a technological perspective), they must continually ask themselves whether they should from an ethical perspective. In other words: it is allowed, but is it the right thing to do?

Data ethics requires a different way of thinking within an organization that focuses on the impact a data operation has on people and society. Data ethics revolves around this question: from an ethical perspective, is what we want to do or have the capabilities for, the right thing to do? A privacy professional may see common ground with conducting a Data Protection Impact Assessment (DPIA), which identifies the privacy risks of a personal data processing activity to data subjects. However, data ethics is much broader than privacy. Data ethics is about non-discrimination, avoiding bias and acting transparently and responsibly toward people affected by the use of technology. The following example illustrates this ([Ober19]).

A hospital in the United States used a commercial algorithm to determine which group of patients might need more care than average. Algorithms are widely used in the United States by hospitals, government agencies, and other healthcare providers. It is estimated that about 200 million people are assessed annually using such algorithms.

In this case study, an algorithm ranked patients based on the extent to which they used medical care. Patients in the 97th percentile and above were marked as "high risk" by the algorithm and were automatically enrolled in a health program. Wanting to improve the health of high-risk patients is a noble goal, but in retrospect it was found that a bias based on race was present in the algorithm. According to the algorithm, patients of color were healthier than white patients, but this turned out to be the wrong conclusion.

The reason for this bias could be traced to the input data. People of color are less likely to use healthcare services than white people and spend an average of $1,800 less per year on health care. The algorithm inferred that people of color must be healthier, since they use fewer healthcare services. However, this assumption was incorrect. The dataset on which the algorithm was based consisted of 44,000 white patients and only 6,000 patients of color. Because of this limited input data, the algorithm made incorrect assumptions that had a negative impact on healthcare access for a certain group of people.
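The mechanism behind this bias can be illustrated with a small simulation. The sketch below uses entirely synthetic data; apart from the $1,800 spending gap and the top-percentile cutoff described above, all numbers and variable names are hypothetical. Two groups are given the same distribution of true medical need, but one group spends less on care, and a ranking based on spending then systematically under-flags that group.

```python
import random

random.seed(0)

# Hypothetical illustration: two groups with the SAME distribution of true
# medical need, but group B spends less on care on average (mirroring the
# $1,800/year spending gap described above).
def simulate(group, n):
    patients = []
    for _ in range(n):
        need = random.gauss(50, 10)  # true health need, identical for both groups
        spend = need * 100 - (1800 if group == "B" else 0) + random.gauss(0, 500)
        patients.append({"group": group, "need": need, "spend": spend})
    return patients

# Group sizes loosely echo the 44,000 / 6,000 split in the case study.
patients = simulate("A", 44_000) + simulate("B", 6_000)

# The flawed proxy: rank by healthcare SPENDING and flag the top 3%
# (i.e., the 97th percentile and above) as "high risk".
patients.sort(key=lambda p: p["spend"], reverse=True)
flagged = patients[: len(patients) * 3 // 100]

share_b_overall = 6_000 / 50_000
share_b_flagged = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B overall: {share_b_overall:.1%}, among flagged: {share_b_flagged:.1%}")
# Group B ends up underrepresented among "high risk" patients
# despite having the same true need as group A.
```

Running this shows group B's share among the flagged patients falling far below its 12% share of the population: spending is a proxy for need plus access, not for need alone, which is exactly the error the case study describes.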

Legal developments in data ethics: the AI Act

Public debate on data ethics regularly concerns the reliability and increasing use of data and algorithms in private and public organizations, and how to use algorithms in a controlled, ethical way. The European Commission has taken a worldwide lead in regulating the use of artificial intelligence. This has resulted in a bill called the Artificial Intelligence Act (AI Act).

The AI Act aims to establish a framework for responsible AI use. According to the Act, AI systems must be legally, ethically and technically robust and must respect democratic values, human rights and the rule of law. That the AI Act regulates the use of AI from a technical and legal perspective is not surprising; what is unique to this Act is its strong emphasis on data ethics. The aim is to reach a final agreement on the AI Act this year (2023), but there is no concrete deadline. Once a final agreement is reached, there will be a grace period of around two years to allow affected parties to comply with the regulations.

The AI Act introduces new obligations for companies and governments, as well as a supervisory authority and a penalty system. These are detailed in the sections below. It is important to emphasize that no final agreement has been reached on the exact content of the AI Act. In other words, legal developments (and proposed amendments to the AI Act1) are rapidly following one another. For example, adjustments to the AI Act are currently being considered to deal with developments around ChatGPT and similar generative AI models. The legal and technical developments that may affect the AI Act are therefore worth keeping an eye on.

Conformity assessment for high-risk AI systems

The AI Act introduces a so-called conformity assessment, conducted by an outside body. In other words, if an AI system could pose a high risk to the health, safety or fundamental rights of people, its provider must have an assessment conducted by an independent third party to identify and mitigate those risks. These assessments help ensure compliance with the AI Act. For AI systems with limited or minimal risk, less onerous requirements apply. In that case, a self-assessment or transparency requirement is sufficient.

The legislative proposal for the AI Act currently states that the European Commission is the body that determines what constitutes a high-risk system and when a mandatory conformity assessment must be conducted. AI systems that will potentially qualify as high risk include systems for migration and asylum, critical infrastructure, law enforcement, and product safety. In addition, it is currently being examined whether generative AI models such as ChatGPT should also be regarded as high risk.

Based on the proposed AI Act, it is also possible that an AI system is classified as a high-risk system but a conformity assessment is not required. In such cases, a self-assessment is sufficient. Currently, the AI Act states that the European Commission will determine for which (high-risk) AI systems a self-assessment suffices.

High-risk AI systems must meet strict requirements under the AI Act before they can be marketed. Measures to be implemented under the proposed AI Act include: establishing a risk management process that specifically oversees the AI application, setting high data quality requirements to prevent discrimination, maintaining logs, establishing documentation around accountability, ensuring transparency, establishing a system in which people oversee the AI applications, and ensuring security and accuracy standards.

AI database for high-risk systems

Another new aspect of the AI Act relates to the creation of an AI database in which high-risk AI systems are to be registered. The AI Act currently states that the database will be managed by the European Commission and aims to increase transparency and facilitate oversight by regulators.

Introduction of national AI supervisor

The proposed AI Act currently contains an obligation for each member state to form or designate an authority to supervise compliance with the AI Act. This national supervisory authority will participate in the European AI Board (EAIB), which will be chaired by the European Commission and will also include the European Data Protection Supervisor (EDPS). Recently, the Dutch Data Protection Authority, the AP, was appointed as algorithm supervisor in the Netherlands. With this appointment, the Netherlands is already fulfilling its future obligation under the AI Act.

Fines for failure to comply with AI Act

Like the GDPR, the AI Act will include a penalty system. The highest fine that can be imposed under the Act is up to 30 million euros or 6 percent of annual global turnover, whichever is higher. The turnover percentage is two percentage points higher than in the highest fine category under the GDPR. Aside from the moral obligation for companies to take data ethics and privacy seriously, there will thus be financial incentives to set up AI systems in accordance with the upcoming AI Act.
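The "whichever is higher" rule means the effective cap scales with company size. A minimal sketch of that computation, using the figures from the proposal (the turnover value in the example is hypothetical):

```python
def max_ai_act_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine per the proposed Act:
    EUR 30 million or 6% of annual global turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_global_turnover_eur)

# For a company with EUR 1 billion in turnover, the 6% rule exceeds the flat cap:
print(max_ai_act_fine(1_000_000_000))  # 60000000.0
```

For any company with turnover above €500 million, the percentage-based cap is the binding one; below that, the flat €30 million applies.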

How to put data ethics into practice

It is clear that the AI Act will make its appearance in the future. Nevertheless, it is important to realize that legislation is only the basis and that acting ethically requires more than complying with new legislation. What can organizations do today to promote the ethical handling of data and raise awareness in their organization?

Contact the Privacy Officer

The concerns that exist about AI systems are in many cases about the use of personal data. Despite the fact that privacy and data ethics are two different topics, they often overlap. This means that if an organization has appointed a Privacy Officer, in all likelihood they are already working on the topic of data ethics in the light of the use of personal data.

The GDPR contains an obligation to conduct a DPIA for personal data processing activities that may result in a high privacy risk. In many cases, this obligation will also apply to AI systems that process personal data. Even though the AI Act focuses on the quality of AI systems while the GDPR focuses on the use of personal data, the two laws converge when personal data is used in AI systems. Therefore, Privacy Officers can be a good starting point to prepare the organization for the upcoming AI Act. They can help identify which systems in the organization use AI and whether these systems may pose a high risk.

Establish an ethical framework

The first step to securing data ethics in an organization is to establish what ethical handling of data specifically means for the organization. This can be done by formulating values and principles around the topic of data ethics, for example an ethical framework or compass. It is important that the ethical principles align well with the culture and core values of the organization and are recognizable to employees from all levels of the organization ([Meel]).

Organize independent oversight

Data ethics is an abstract topic, but it needs a very concrete interpretation. Most organizations are not (yet) equipped to deal with the ethical dilemmas that arise when new technologies, such as algorithms, are deployed. Furthermore, there is often no monitoring of the integration of ethical principles into business operations. A powerful tool in both establishing ethical principles and closing the gap between principles and practice, is the establishment of effective and independent oversight. This can be done by an independent committee, internal audit teams, or an independent third party ([Meel]).

Conduct a Fundamental Rights and Algorithm Assessment

When an organization works with algorithms, it is wise not to wait for the introduction of the AI Act and to already start identifying risks when using algorithms. This can be done by conducting a Fundamental Rights and Algorithm Impact Assessment (FRAIA). FRAIA is the English translation of the Dutch “Impact Assessment Mensenrechten en Algoritmes” (IAMA). The FRAIA was developed by the Utrecht Data School and helps to make careful decisions about the use of algorithms. The FRAIA is mandatory for government agencies and can also help other organizations gain a better understanding of the considerations and risks involved in the decision-making process concerning the deployment of algorithms. It is also a way to “practice” the impending assessments that the AI Act will most likely introduce.

According to FRAIA, the decision-making process regarding algorithms can be divided into three main stages:

  • Stage 1: preparation. This stage is about deciding why an algorithm will be used and what its effects will be.
  • Stage 2: input and throughput. This stage is about the development of an algorithmic system. In this stage, it is decided what the algorithm must look like, and which data is being used to feed the algorithm. Within this stage, the FRAIA further distinguishes between:
    • Stage 2a: data, or input. This involves asking questions that pivot on the use of specific data and data sources.
    • Stage 2b: algorithm, or throughput. This involves questions regarding the algorithm, and its operation and transparency.
  • Stage 3: output, implementation and supervision. This stage is about how the algorithm is used: which output it generates, how that output may play a role in policy or decision-making, and how this can be supervised.

Source: [MBZK21]


There is currently no clear body of standards, laws, or case law in the field of data ethics. While the AI Act aims to fill this gap, ethical handling of data requires more than following the letter of the law. Take the example of the GDPR, Europe's data privacy law. The GDPR gives us more control over our personal data, but the ethical principle of privacy is a much broader and more abstract issue than simply protecting data. Therefore, an organization that sees its customers' privacy as its responsibility will have to think beyond merely avoiding a GDPR fine and, soon, an AI Act fine ([Meel]).


  1. The final content of the AI Act is currently still being negotiated in Europe. This means that this article provides an insight into the developments concerning the AI Act but cannot provide certainty on the final content of the AI Act.


[AP22a] Autoriteit Persoonsgegevens (2022, March 15). AP Inzet Artificial Intelligence Act. Retrieved from:

[AP22b] Autoriteit Persoonsgegevens (2021). Jaarverslag 2021. Retrieved from:

[DLAP21] DLA Piper (2021, January 19). Nederland tweede van Europa in aantal gemelde datalekken sinds invoering AVG. Retrieved from:

[EC21] European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Retrieved from:

[ICTM18] ICT-Magazine (2018, October 16). Onderzoek KPMG: Nederlander nauwelijks bekend met nieuwe privacyrechten. Retrieved from:

[Meel] Van Meel, M. & Remmits, Y. (n.d.). Risico’s van algoritmes en toenemende vraag naar ethiek: Deel 4 – De burger en klant centraal bij het gebruik van algoritmes [KPMG Blog]. Retrieved from:

[KPMG18] KPMG (2018). Een beetje privacy graag. [Report can be requested at KPMG.]

[KPMG21] KPMG (2021, October). Meer zorgen over privacy: Het resultaat van ons privacy onderzoek onder consumenten. Retrieved from:

[MBZK21] Ministerie van Binnenlandse Zaken en Koninkrijksrelaties (2021, July). Impact Assessment Mensenrechten en Algoritmes. Retrieved from:

[Ober19] Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019, October 25). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. Retrieved from:

The sustainability reporting journey of Swiss luxury watchmakers

The theme of sustainability has gained momentum in recent years as companies are pushed by regulatory pressure to step up their sustainability initiatives and transparency on all three pillars of ESG (environmental, social, and governance). In 2018, the WWF analyzed the maturity of the 15 biggest Swiss watch companies in relation to sustainable processes, governance, and initiatives. The results showed that most of the reviewed companies communicated very little about their ambition to become more sustainable or did not even communicate at all. In October 2022, KPMG watch and luxury experts reviewed the progress of sustainability initiatives, social media sentiment and news coverage related to the sustainability activities of the 15 brands from the 2018 WWF report. The review showed that most of the 15 brands have online content dedicated to sustainability on their brand websites, but not all the brands are sufficiently transparent on the environmental impact of their supply chain. Further, the social media sentiment analysis suggests that consumers’ desire to purchase a luxury watch might not be triggered by the sustainability efforts of the brand. However, the consumer might forgo the purchase of the watch when the brand is perceived as not active in sustainability efforts.


In its 2018 report, A precious transition – demanding more transparency and responsibility in the watch and jewellery sector, WWF Switzerland ([Grun18]) analyzes the ecological and societal impacts of the 15 leading Swiss watch manufacturers and presents examples of how they could improve their environmental footprint. The results were unequivocal: the majority of corporations in the watch and jewelry industry were either non-transparent or did not appear to have any deep aspirations to improve their sustainability. To help this industry catch up, WWF Switzerland made the following recommendations for manufacturers and underlined the power that consumers have to initiate change in this industry:

  • Improve the value chain’s transparency
  • Source responsible raw materials
  • Embrace sustainability in the company’s practices
  • Report on pertinent sustainability issues in public
  • Work together with other players in the industry
  • Innovate for circularity

Quantifying the actual benefits for companies that are committed to sustainability is in many cases not straightforward. Although the impact on brand value and consumer opinions can be immediate, the financial benefit might only come in the long term. Our watch and luxury industry experts assessed the progress of sustainability initiatives, social media sentiment and news coverage related to the ESG (Environmental, Social and Governance) activities of these brands. Further, the article highlights areas of focus that can enable the luxury watch brands to drive their sustainability agendas. We analyzed primary and secondary data for the purpose of this article.

Was there any progress since 2018?

In October 2022, the (international) websites of each of the 15 watch brands that were part of the original scope of the 2018 WWF Report (Figure 1) were visited and analyzed. The 15 brands in scope of the WWF report include:

  • Richemont Group: Cartier, IWC, Piaget, Vacheron Constantin and Jaeger LeCoultre
  • Swatch Group: Omega, Longines, Tissot, Swatch and Breguet
  • LVMH Group: Tag Heuer
  • Independent brands: Rolex, Audemars Piguet, Patek Philippe and Chopard

The purpose of the online review was to learn of the progress made by these brands in the transparency of their supply chain, their use of responsible materials and the integration of sustainability into their business practices. This approach was designed to reflect the behavior of an environmentally conscious consumer whose decision to buy a watch from one of the 15 brands is influenced by whether the brand meets good environmental standards. What we learned from the brands’ websites shows mixed signals of improvement since 2018. Where some brands openly communicate the governance, policies and processes they have in place to reduce their climate impact and to increase transparency on the sourcing of materials, others continue to lack transparency and appear to have little ambition to improve their sustainability. Only 8 of the 15 brands we looked at have established a dedicated sustainability page on their websites.


Figure 1. WWF assessment of 15 leading Swiss luxury watch makers, 2018. [Click on the image for a larger image]


The websites of Richemont, Swatch and LVMH have a dedicated page for sustainability and the companies also publish a sustainability report in accordance with the Global Reporting Initiative (GRI) standards. Most of the information in the groups’ reports applies to the brands the groups own and is thorough and very informative. However, although the websites of the individual Richemont brands have dedicated pages for sustainability with comprehensive information, most of the relevant data is consolidated at group level, with few sustainability use cases illustrated for the individual brands. IWC is the only brand we observed that has published its own sustainability report, separate from, although in alignment with, the group (Richemont). Omega reports some information on the Responsible Jewelry Council and the Kimberley Process, whereas Audemars Piguet reports on its commitments to sustainability. Chopard also has a dedicated webpage concerning sustainability.

Business practices

For all but a few of the 15 brands, sustainability governance, policies, clear roles and responsibilities, and a reporting line to the Board are presented on the brands’ websites or can be inferred from the groups’ sustainability reports. Governance typically includes the establishment of a sustainability steering committee or equivalent, the figure of a Chief Sustainability Officer or equivalent, as well as sustainability teams and officers at different levels of the organization. Governance and risk assessment frameworks are well illustrated on the sustainability websites of Chopard, IWC and Audemars Piguet.

Further, two-thirds of the 15 brands are members of the Responsible Jewelry Council (RJC) and are certified against the Council’s Code of Practices (CoP). Achieving certification against the Code of Practices demonstrates a brand’s commitment to responsible sourcing and promotes transparency and traceability in a brand’s supply chain. These brands have in fact established supply chain policies, sourcing policies in compliance with the OECD Due Diligence Guidance for Responsible Business Conduct for precious metals, as well as supplier codes of conduct to which their suppliers are required to adhere. Richemont, Swatch and LVMH have also listed the Sustainable Development Goals (SDGs)1 they have committed to and the associated timeline for implementation. They have also mentioned their performance of Life Cycle Assessments (see Box 1) in their sustainability reports.

Following the global RJC CoP certification, the next major milestone in the sustainability reporting journey of a watch or jewelry brand is the RJC Chain of Custody (CoC) certification. This certification provides assurance to consumers on how a brand’s products and materials have been sourced, traced and processed through the supply chain. Adherence to CoC standards is ensured through ongoing independent audits. Across the 15 brands in scope, we found mentions of a CoC certification at IWC, Vacheron Constantin, Omega and Audemars Piguet.

Climate neutrality

Greenhouse gas (GHG) emission measurements (Scope 1, 2 and 3) and targets for carbon footprint reduction are comprehensively reported in the sustainability reports of Richemont, the Swatch Group and LVMH. The measurements reflect varying degrees of maturity, from established emission targets for Scope 1 and 2 to targets still being defined and implemented for Scope 3. Most of this information is reported on a consolidated basis and breakdowns for the individual brands are not available, except for some examples of initiatives and efforts at brand level. Richemont, the Swatch Group and LVMH are also committed to the Science Based Targets initiative (SBTi), which provides target-setting methods and guidance to help companies set science-based targets in line with the latest climate science. Brands like Cartier, Piaget, Vacheron Constantin and IWC claim to reach carbon neutrality through offsetting by funding environmental projects, while other brands like Jaeger LeCoultre and Chopard have reported a 40% reduction in their carbon footprint.

The environment

Most of the brands are fairly vocal about their contribution to preserving the environment and promoting biodiversity. Efforts in this area focus on the brands’ headquarters, production sites and boutiques, and vary in degree: the use of 100% renewable energy, solar panels and circular water systems, the disposal of chemical wastewater and scrap metals by independent third parties, the elimination of single-use plastic bottles, the availability of sustainable transport for employees, and boutiques with LED lighting and LEED status (Leadership in Energy and Environmental Design, a green building certification program used worldwide).


Figure 2. The Audemars Piguet Manufacture des Forges building, which opened in Le Brassus in 2008, was the first Minergie-ECO® certified industrial site in Switzerland (source: Audemars Piguet). [Click on the image for a larger image]

The majority of the brands have also changed, or are in the process of changing, their packaging to more sustainable materials. Examples include packaging made of paper foam that is compostable and recyclable, packaging that is compliant with the Forest Stewardship Council (FSC)2 and the Programme for the Endorsement of Forest Certification (PEFC)3, and other sustainable packaging solutions that follow the OEKO-TEX Standard 1004.

Sourcing materials

The Richemont brands, the Swatch Group, Tag Heuer and Chopard confirmed in their sustainability reports that the diamonds they purchase are compliant with the Kimberley Process Certification5, thereby communicating their commitment to the removal of conflict diamonds from their supply chains. For the Richemont brands and Chopard, we also found evidence of their adherence to the System of Warranties (SoW) from the World Diamond Council, which ensures that all diamonds traded are Kimberley Process compliant and have also been handled in accordance with universal principles of human rights, labor rights, anti-corruption and anti-money laundering. The SoW is applied each time the ownership of any natural diamond changes hands within the industry, both when exported or imported and when being sold in the same country.

As for the sourcing of gold, 99.6% of the gold purchased by the Richemont brands is CoC certified. Audemars Piguet reports that 100% of its gold purchases are certified by an independent party, and Chopard reports the use of 100% ethically produced gold (i.e. from RJC CoC certified suppliers and from artisanal mined gold produced in a responsible way).

With regard to the sourcing of materials for their watch straps, both Richemont and the Swatch Group communicated their adherence and that of their brands to the International Crocodilian Farmers’ Association (ICFA) as well as to the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), an international agreement between governments, aiming to ensure that international trade in specimens of wild animals and plants does not threaten the survival of the species. Several brands are experimenting or are already offering straps made of vegan alternatives, recycled, recyclable, compostable, as well as bio-based materials.

Box 1. Life Cycle Assessment (LCA) ([Swat22])

Choosing a sustainable design strategy is an essential part of product development. Based on the results of a life cycle assessment, a comparison can be made between the environmental impacts of different materials, products or processes that perform the same function, and those with the lowest environmental impact throughout their life cycle are selected. The LCA is also used to identify opportunities for improving the environmental performance of the company’s products, including packaging, at different stages of their life cycle. This means that informed decisions can be made in new developments with regard to the procurement of raw materials, the selection of processes, end-of-life treatment, etc.
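The comparison-and-selection step of an LCA can be sketched in a few lines of code. The impact categories and figures below are purely hypothetical; they only illustrate how per-stage impacts are aggregated and the lowest-impact option is selected.

```python
# Hypothetical per-stage environmental impacts (e.g. kg CO2-equivalent)
# for packaging materials that perform the same function.
life_cycle_impacts = {
    "conventional plastic": {"raw materials": 2.0, "production": 1.5, "end of life": 1.0},
    "paper foam":           {"raw materials": 0.8, "production": 1.2, "end of life": 0.2},
    "bio-based plastic":    {"raw materials": 1.0, "production": 1.4, "end of life": 0.4},
}

def total_impact(stages: dict) -> float:
    """Aggregate the impact over all life cycle stages."""
    return sum(stages.values())

# Select the material with the lowest impact throughout its life cycle.
best = min(life_cycle_impacts, key=lambda m: total_impact(life_cycle_impacts[m]))
print(best)  # paper foam
```

In a real LCA, each stage would itself be scored across multiple impact categories (carbon, water, toxicity) rather than a single figure, but the comparison logic remains the same.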

How sustainability is driving innovation 

Research ([Llor21]) shows that innovation and sustainability are often highly connected topics. Innovation can be a key enabler for companies to achieve their sustainability goals, and this equally applies to the watch industry, where watch companies work to integrate sustainability into their business practice ([Yum21]).

In 2022, Swatch launched a new watch collection called the MoonSwatch, which combined the iconic Omega Speedmaster Moonwatch design elements with “Bioceramic”, an innovative material. Bioceramic is made up of two-thirds ceramic and one-third castor oil extract, a vegetable oil pressed from castor seeds. The MoonSwatch is one example of the several innovative approaches that Swiss brands have come up with in recent years to meet the demand from younger generations of consumers for more sustainable products, a trend that has led to some fascinating innovations (see Box 2).


Figure 3. The MoonSwatch in the “Earth” variant (source: Swatch). [Click on the image for a larger image]

For example, nowadays, watch cases can be made with titanium produced from scrapped aircraft parts, recycled precious metals, or recycled and upcycled plastic bottles collected from the ocean. There are also brands that offer watch movements assembled using restored Swiss movements. And dials are being offered with transparent ceramic glass instead of sapphire crystals, which requires a smaller carbon footprint to produce.

Several brands propose watch straps made from recycled or upcycled fishing nets, plastic bottles, or mixing ceramic and bio-sourced plastic. Other companies have specialized in creating alternative leathers obtained by recycling leather scraps from the leather manufacturing production process, or vegan leather obtained from the bark of mulberry, cork and apple. In some cases, straps are entirely made of green waste and are fully compostable at the end of their lifecycle. 

In 2021, IWC launched its TimberTex straps, produced in Italy and made primarily of paper from responsible sources. Luxurious in look and feel, the TimberTex straps have a soft and supple texture. Unlike synthetic leathers, which are often petroleum-based plastic, the material used in TimberTex straps is composed of 80% natural plant fibers. The cellulose used comes from Forest Stewardship Council (FSC)-certified trees grown in responsibly managed European forests. The material is then manufactured in Italy using traditional papermaking techniques and colored with natural plant-based dyes. In 2022, the maison launched the MiraTex straps, made of plants and minerals, including FSC-certified natural rubber and fillers like cork powder and mineral colorants.

Box 2. Production of sustainable raw materials by Panatere ([EPHJ21])

A Swiss company based in Saignelégier, Panatere is a pioneer in the production of sustainable raw materials. The company specializes in the production of 100% recycled and recyclable stainless steel, locally sourced from scrap steel from companies in the watchmaking or medical industry operating in the Swiss Jura region. The carbon footprint of Panatere’s recycling process is 10 times smaller than that of the standard process for the production of stainless steel. The company is also working on setting up a local solar oven in the Jura region, right in the middle of the Swiss Watch Valley, with the objective to increase its offer of recycled and recyclable stainless steel from 50 to 200 tons per year and to produce solar materials leveraging a network of partners within a range of 50 km around the company. Since 2021, Panatere has been finalizing its process for the production of solar raw materials using a solar oven in the French Pyrenees. The use of a solar oven would enable a further reduction of the carbon footprint of stainless steel production to 165 times smaller than standard production (i.e. an almost neutral carbon footprint).


Figure 4. The Solar Oven in the French Pyrenees (source: Panatere SA). [Click on the image for a larger image]

ESG Social Media analysis

To provide deeper insights into the impact of the sustainable initiatives in luxury watchmaking, we have performed an ESG sentiment analysis and news coverage analysis for each of the 15 brands that were in scope of the WWF report in 2018. Although the WWF report focuses largely on the Environment dimension, we have taken all three ESG dimensions into account to obtain a holistic view for each of the brands.

First, our research identified the “ESG Share of Voice”, that is, the number of times a brand is mentioned on social media in general versus how many of those mentions are related to ESG. Next, we classified both categories of social media mentions as positive, neutral or negative. This approach allowed us to assess both the relevance of ESG for each brand and whether ESG creates or supports a positive perception. Further, we conducted an analysis of spikes in ESG mentions to estimate the difference between the overall ESG consumer sentiment and the reaction of social media to specific world events or particular business actions.
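As an illustration, the two metrics described above can be sketched in a few lines of Python. The data schema and the sample figures are hypothetical, not the output of the actual analysis, which was performed with a commercial social listening tool.

```python
from dataclasses import dataclass

@dataclass
class Mention:
    """A single social media mention of a brand (hypothetical schema)."""
    is_esg: bool      # does the mention relate to ESG topics?
    sentiment: str    # "positive", "neutral" or "negative"

def esg_share_of_voice(mentions):
    """Fraction of all brand mentions that relate to ESG."""
    esg = [m for m in mentions if m.is_esg]
    return len(esg) / len(mentions)

def positive_share(mentions):
    """Fraction of the given mentions classified as positive."""
    positive = [m for m in mentions if m.sentiment == "positive"]
    return len(positive) / len(mentions)

# Illustrative figures only: 100 mentions, 5 of which are ESG-related.
sample = ([Mention(False, "positive")] * 30 + [Mention(False, "neutral")] * 65
          + [Mention(True, "positive")] * 2 + [Mention(True, "negative")] * 3)

print(f"ESG Share of Voice:   {esg_share_of_voice(sample):.0%}")  # 5%
print(f"Positive share (all): {positive_share(sample):.0%}")      # 32%
esg_only = [m for m in sample if m.is_esg]
print(f"Positive share (ESG): {positive_share(esg_only):.0%}")    # 40%
```

Comparing the last two figures yields the "delta" discussed below: a brand whose ESG mentions are more positive than its mentions overall shows a positive delta.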

Overall insights

Regardless of the different brands and their ESG sentiments, we can summarize our research in three key observations:

  • The ESG Share of Voice is relatively low, ranging between 1% and 7%. This may suggest that customers show interest in sustainability, but that it is not a prevailing topic
  • For eleven of the 15 brands, consumers have a more positive sentiment toward the brand as a whole than toward the brand in relation to ESG
  • Social media spikes are strongly connected to specific ESG related events or company activities. The spikes are outliers in the dataset of social media mentions and impact the average ESG sentiment values


Figure 5. The results of the social media data analysis (KPMG analysis, 2022). [Click on the image for a larger image]

Category 1: High performers

Chopard, IWC and Omega have the highest share of positive ESG mentions. These brands also demonstrate a positive delta when comparing the ESG sentiments to the overall brand perception. For example, of the total social media mentions for Chopard, 31% is positive. When we filter the total number of social media mentions to only include those related to ESG, this number increases to 42%.

For those brands with a positive ESG performance, the impact of that performance is not always equally high. Continuing with Chopard as an example, its ESG Share of Voice is only 3%, which means the largest part of its social media coverage is not related to ESG. Comparing this to IWC, for which 40% of all ESG mentions are positive, we can assume that sustainability influences brand perception more strongly, as IWC’s ESG Share of Voice sits at 7%.

Although there are many parameters that contribute to a successful ESG brand perception, the ESG sentiment high performers Chopard and IWC are also high performers in communication through dedicated ESG reports, as mentioned earlier in this article. As such, it is likely that more transparency and communication have a direct and positive effect on ESG sentiment.

Category 2: Underperformers

The twelve remaining watch brands on the list perform lower on ESG sentiment. As mentioned earlier, eleven of these underperformers have a less positive consumer sentiment regarding ESG compared to the overall brand sentiment.

Within this group, brands such as Piaget and Vacheron Constantin score a relatively high ESG Share of Voice, which means that the impact of their relatively low positive ESG consumer sentiment is greater than for the other brands.

Swatch appears to have a relatively high amount of negative ESG mentions. For Swatch, there has been a significant number of negative mentions related to sustainability and the buying process of MoonSwatch.

Impact of spikes in ESG mentions

When performing an analysis of ESG mentions across time, we see that the average number of ESG mentions per month is low. However, we also observe clear spikes in social media coverage when specific ESG related events or business updates occur. This suggests that the overall ESG sentiment of consumers is largely determined by these events, rather than by a continuous sustainability effort. From the sample in this analysis, social media spikes occur every three to four months and the duration of their impact typically lasts between three and five days.
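A simple way to flag such spikes, sketched below, is to mark the days whose mention count exceeds the series mean by a multiple of its standard deviation. The threshold factor and the sample series are assumptions for illustration, not the detection method actually used in our analysis.

```python
import statistics

def find_spikes(daily_mentions, factor=2.0):
    """Return the indices (days) whose mention count exceeds
    mean + factor * standard deviation of the whole series."""
    mean = statistics.mean(daily_mentions)
    stdev = statistics.pstdev(daily_mentions)
    threshold = mean + factor * stdev
    return [i for i, n in enumerate(daily_mentions) if n > threshold]

# A quiet baseline of 0-25 mentions per day with one event-driven spike.
series = [5, 12, 8, 20, 15, 10, 310, 25, 9, 14]
print(find_spikes(series))  # [6]
```

Because a single large spike inflates both the mean and the standard deviation, such outliers also pull up any average sentiment computed over the period, which is why the spikes dominate the overall ESG sentiment values.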

Third parties can also influence ESG brand perception through social media. In some cases, potentially without knowledge of the watch company, watches are part of an ESG-related activity such as charitable giveaways. In these cases, the impact on ESG sentiment can be significant as these third parties often have a large social reach.


Figure 6. Example of ESG spike analysis (Meltwater and KPMG analysis, 2022). [Click on the image for a larger image]

For IWC, the number of ESG mentions averages between zero and 25 per month. However, IWC organized two charity events in May 2022, causing significant social media spikes. IWC auctioned off watches that were worn by celebrities at a previous charity event during the Big Pilot Charity Auction on May 6th. This event caused a social media spike with over 300 mentions on a single day, of which 90% were classified as positive. As such, it is likely that IWC’s high performance on ESG sentiment, as described earlier, is largely thanks to these events.


Figure 7. Example of ESG spikes caused by third parties. [Click on the image for a larger image]

The effect of third parties on ESG sentiment can be illustrated by Rolex, for example. As shown earlier, the average number of social media mentions is relatively low and stable. In this case, however, the spikes are caused by social media activity from third parties such as celebrities or other companies.

Sustainability, do customers of luxury watchmaking care?

Our analysis suggests that there is a noticeable shift in consumer behavior and that sustainability is becoming increasingly important in the decision making of the consumer. However, there is mixed evidence regarding consumer demand for timepieces made in a sustainable manner. Some groups consider sustainability an absolute must: since a luxury watch is not a necessity, its production process should place great emphasis on sustainability. Other groups remain more skeptical about the true impact of sustainably created watches, given that production quantities are mostly very low and the watches are often passed down from generation to generation, making them sustainable by nature.

The quest for quality, durability and accessible high-quality service appears to be as important to consumers as the sustainability of the product. The study suggests increased traction for the secondary watch market and for renting, co-owning and sharing of luxury watches. However, while these findings might indicate a shift of consumer preferences toward sustainability, recent shifts in demand and the scarcity of product availability might obscure the causation.

However, sustainability is becoming a part of the consumer journey. According to research, the consumer seems to consider sustainability during the information search and at the point of the purchase decision. Even though sustainability is taken into account during the purchase process, a brand’s efforts to become more sustainable still appear to be a hygiene factor. The sentiment analysis suggests that the desire to purchase might not be triggered by the sustainability efforts of the brand, although the consumer might change their mind if the brand isn’t active in pursuing sustainability goals. Furthermore, the recent pandemic years seem to have made the consumer more conscious of environmental as well as social impacts, such as child labor and gender equality.

Sustainability plays a bigger role when the purchase is emotion-driven and the consumer wants to feel good about their new product. However, the consumer is also looking for a high-quality product that retains value for a longer period of time, and if this consideration prevails over the emotional reasoning, the importance of the product’s sustainability decreases.


Figure 8. Overview of news coverage for luxury watches (KPMG analysis, 2022). [Click on the image for a larger image]

The news coverage of luxury watches is generally positive. Neutral and negative mentions were related to the geopolitical unrest in Ukraine; however, this topic accounted for less than 6% of all mentions. The main topic, driving 45 percent of the news coverage, is “new collections”. The second strongest driver of the news coverage, trailing by a significant 33 percentage points, is the topic “watchmaking exhibitions”. Sustainability-related topics account for 9% of the mentions in the news.


The theme of sustainability has gained momentum in recent years. Rising regulatory pressure urges companies to strengthen their sustainability efforts as well as the reporting on their results. The stakeholders driving the change include not only regulators but also activist investors and consumers themselves. This article reviews the efforts and communication of the 15 biggest Swiss watch companies that were first assessed by the WWF in 2018. The WWF maturity analysis flagged that the majority of brands either had very little or no active communication about their desire to become more sustainable. We reviewed the progress of sustainability initiatives for the 15 companies in 2022 and combined the findings with social media and news coverage analysis.

Most brands have websites with pages dedicated to sustainability, and some of them are very active in ESG initiatives and communication. Some brands report on ESG according to international standards and are part of international sustainability initiatives, while others are not sufficiently transparent on how their supply chain affects the environment. The social media analysis shows that the ESG Share of Voice is relatively low. Although not necessarily correlated, it is likely that active ESG communication and transparency positively influence consumers’ ESG perception. Furthermore, a brand’s environmental initiatives may not be what prompts customers to buy a premium watch. Nevertheless, if the brand is perceived to be inactive in sustainability, the consumer may hold back on the purchase.


  1. The United Nations 2030 Agenda for sustainability ([UN15]) includes 17 SDGs, which form the international and universally applicable framework for sustainable development.
  2. FSC certification ([FSC22]) ensures that products come from responsibly managed forests that provide environmental, social and economic benefits.
  3. The PEFC ([PEFC22]) is a leading global alliance of national forest certification systems. An international non-profit, non-governmental organization, it is dedicated to promoting sustainable forest management through independent third-party certification.
  4. STANDARD 100 by OEKO-TEX® ([OEKO22]) certified products have been tested for harmful substances to protect your health. This label certifies that every component of the product, from the fabric to the thread and accessories, has been rigorously tested against a list of up to 350 toxic chemicals.
  5. The Kimberley Process (KP) ([Kimb02]) is a multilateral trade regime established in 2003 with the goal of preventing the flow of conflict diamonds. The core of this regime is the Kimberley Process Certification Scheme (KPCS) under which States implement safeguards on shipments of rough diamonds and certify them as “conflict free”.


[CITE22] CITES (2022). What is CITES? Retrieved from:

[EPHJ21] Environnement Professionnel Horlogerie Joaillerie (2021, May 7). Panatere veut installer un four solaire dans la Watch Valley. Retrieved from:

[FSC22] Forest Stewardship Council (2022). FSC Standards. Retrieved from:

[Grun18] Grünenfelder, D., Starmanns, M., Manríquez Roa, T. & Sommerau, C. (2018). A precious transition: Demanding more transparency and responsibility in the watch and jewellery sector. Environmental rating and industry report. World Wildlife Fund. Retrieved from:

[Kimb02] Kimberley Process (2002). Kimberley Process Certification Scheme. Retrieved from:

[Llor21] Llorca-Ponce, A., Rius-Sorolla, G. & Ferreiro-Seoane, F.J. (2021, August 18). Is Innovation a Driver of Sustainability? An Analysis from a Spanish Region. Sustainability, 13(16), 9286. Retrieved from:

[OEKO22] OEKO-TEX® (2022). OEKO-TEX® Standard 100. Retrieved from:

[PEFC22] Programme for Endorsement of Forest Certification (2022). Standards and Guides. Retrieved from:

[Swat22] The Swatch Group AG (2022, March 17). Sustainability Report 2021. Retrieved from:

[UN15] United Nations (2015). Transforming Our World: The 2030 Agenda for Sustainable Development. Retrieved from:

[WDC22] World Diamond Council (2022). System of Warranties. Retrieved from:

[Yum21] Yum, A. (2021, April 19). How Sustainability is Driving Innovation in the Watch Industry. Luxuo. Retrieved from:

Control by Design: risk-free processes as the holy grail

Risk management is gaining an increasingly prominent role within organizations. In a rapidly changing environment, with increasing digitalization and more stringent regulations regarding service delivery, good risk management is a challenge. For more automated risk management, the term Control by Design is regularly used, not only within financial institutions but also at other organizations, as an important risk trajectory for the future. But what does this term mean? And why is it necessary to embrace and apply this way of thinking? This article explains the background and opportunities of Control by Design. It also looks at the application, the possible barriers, and how to deal with them to make the concept more concrete.

This article is also a call to other organizations to exchange views. Please send an email to if you want to share your ideas.


Business processes change continuously. Optimization takes place, (sub)processes or IT systems are outsourced, new products or services are developed, and old products or services are discontinued but still need to be managed. Laws and regulations or (internal) policies are introduced or modified, new risks appear or existing risks are weighed differently. In addition, reorganizations take place, and responsibilities and priorities shift. This translates into more complex processes and the implementation of (manual) workarounds to meet new requirements. All this often happens faster than many IT departments can manage. This is also reflected in the controls, where manual checks on the workarounds and exceptions continually drive up costs and carry the risk that those manual checks are not carried out adequately.

Complex, constantly changing and increasingly burdensome regulations mean that important risks can no longer be mitigated by manual control measures alone. Furthermore, in addition to the increasing costs of the process itself, there is increasing pressure on monitoring. Assurance on the operation of the control framework is sought through increasing first-, second- and third-line controls, driven by the Three Lines of Defense (3-LoD) model. The costs of the manual work involved in implementing the control measures, plus the costs associated with manually monitoring the operating effectiveness of those measures, lead to an ever-increasing cost of control.

Three Lines of Defense

The Three Lines of Defense model consists of three lines that together oversee the management of risk. The first line consists of managers and employees who are responsible for identifying and managing risks as part of their daily work. The second line provides support and guidance by offering guidelines, policies and control frameworks. In addition, the second line also takes care of monitoring to determine that the risks are correctly managed. Finally, the third line focuses on an independent review or audit of the control framework as a whole or parts thereof, including the activities of the first and second line. Often this role is fulfilled by an internal audit department ([CIIA21]).

Besides the complexity mentioned above and the increasing cost of control, we see increasing digitization. Financial institutions are increasingly serving customers online, adapting processes and IT systems. Customer journeys are designed and adapted and new systems are purchased and/or developed. Redesigning and re-implementing processes gives you an opportunity to also manage your risks differently or, better yet, to prevent them. By including the (process) risks as early as possible in your design, you can organize the control of these risks much more efficiently. In other words: Control by Design! A groundbreaking new idea? Well, no, but it is one that needs to be put into practice in order to actually reduce the cost of control. To achieve this, we will first consider the definition of Control by Design and offer thoughts on how to embed it into the change processes of the organization. We will subsequently explore scenarios in case optimal Control by Design is not feasible, and we will conclude with a number of obstacles and pitfalls one may need to overcome during the implementation of Control by Design.

Control by Design: risk management as a fixed part of the (IT) development process

The term Control by Design is not new. And so-called Application Controls have also been used and implemented for quite some time. The benefits are clear. A well-programmed IT system will do the same thing every time, even on a Monday morning or Friday afternoon. In addition, in terms of monitoring, you don’t need to do labor-intensive customer document monitoring, but instead the Application Control can be tested during the implementation or system change. For the Application Control to continue to function, you can rely on well-designed IT processes (General IT Controls) to ensure that the system continues to do what it is supposed to do. Adequate General IT Controls guarantee a controlled system change process, effective authorization management, and assured system continuity ([ISAC11]). These elements are a prerequisite for determining that an automated control (Application Control) continues to do what it is supposed to do.

Yet within organizations we see that such automated control measures are not always used to their full potential. Several things can stand in the way of broad automation. One example is that the implementation of automated control measures can be complex, expensive and vulnerable to change. It may also be that these measures are not given sufficient priority in change processes because such change processes generally focus on realizing business value. For example, choices are made to automate only key risk mitigation.

The difference between Control by Design and reactively implementing Application Controls (or automating existing manual controls) where risks become manifest, is that Control by Design is about setting up a process in such a way that certain risks are controlled (prevented or mitigated) directly from the process design. This means that the process and the associated risks are the starting point of the risk mitigation, instead of the automation of already existing control measures. It is important to ensure good interaction between the process owner (who knows how his process is structured), the risk manager (who knows where the risks and controls manifest themselves in the process) and the IT specialist (who knows what systems and data are used in the process). By aligning the risk management process to the development process of a product and/or IT system (modification), it is ensured that identifying the root cause of the most important risks, and automating the associated controls, becomes a part of the organization’s standard change mechanism. When prioritizing the change calendar, make sure that it is clear which risk-related changes (e.g. implementing a hard input control) can be included in planned changes (e.g. modifying input screens). After all, it is cheaper to replace the sewer pipe if the street is going to be opened up anyway to install the fiber optic network.

The idea here is, as Elon Musk for example advocates with his First Principles approach: go back to basics. When you set up the process from scratch instead of adapting an existing process, you are more likely to come up with a different and possibly better suited design. This works best in a greenfield situation, where design choices can still be made and fewer restrictions exist resulting from an existing system landscape. The reality is that those situations are rare. So you should strive for a situation where the change processes take into account the objectives of Control by Design by default. This article focuses on that challenge.


An example is offering a discount on a customer rate. Of course, this can be done by configuring manual discount authorization/approval levels in the system. A more efficient, as well as less error-prone, step is to let the system determine which customers are eligible for standardized discounts and to apply them automatically. And if the business operation can also work from fixed rates, then the process should already be set up so that discounting is not possible at all. The risk of incorrect or unjustified discounts is thereby prevented from within the process itself. Going back to basics: the (re)design of process and IT system.
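The middle option in the example above can be sketched in a few lines of code. This is a hypothetical illustration, not a real pricing engine: the rate, eligibility rules and field names are invented. The point is that no manual discount input exists anywhere in the flow, so an incorrect or unjustified discount cannot be entered.

```python
# Hypothetical sketch of a preventive application control: the system,
# not the employee, decides which standardized discount (if any) applies.
STANDARD_RATE = 100.0

# Standardized discount rules: (eligibility check, discount percentage)
DISCOUNT_RULES = [
    (lambda c: c["years_as_customer"] >= 10, 0.10),
    (lambda c: c["products_held"] >= 3, 0.05),
]

def quote_rate(customer: dict) -> float:
    """Return the rate for a customer; no manual discount input exists."""
    discount = max(
        (pct for check, pct in DISCOUNT_RULES if check(customer)),
        default=0.0,
    )
    return round(STANDARD_RATE * (1 - discount), 2)

loyal = {"years_as_customer": 12, "products_held": 1}
new = {"years_as_customer": 1, "products_held": 1}
print(quote_rate(loyal))  # 90.0
print(quote_rate(new))    # 100.0
```

Because the discount logic lives in one place in the process design, monitoring reduces to testing these rules once per change, rather than checking individual discount decisions afterwards.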

Applying Control by Design in practice

To apply Control by Design, traditional risk management or process models remain in place. Indeed, for broad acceptance and proper operation, it is important to embed Control by Design into the models that are already used by the organization.

It is important to bring different disciplines together as much as possible: the process owner, risk management and the IT delivery partner. This is where two processes come together: the risk management cycle and the (IT) development process. It is in these processes where the Control by Design philosophy needs to be applied.

We recognize four important preconditions for the success of Control by Design. The first precondition is to apply Control by Design during the implementation of new IT systems and the digitization and/or adaptation of processes. The development process goes through the various phases of intake, analysis and determination of the requirements, in order to then build and implement these requirements. Whether you work according to a waterfall, agile or other development methodology, it always comes down to the fact that during the development process several steps of the risk management cycle are integrated, from identifying risks to mitigating and determining the monitoring strategy. In Control by Design, you want to align these steps and look specifically at where IT systems can be adapted to reduce certain risks or, better still, to eliminate them.

To do that, it must be clear which part of the end-to-end process is planned to be changed. To mitigate the risk, it is important to focus on the root cause of the risk. The BowTie and Five Whys methodologies can be used to identify these root causes. The BowTie method breaks down the risk description into cause, event and effect ([Culw16]), after which the cause can be elaborated by repeatedly asking why the risk arises. This is how you arrive at the final root cause ([Serr17]). If this root cause occurs in the part of the end-to-end process where a change is planned, Control by Design becomes particularly important. In order to identify a risk, perform the root cause analysis and come up with the best approach to eliminate or (automatically) mitigate a risk in the process, the broad expertise of business, risk management and IT needs to be brought together at the right time during the change process.
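The two techniques combine naturally: BowTie structures the risk, Five Whys drills into the cause leg. A minimal sketch, with an entirely invented risk description and chain of answers, might look as follows:

```python
# Illustrative sketch: BowTie decomposition (cause / event / effect)
# combined with Five Whys on the cause leg. All content is invented.
bowtie = {
    "event": "Incorrect product approval",
    "effect": "Customer holds an unsuitable product",
    "cause": "Assessment based on incomplete documents",
}

def five_whys(symptom: str, answers: dict, max_depth: int = 5) -> str:
    """Follow 'why?' answers until no deeper cause is known: the root cause."""
    cause = symptom
    for _ in range(max_depth):
        if cause not in answers:
            break
        cause = answers[cause]
    return cause

# Each entry answers "why does the key occur?"
whys = {
    "Assessment based on incomplete documents":
        "Documents are not validated at intake",
    "Documents are not validated at intake":
        "Intake form accepts a request without mandatory attachments",
}

print(five_whys(bowtie["cause"], whys))
# -> Intake form accepts a request without mandatory attachments
```

The root cause that falls out of the chain (the intake form) is exactly the kind of finding that Control by Design wants on the table while that part of the process is being redesigned.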

This brings us to the second precondition: make sure that during the design you know where the key risks are located across the entire width of the business process. End-to-end insight based on broad expertise is needed at that moment, because the actual root cause of a risk can occur in a completely different part of the process than where the focus lies at the moment of change. An example is when clients provide incomplete documents when requesting a product, which may result in incorrect advice or product approval. This risk can be mitigated in the closing phase by asking the client to submit these documents to complete the request, but this carries the risk that the whole assessment and advice process needs to be reperformed to take the information in these documents into account. Ideally, the cause of the risk is eliminated in the intake phase, prior to the assessment and advice processes. With the end-to-end process approach, risks are identified across the process and system chain and control measures can be implemented at (or as close as possible to) the place where they arise. This prevents the duplicate implementation of control measures that mitigate the same risk and thus benefits efficiency. From the traditional risk analysis perspective, this step is of additional importance for Control by Design to shape the design in the right place and in a timely manner. You can replace the sewer pipe where the street is opened up, but if the real problem is that far too much water needs to be drained, you’re better off replacing the pavement with urban gardens.

The third precondition is to standardize before you digitize. For Control by Design, the principle is that the more a process is standardized, the simpler the process and the easier it becomes to avoid a risk. This is not a new concept, but it is an important basis, although it is not always possible. An indication of a lack of standardization is the presence of too many deviations/workarounds in the process. We will discuss this in more detail later in the article.

The fourth precondition is to have the right and accurate data to be able to use a properly functioning automated control measure (Application Control). It needs to be clear what data is needed at what point in the process. This data must be accurate in order for the control to function properly. After all, garbage in = garbage out. Data needs to be collected from reliable sources, after which the accuracy, completeness and timeliness of the data needs to be determined before its use as a basis for an application control.
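The fourth precondition can be made concrete with a small validation gate that runs before data feeds an application control. This is a minimal sketch under assumed field names (`income`, `as_of`, and so on); a real implementation would draw its rules from the organization's data quality framework.

```python
# Minimal sketch (assumed field names) of validating data before it feeds
# an application control: completeness, accuracy and timeliness checks.
from datetime import date, timedelta

def validate_record(record: dict, required: list, max_age_days: int = 30):
    issues = []
    # Completeness: every required field must be present and non-empty
    for field in required:
        if not record.get(field):
            issues.append(f"missing: {field}")
    # Accuracy: a simple plausibility check as an example
    if record.get("income", 0) < 0:
        issues.append("implausible: negative income")
    # Timeliness: data must be recent enough to rely on
    as_of = record.get("as_of")
    if as_of and date.today() - as_of > timedelta(days=max_age_days):
        issues.append("stale: data older than threshold")
    return issues

rec = {"income": 52000, "as_of": date.today(), "customer_id": ""}
print(validate_record(rec, ["customer_id", "income", "as_of"]))
# -> ['missing: customer_id']
```

Only records that pass such a gate should be allowed to drive an automated control; otherwise the control faithfully automates the garbage.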


Figure 1. Four preconditions for Control by Design.


Customer relationship management is important as a part of overall customer service. Does the customer still have the product that best matches his situation? In order to properly conduct customer relationship management, it is necessary to schedule customer contact in order to assess financial product suitability, to record the notes of the conversation and to plan the necessary follow-up actions. High workload and operational errors pose risks to this process. Using IT system support, several process risks can be reduced. CRM software builds in triggers for scheduling the customer appointments. During the appointment, the advisor walks through a workflow process within the IT system with the customer, completes the questions and automatically records the choices in the system. The report cannot be completed in the IT system until the advisor has provided their explanation of any exceptions or specific customer choices. The IT system then automatically saves the report in the customer file and e-mails it to the customer. Many actions are taken over by the IT system. The risk of not engaging in a timely conversation with the customer, not ensuring that all required questions are addressed, not having a record of the conversation, and not actually receiving the relevant information is reduced.
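The completion gate in this example can be sketched as a single check that the workflow engine evaluates before allowing the report to be marked complete. The field names and question list below are assumptions for illustration only.

```python
# Hedged sketch of the workflow gate described above: the report cannot be
# marked complete until every required question is answered and every
# exception carries an advisor explanation. Field names are invented.
def can_complete_report(answers: dict, required: list,
                        exceptions: list, explanations: dict) -> bool:
    all_answered = all(answers.get(q) for q in required)
    all_explained = all(explanations.get(e) for e in exceptions)
    return all_answered and all_explained

required = ["risk_profile", "product_suitability"]
answers = {"risk_profile": "defensive", "product_suitability": "yes"}

# An exception without an explanation: the system blocks completion
print(can_complete_report(answers, required, ["deviating_rate"], {}))  # False
print(can_complete_report(answers, required, ["deviating_rate"],
                          {"deviating_rate": "agreed with manager"}))  # True
```

Because the gate is preventive, there is no record to repair afterwards: an incomplete report simply cannot exist in the customer file.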

It looks simple on paper and the idea finds many supporters who recognize the benefits, not least from the cost savings perspective. Who wouldn’t want to make more use of automated control measures to prevent manual work or make it impossible to make mistakes in the first place? However, the reality is different, especially in a more complex organization with a complicated IT landscape that has grown organically. Without specifically taking into account the dilemmas raised by Control by Design, the chances of successful application are greatly diminished.

Some important things to consider in advance:

  1. Control by Design is not necessarily (just) automating the existing manual controls
    Manual controls in the process are performed in a different way than Application Controls or IT Dependent Manual Controls. For example, there may be more professional judgment involved, information needed to perform the control may have to reach the reviewer in different ways through different IT applications, information is recorded in documents instead of structured data, and so on. Automating the action performed by the controller is not the goal of Control by Design: ideally, the step should become redundant (e.g. through a preventive control at the right place in the process). This difference must be clear in order to avoid disappointment in the application of Control by Design that would hinder its success.
  2. Control by Design is ineffective when there are too many deviations in the process
    A complex process is more difficult to control. When there are many product/process variations, it can be a lot of work to implement an automated, preventive control measure on all deviations that actually mitigate the risk in the process. Professional judgement necessary to perform a control and a lot of room for overruling business rules make it difficult to adequately mitigate risks via application controls. Theoretically, everything can be automated, but at irresponsible costs and with the result that the systems themselves become too complex.
    The better the processes are standardized, and the more product rationalization has taken place, the better the systems can be set up for preventive automated controls.


Figure 2. The highway. Control by Design standardizes the primary process and eliminates or monitors possible deviations that can bypass controls.

  3. Control by Design also consumes change capacity and thus requires priority
    Implementing and applying Control by Design requires commitment and investment prior to the actual IT implementation, at the expense of the available change capacity. Agile development teams with overflowing backlogs steer towards realizing as much business value as possible. A conscious prioritization of the requirements of Control by Design is therefore necessary but not popular: the value only becomes apparent in avoided manual activities, the cost of which is usually not adequately weighed against the return of other changes prioritized in a sprint. Therefore, when implementing Control by Design, its rules should be enforced: i) deviations from the Control by Design principles and steps in the change process should be made visible; ii) deviations should require formal approval; and iii) temporary acceptance of deviations should be monitored to ensure the right priority on the backlog later on. For example, when an IT system change involves a manual check instead of removing the root cause, this is a deviation from the Control by Design principles and should thus follow the above-mentioned steps.
  4. Combined insight into the end-to-end process, IT and risk helps to make the right design choices
    A key objective of Control by Design is that risks should be prevented where they arise. But where is that? End-to-end processes are often long and complex, and transcend the responsibility of individual teams at the functional, infrastructure and IT application levels. Parts of the process or technology may have been outsourced. Other parts may be using legacy IT products. Making changes in such cases is often complicated, costly and not future-proof.
    In practice, it is difficult to bring all the necessary knowledge together to deliver the right insights. Process documentation may be outdated, incomplete or insufficiently detailed. Few employees can oversee the entire process, and their time is scarce. A (key) risk analysis at process level with a good understanding of the root causes of risks is indispensable. The importance of involving the complete “triangle” of process, IT and risk, with the aim to strengthen each other and speed up the development process, cannot be stressed enough. Additionally, we emphasize the need to reserve enough time to properly map out the risks and their root causes.
  5. The responsibility for implementing an IT change that addresses a root cause may differ from where the risk manifests itself
    Even if a solid risk analysis identifies a clear root cause and the necessary (IT) change to prevent or mitigate the risk, the IT change needed does not in all cases fall within the responsibility of the team that feels the impact of the risk.
    Other scrum/development teams have their own responsibilities and priorities. Implementing a fix on a root cause may not score high on their list at that point in time. As a result, quick fixes and workarounds are often implemented, which take the pressure off the necessity to tackle the real root cause and lead to suboptimal solutions (… and go back to item 3 on this list). The parks department doesn’t have time to realize the urban gardens at present, so maybe just replace the sewer pipe for now?

Control by Design Funnel as an alternative

At the beginning of this article, lowering the cost of control was broken down into two parts. With the (automated) prevention of risk, control costs in the primary process decrease. Another way to reduce the cost of control is more effective monitoring of the operation of controls. Manual file checking is the most labor-intensive form of monitoring. Here, the Control by Design Funnel (see Figure 3) can be applied. This funnel indicates that the highest possible level of (automated) risk control lies in the development process. A lower level should only be examined if higher levels are not possible or the benefits do not cover the costs.


Figure 3. Control by Design Funnel.

In order to apply the funnel properly, it is important to not only assess the control measures during risk analysis, but also to adopt a monitoring strategy. As mentioned in the introduction, we see that more and more assurance is sought within organizations by intensifying the testing of operating effectiveness of controls and monitoring whether certain risks still occur. Automated control testing (funnel option 2, see Figure 3) and smarter control testing by using data (funnel option 3) will in that sense contribute to reducing the cost of control. Requirements to enable this automated or smarter indicator-driven control monitoring need to be provided to the software development team as an outcome of the risk analysis, subsequent assessment of the Control by Design (im)possibilities and selection of the alternative according to the funnel.


In an ideal scenario, the risk of not having a record of a customer conversation is prevented by the CRM process, as mentioned in the previous example. If automating the process to such an extent is unfeasible at the moment, one could consider automated monitoring to determine whether all the customer appointments conducted that week have resulted in a record saved in the customer file. If this monitoring cannot be automated either, then one can look at the next layer in the funnel, which is based on indicators. Suppose it has been established that the cause of an incorrect customer conversation record is a lack of time on the part of the advisor writing a report of his conversation with the customer. If the report is prepared within a day of the conversation, errors are almost never found. Should it take longer, the chance of a faulty record grows sharply. Thus, the time between the appointment and storing the record of the interaction is a quality indicator: a means of determining whether the control measure is working adequately. If the indicator shows that less than 95% of the advisor reports are saved within 2 days of the appointment, additional quality checks become necessary. Such monitoring does require that you are able to get the right data from the systems. The method of monitoring must therefore be included during the development process and introduced as a requirement during development. If these requirements are not included, often the only remaining option to assess whether the process is “in control” is the least favored, labor-intensive level 4: manual file checking.
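The indicator described here boils down to a simple calculation over two dates per conversation. A minimal sketch, with illustrative data and the 95%/2-day thresholds from the example as parameters:

```python
# Sketch of the indicator-driven monitoring described above: flag the
# control for additional quality checks when fewer than 95% of reports
# are saved within 2 days of the appointment. Data is illustrative.
from datetime import date

def indicator_ok(pairs, max_days=2, threshold=0.95) -> bool:
    """pairs: list of (appointment_date, report_saved_date)."""
    within = sum(1 for appt, saved in pairs if (saved - appt).days <= max_days)
    return within / len(pairs) >= threshold

records = [
    (date(2022, 5, 2), date(2022, 5, 3)),  # saved next day: ok
    (date(2022, 5, 2), date(2022, 5, 6)),  # saved after 4 days: too late
    (date(2022, 5, 3), date(2022, 5, 3)),  # same day: ok
    (date(2022, 5, 4), date(2022, 5, 5)),  # next day: ok
]
print(indicator_ok(records))  # False -> additional quality checks needed
```

The two dates are the only data this monitoring needs, which is exactly why the text insists that such data requirements be handed to the development team up front: if the save timestamp is never captured, the indicator cannot be computed and the funnel drops to manual file checking.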

Conclusion: Control by Design is an important concept for cost savings and better risk management

There is no universal blueprint for implementing Control by Design. Organizations differ from each other, and the way Control by Design is implemented can therefore vary. This depends, for example, on time, the maturity of the organization and the willingness to embrace the concept. It is therefore important to work towards objectives that are achievable for a specific organization.

Control by Design is an important concept to better manage risks and reduce the cost of control. Implementing the concept sounds simple, but in practice several challenges will be encountered. Implementing Control by Design requires priority, an end-to-end process perspective and the right expertise at the table. Adopting Control by Design as an integral part of the IT change process is a long-term effort. Rollout comes down to small evolutionary steps rather than radical change. It is important to make the right choices in how to deal with scarce IT capacity: make sure you only have to develop control measures once, by applying them in the right place. The effort invested during IT development will be more than returned after implementation: expensive monitoring can be avoided, as can labor-intensive manual file checks to see whether the process is running smoothly.

This requires good anchoring of Control by Design in existing IT development and risk processes. Make the steps and tools as concrete as possible and make it mandatory, measurable and visible.

In addition, there are often other initiatives in the organization associated with Control by Design. Examples are Security by Design and Privacy by Design, Business Process Management or implementations of agile software development methods. These initiatives can reinforce each other and accelerate the transition to Control by Design. So take advantage of this to join forces.

Do you recognize this ambition, are you curious about others’ experiences, or do you want to know more? We warmly invite you to exchange views with us.


[CIIA21] Chartered Institute of Internal Auditors (2021). Position paper: The three lines of defence. Retrieved from:

[Culw16] Culwick, M.D. et al. (2016). Bow-tie diagrams for risk management in anesthesia. Anaesthesia and Intensive Care, 44(6), 712-718. Retrieved from:

[ISAC11] ISACA Journal Archives (2011, September 1). IT General and Application Controls: The Model of Internalization. ISACA Journal. Retrieved from:

[Serr17] Serrat, O. (2017). Proposition 32: The Five Whys Technique. In O. Serrat, Knowledge solutions (pp. 307-310). Retrieved from:

DORA: an impact assessment

Due to rapid digitization within the European financial sector, the European Commission has introduced the Digital Operational Resilience Act (DORA) to set the basis for digital resilience at financial entities. Financial entities will be required to improve their digital resilience by enhancing their IT risk management processes, incident handling and management of third parties, while also sharing cyber-related information and experiences with their peer organizations to strengthen the sector as a whole. One distinct feature of DORA is that it also brings new financial segments into its regulatory scope under European supervision.

Please note that updates have been made to this article after the initial publication, to make this article correctly reflect the latest developments on the DORA legislation and its timelines.


In the past few years, IT regulatory requirements at the European level have increased, due to the increased use of IT and the risks it poses. In 2017, the European Banking Authority (EBA) announced its “Guidelines on ICT Risk Assessment under the Supervisory Review and Evaluation Process (SREP)”, soon followed by guidelines on PSD2, cloud service providers and ICT & security. The European Insurance and Occupational Pensions Authority (EIOPA) published its guidelines on “information and communication technology security and governance” and “outsourcing to cloud service providers” in 2020, around the same time as the European Securities and Markets Authority (ESMA) published its “Guidelines on outsourcing to cloud service providers”. The titles alone make the overlap between these guidelines from the European Supervisory Authorities (ESAs) apparent. This raises the question why each segment authority is operating in silos and reinventing the wheel instead of working together on a European scale. Apart from the supervisory benefits, financial institutions under supervision that operate in different segments of the financial sector would benefit in terms of time, costs and effort from reporting on one single set of guidelines.

The European Commission (EC) seems to understand this notion and – in line with this – proposed a new regulation in 2020 that is directed at uniformity of the network and information security and operational resilience of the financial sector as a whole called the “Digital Operational Resilience Act” (DORA).

What does DORA entail?

DORA as a proposed regulation is part of the larger Digital Finance package of the European Commission. Its goal is to propagate, drive and support innovation and competition in the realm of digital finance, while effectively managing the ICT risks associated with it. Without a doubt, use of ICT in the financial sector has increased to the extent that ICT risks cannot be addressed indirectly as a subset of business processes. Moreover, it has seeped through to the different financial services ranging from payments to clearing and settlement and algorithmic trading.

On top of that, ICT risks form a persistent challenge to the operational resilience and stability of the European financial system. Since the financial crisis of 2008, ICT risks have only been addressed indirectly as part of operational risk, which does not fully address digital operational resilience. Existing legislation by the European Supervisory Authorities overlaps too much, as each of these authorities has its own IT framework for its segment; this poses operational challenges, increases the costs of risk management for financial institutions that operate in different segments, and therefore fails to create a level playing field.

DORA aims to improve the alignment of financial institutions’ business strategies with the conduct of ICT risk management. It therefore requires that the management body maintains an active role in managing and steering ICT risk management and pursues an adequate level of cyber hygiene.

This article dives into the five pillars of DORA, explains them in further detail and provides a comparative analysis of the different types of entities and the extent to which existing ICT control frameworks cover the contents of the DORA regulation, including the corresponding gaps. The article concludes with a general roadmap of actions financial entities can take to fulfill the requirements set out by DORA.

Pillars of DORA

The Digital Operational Resilience Act (DORA) consists of the following five pillars:

  • ICT risk management requirements. In order to stay up to date with the quickly evolving cyber threat landscape, financial institutions should set up processes and systems that minimize the impact of ICT risk. ICT risks should be identified on a continuous basis from a wide range of sources and addressed through internal control measures, disaster and recovery plans to safeguard the integrity, safety and resilience of ICT systems as well as physical infrastructures that support the ICT processes within the business.
  • ICT-related incident reporting. DORA prescribes setting up appropriate processes to ensure consistent and integrated monitoring, handling and follow-up of ICT-related incidents, including the identification and eradication of root causes to prevent the recurrence of such incidents.
  • Digital operational resilience testing (DORT). Capabilities and functions within the ICT risk management framework require periodical assessment to identify weaknesses, deficiencies and gaps and implementation of corrective measures to solve these. Specific attention has been given to “Threat-Led Pen Testing” (TLPT) which enables financial entities to perform penetration testing based on the threats they are exposed to.
  • ICT third-party risk. Due to the increasing use of ICT third-party providers, financial entities are required to manage ICT third-party risk throughout the lifecycle (from contracting until termination and post-contractual stages) based on the minimum requirements prescribed in DORA.
  • Information sharing agreements. In order to raise awareness and strengthen the sector, the regulation gives financial entities room to exchange cyber threat information and intelligence.


Figure 1. House of DORA.

DORA applies to the financial entities shown in Table 1. It can be noted that certain types of entities are the more mature and traditional entities that have been in scope of previous European ICT-related regulations, such as the DNB Good Practice Information Security for banks, insurers and pension funds, and the EBA guidelines on Outsourcing and on ICT & Security Risk Management for banks. At the same time, DORA introduces new types of entities that come into scope of an ICT regulation and are subject to it for the first time, due to their (in)direct involvement in European financial processes. These include administrators of critical benchmarks like Moody’s, insurance intermediaries and ancillary insurance intermediaries (e.g., telecom companies that sell insurance on cell phones as a by-product; see Table 1).


Table 1. Scope of applicability. [Click on the image for a larger image]

DORA passed the proposal phase on July 13, 2022, when the Economic and Monetary Affairs Committee gave its approval for implementation of DORA. On November 9, the European Parliament will vote on this legislation. The expectation is that DORA will be finalized by the end of 2022, which kicks off the two-year implementation period during which financial institutions are expected to take measures to implement DORA. By the end of 2024, compliance with DORA is required. One exception to this timeline is the implementation of “Threat-Led Pen Testing” (the “Digital operational resilience testing (DORT)” pillar in Figure 1), which has a deadline at the end of 2025, as these requirements are more technical in nature.


Figure 2. Timelines DORA. [Click on the image for a larger image]

Following this introduction of DORA, including the scope and timelines of implementation, the next sections provide more detail. We start with an explanation of the requirements for ICT risk management.

ICT risk management

The ICT organization is subject to fundamental ICT risk management. DORA’s aim is to establish ICT risk management that realizes continuous identification of risks and their sources, proper follow-up, and the setup of protection mechanisms to minimize the impact of ICT risks. Realizing this requires ICT governance and a risk management framework, which the EC describes as principle- and risk-based ([ECFC20]).

ICT governance & standards

The overall responsibility for ICT risk management lies with the management body, which is also required to receive regular ICT training. Management is required to play a critical and active role in setting the guardrails for ICT risk management.

DORA does not propose specific standards to meet the ICT risk management requirements. It does, however, aim for a harmonized guideline subject to a European supervisory system. ICT governance lies at the base of realizing this.

The purpose of the ICT governance function is to design the accountability and the process for the development and maintenance of an ICT risk management framework, as well as the approvals, controls and reviews that complement, for example, ICT audit plans. Most important is the definition of clear roles and responsibilities for all ICT-related functions, including their risk tolerance levels.

Also subject to governance are the periodicity of testing, the identification of weaknesses or gaps, and potential mitigating measures. It has not yet been determined which standard or which controls should be tested, other than that incident handling and ICT third-party risk management require explicit follow-up. Hence, the scoping and implementation of the right controls requires attention.

Scoping and applying ICT risk management with DORA

For the scoping and implementation of an ICT risk management framework we will elaborate on the scope of assets and the proportionality in relation to existing controls.

For scoping, DORA refers to ICT risk management in a broad sense, covering aspects such as business functions and system accounts. However, supporting information assets should also be taken into consideration. This means that IT support tooling used for the execution of a control should be marked as in scope for ICT risk management; the scope therefore extends beyond core business applications. An example is an identity access management (IAM) tool used for the automatic granting of authorizations to users. In this case, the IAM tool should be subject to ICT risk management to ensure that the risk of unauthorized access to the core business application is mitigated.

Besides the scoping of assets, there are requirements that the regulation emphasizes, such as ICT incident handling and ICT third-party risk management. At the same time, there are already many other regulations to comply with. This triggers the proportionality discussion.

With regard to proportionality, the Dutch Central Bank (DNB) already noted in earlier publications that regulation and supervision should be aligned with the size and complexity, but foremost with the risks, of financial institutions ([DCB18a], [DCB18b]). Under DORA, microenterprises already benefit from more flexibility. DORA’s proposal also describes that tailored risks and needs depend on the size and business profile of the respective financial institution ([EuCo20]). Based on our experience, we already see many supervisory requirements for the Dutch financial services sector, which may mean that work does not need to be redone for certain areas. Financial institutions that already implement DNB’s good practice for information security might have the correct measures in place. Although DNB’s good practice is not a standard endorsed by the EC, we see that the relevant aspects in relation to DORA have been covered. The only remark is that the good practice is principle-based rather than risk-based ([DCB19]).

We therefore propose the following steps to establish proper ICT risk management in relation to DORA:

  1. Determine your scope of IT assets
  2. Identify the risks related to DORA
  3. Identify the impact based on confidentiality, integrity and availability
  4. Identify the source of the risk based on whether the risk is driven by human, process, technology or compliance
  5. Determine the likelihood and impact based on low, medium or high for each risk
  6. Link the risk to existing/implemented controls and determine your residual risk for follow-up
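The six steps above can be sketched as a simple risk-register entry with a residual-risk calculation. This is a minimal illustrative sketch: the class name, the numeric scale for low/medium/high and the control-effectiveness deduction are our assumptions, not something DORA prescribes.

```python
# Hypothetical sketch of the six-step DORA risk-assessment flow.
# Scoring scale and field names are illustrative assumptions.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class RiskEntry:
    asset: str                  # step 1: in-scope IT asset
    risk: str                   # step 2: DORA-related risk
    impact_area: str            # step 3: confidentiality/integrity/availability
    source: str                 # step 4: human/process/technology/compliance
    likelihood: str             # step 5: low/medium/high
    impact: str                 # step 5: low/medium/high
    control_effectiveness: int  # step 6: 0 (no control) .. 2 (strong control)

    def residual_risk(self) -> int:
        """Inherent score (likelihood x impact) reduced by existing controls."""
        inherent = LEVELS[self.likelihood] * LEVELS[self.impact]
        return max(inherent - self.control_effectiveness, 1)

entry = RiskEntry(
    asset="IAM tool", risk="unauthorized access to core application",
    impact_area="confidentiality", source="technology",
    likelihood="medium", impact="high", control_effectiveness=2,
)
print(entry.residual_risk())  # 2*3 - 2 = 4 -> residual risk for follow-up
```

In practice, a register like this would feed the follow-up decision in step 6: entries whose residual score stays above the entity's risk tolerance are flagged for additional measures.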

The incident handling process can facilitate adequate identification of risks on a continuous basis. Having the right data and reporting structure in place enables the organization to perform analyses and identify which possible new risks arise from IT.

ICT-related incidents

ICT-related incident management process

Many organizations are used to having an incident management process in place. The goal of this reactive process is to mitigate (remove or reduce) the impact of ICT-related disruptions and to ensure that ICT services become operational and secure in a timely manner.

Under DORA, financial entities must establish appropriate processes to ensure consistent and integrated monitoring, handling and follow-up of ICT-related incidents. This includes the identification and eradication of root causes to prevent the recurrence of such incidents. Financial entities may need to enhance their incident management process to align with these minimum requirements.

The incident management process should consist of at least the elements shown in Figure 3.


Figure 3. ICT-related incident management process. [Click on the image for a larger image]

The level of formalization of the ICT-related incident management process differs per financial entity. The more formalized the process, the more likely an incident ticketing system is in place. An incident ticketing system enables the organization to record and track incidents, as well as to monitor timely response by authorized staff.

Figure 4 gives an example of the roles and responsibilities involved in the incident management process. Note that these roles span the full range of the organization, as incidents originate and are resolved at different points in the organization.


Figure 4. Roles in the incident management process. [Click on the image for a larger image]

Based on our experience, most financial entities already have a similar incident management process in place. We therefore expect the effort needed to meet the DORA requirements for this part to be limited.

Classification of ICT-related incidents

When there are multiple ICT-related incidents, priorities must be determined. Priority is based on the impact the incident might have on operations and on its urgency (the extent to which the incident is acceptable to users or the organization). Many organizations already use criteria to prioritize incidents.
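The impact-by-urgency prioritization described here is often implemented as a simple matrix, in the style of common ITIL practice. The sketch below is illustrative only; the specific levels and priority numbers are assumptions, not part of DORA.

```python
# Illustrative ITIL-style priority matrix for ICT-related incidents.
# Levels and priority numbers are assumptions (1 = handle first).
PRIORITY = {
    ("high", "high"): 1, ("high", "medium"): 2, ("high", "low"): 3,
    ("medium", "high"): 2, ("medium", "medium"): 3, ("medium", "low"): 4,
    ("low", "high"): 3, ("low", "medium"): 4, ("low", "low"): 5,
}

def prioritize(incidents):
    """Sort incidents so the highest-priority (lowest number) comes first."""
    return sorted(incidents, key=lambda i: PRIORITY[(i["impact"], i["urgency"])])

queue = prioritize([
    {"id": "INC-2", "impact": "low", "urgency": "high"},      # priority 3
    {"id": "INC-1", "impact": "high", "urgency": "medium"},   # priority 2
])
print([i["id"] for i in queue])  # ['INC-1', 'INC-2']
```

A ticketing system would typically derive the priority automatically from the recorded impact and urgency, so that authorized staff always work from a consistently ordered queue.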

DORA describes that financial entities will classify ICT-related incidents and determine their impact based on the criteria in Figure 5.


Figure 5. Impact assessment criteria. [Click on the image for a larger image]

The above-mentioned criteria will be further specified by the Joint Committee of the European Supervisory Authorities, including materiality thresholds for determining major ICT-related incidents which will be subject to the reporting obligation (see Figure 6). Additionally, this committee will develop criteria to be applied by competent authorities for the purpose of assessing the relevance of major ICT-related incidents to other Member States’ jurisdictions.

Based on our experience, most financial entities already apply a basic set of criteria to prioritize ICT-related incidents. We expect that all financial entities will need to enhance this set of criteria to align with the DORA requirements, but this should be achievable.

Reporting of major ICT-related incidents

When a major ICT-related incident occurs, financial entities are required to report it to the relevant competent authority within the time limits shown in Figure 6.


Figure 6. Reporting timeline incidents. [Click on the image for a larger image]

A major ICT-related incident may also have an impact on the financial interests of the service users and clients of the financial entity. In that case, the financial entity must, without undue delay, inform its service users and clients about the incident, and inform them as soon as possible of all measures taken to mitigate its adverse effects.

At this moment, we note that the reporting of major ICT-related incidents to relevant competent authorities and to service users and clients is not formalized. As such, KPMG expects that implementing a formalized reporting process for major ICT-related incidents will take effort for every financial entity. The next section explains the requirements for Threat-Led Pen Testing.

Threat-Led Pen Testing

One of the key pillars of DORA is the performance of digital operational resilience testing on a periodic basis. The requirements stated in the Act support and complement the overall digital resilience framework and provide financial institutions with guidelines for the scoping, testing and tracking of ICT risks. These testing requirements are explained below.

Financial entities should follow a risk-based approach to establish, maintain and review a comprehensive digital operational resilience testing program, in line with their business and risk profiles. The tests in this program can be conducted by a third party or an internal function, and should include at least the following, performed at least yearly:

  • Vulnerability assessments
  • Open-source analysis
  • Network security assessments
  • Physical security reviews
  • Source code reviews
  • Penetration testing

Apart from the testing program, entities also have to perform vulnerability assessments before any new deployment or redeployment of (or major changes to) critical functions, applications and infrastructure components.

In addition to the general tests mentioned above, DORA states that advanced penetration tests, such as Threat-Led Penetration Tests (TLPT), should be performed at least every three years on the critical functions and services of a financial entity. TLPT means penetration testing tailored to the threats the financial entity faces; for example, a payment organization should perform penetration testing on its payment platform, as the threats there are high. The following points should be considered when performing these tests:

  • The scope of the TLPT is determined by the financial entity itself and validated with the competent authority. The scope must contain all critical functions and services, including those delivered by third parties.
  • TLPT performed should be proportionate to the size, scale, activity and overall risk profile of the financial entity.
  • EBA, ESMA and EIOPA will develop draft regulatory technical standards after consulting the ECB and taking into account relevant frameworks in the Union which apply to intelligence-based penetration tests.
  • Financial entities should apply effective risk management controls to reduce any type of disruptive risks which affect the confidentiality, integrity or availability of data and assets.
  • Reports and remediation plans should be submitted to the competent authority, which shall verify and issue an attestation.

DORA also places specific demands on the testers performing the Threat-Led Pen Testing. Reputation and suitability are key, combined with the required expertise and skill level. Moreover, testers must be certified by an accreditation board (e.g., ISACA) valid in the member states. If testers from external parties are used, the same requirements apply. In addition, when using external parties, professional indemnity insurance should be in place to manage the risks of misconduct and indemnity, and an audit or independent assurance is needed on the sound management and protection of confidential information used as part of the testing.

Apart from internal control, emphasis is also placed on managing third parties. This is explained in the next section.

ICT third-party risk

The use of ICT third-party providers is prevalent in the financial sector. This ranges from limited outsourcing for data hosting services at external data centers to the more extensive outsourcing where use of IT systems and software is cloud-based or based on the Software-as-a-Service (SaaS) model, with different types of outsourcing between the two extremes.


Figure 7. Spectrum of outsourcing. [Click on the image for a larger image]

DORA’s approach towards ICT third-party risk is based on the perspective of financial entities managing ICT third-party providers throughout the entire lifecycle from the contracting to post-termination stage. This means a more holistic process than just monitoring the achievement of service level agreements and assurance reports received from the ICT third-party providers. This perspective is similar to that of the Outsourcing Guidelines of the European Banking Authority (EBA) ([EBA19b]).


Figure 8. Lifecycle ICT third-party service provider management. [Click on the image for a larger image]

At the same time, DORA advocates the principle of proportionality when implementing measures to comply with it.

DORA defines proportionality for outsourcing as follows ([EuCo20]):

  1. “scale, complexity and importance of ICT-related dependencies” and;
  2. “the risks arising from contractual arrangements on the use of ICT services concluded with ICT third-party service providers, taking into account the criticality or importance of the respective service, process or function, and to the potential impact on the continuity and quality of financial services and activities, at individual and at group level.”

The main changes lie in the processes around pre-contracting, contracting and termination.

General requirements

Just like other regulations on outsourcing, DORA places responsibility for the outcomes of business processes, whether impacted by outsourcing or not, with the financial entity, regardless of the extent of outsourcing. Financial entities are also expected to have proper insight into their ICT third-party providers and the services delivered, by properly maintaining this information in a so-called “Register of Information”. The level of detail in this register should distinguish between ICT third-party providers that deliver services covering critical or important functions and those that do not. Where needed, the national competent authority (e.g., AFM or DNB) may request (parts of) the register of information to fulfill its supervisory role.
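The “Register of Information” is essentially a structured inventory of providers and services. As a minimal illustrative sketch (DORA does not fix a schema in the text above; all field names and values here are assumptions):

```python
# Hypothetical, minimal "Register of Information" entries.
# Field names and the providers are illustrative assumptions.
register = [
    {
        "provider": "CloudHost BV",
        "service": "data hosting",
        "supports_critical_function": True,
        "contract_start": "2021-03-01",
        "contract_end": None,              # open-ended contract
        "subcontractors": ["DC-Ops Ltd"],  # extra detail kept for critical services
    },
    {
        "provider": "PrintMail Co",
        "service": "bulk printing",
        "supports_critical_function": False,
    },
]

def critical_entries(reg):
    """Subset a supervisor (e.g., DNB) might request: critical functions only."""
    return [e for e in reg if e.get("supports_critical_function")]

print(len(critical_entries(register)))  # 1
```

The point of the sketch is the asymmetry DORA asks for: entries supporting critical or important functions carry more detail (subcontractors, contract dates) than entries that do not.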

(Pre-)contracting requirements

Under DORA, the requirements that need to be taken into account when selecting an ICT third-party provider increase significantly compared to the current situation (see Figure 8).

The reporting process mentioned in Figure 8 is the same as the existing policy of the Dutch Central Bank (DNB), which requires financial entities to notify DNB when the entity plans to enter a contractual agreement with a third-party service provider for any critical activities, or with a cloud provider ([DCB18a], [DCB18b]).

In addition to the list in Figure 8, to guide financial entities, the European Supervisory Authorities (ESAs) will jointly designate and annually update the list of ICT third-party service providers that they view as critical for financial entities. The designation of “critical ICT third-party service providers” is based on, among other things:

  • the systemic impact on financial entities in case of failure of the ICT third-party service provider;
  • the number of financial entities (global or other systemically important institutions) relying on a certain ICT third-party service provider;
  • the degree of substitutability of the ICT third-party service provider;
  • the number of countries in which the ICT third-party service provider provides services to financial entities;
  • the number of countries in which the financial entities using a specific ICT third-party provider are operating.
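One way to reason about these designation criteria is as a simple scoring function. The sketch below is purely illustrative: the weights, thresholds and the example provider are our assumptions, not anything the ESAs have published.

```python
# Hedged sketch: turning the criticality-designation criteria into a score.
# Weights and thresholds are illustrative assumptions only.
def criticality_score(provider: dict) -> int:
    score = 0
    if provider["systemic_impact_on_failure"]:   # systemic impact on failure
        score += 3
    if provider["dependent_entities"] >= 10:     # many dependent entities (assumed threshold)
        score += 2
    if not provider["easily_substitutable"]:     # low substitutability
        score += 2
    # geographic spread, capped so it cannot dominate the score
    score += min(provider["countries_served"] // 5, 2)
    return score

hyperscaler = {   # hypothetical large cloud provider
    "systemic_impact_on_failure": True,
    "dependent_entities": 40,
    "easily_substitutable": False,
    "countries_served": 27,
}
print(criticality_score(hyperscaler))  # 3 + 2 + 2 + 2 = 9
```

In the regulation itself, the designation is a supervisory judgment by the ESAs rather than a formula; the sketch only shows how the listed criteria combine into an overall assessment.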

As part of the pre-contracting stage, the following assessments and checks need to be made with regard to the ICT third-party provider before entering the contractual agreement:

  • Whether the contractual agreement concerns the outsourcing of a critical or important function;
  • Whether the supervisory conditions for contracting are met;
  • Whether a proper risk assessment has been performed, with attention to ICT concentration risk;
  • Whether proper due diligence has been performed as part of the selection and assessment process;
  • Which potential conflicts of interest the contractual agreement may cause;
  • Whether the ICT third-party service provider complies with appropriate and up-to-date information security standards;
  • Audit rights and the frequency of audits at the ICT third-party provider need to be determined based on the financial entity’s risk approach;
  • For contractual agreements entailing a service with a high level of technological complexity (for instance, software using algorithms), the financial entity should make sure it has auditors (internal or external) available with the appropriate skills and knowledge to perform the relevant audits and assessments;
  • If a financial entity plans to outsource to a third-party service provider located in a third country, the entity needs to make sure that the third country has sufficient laws in place regarding data protection and insolvency, and that these are properly enforced.

DORA places significant emphasis on ICT concentration risk and defines it as follows ([EuCo20]):

  • Contracting with an ICT third-party service provider that cannot easily be substituted by another provider; or
  • Having multiple contractual agreements with the same ICT third-party service provider or with a tightly knit group of ICT third-party service providers.

Termination requirements

DORA requires financial entities to terminate contractual agreements with ICT third-party providers under certain circumstances:

  • The ICT third-party provider breaches applicable laws, regulations or contractual terms;
  • Circumstances or material changes arise that can impact the delivered services to the extent that performance changes and affects the financial entity;
  • Weaknesses in the overall ICT risk management of the ICT third-party provider are identified that can impact the security and integrity of confidential, personal, or sensitive data;
  • Circumstances arise that result in the national competent authority not being able to effectively supervise the financial entity as a result of the contractual agreement.

Proper analysis of alternative solutions in the pre-contracting stage and development of transition plans are needed to be able to sustain business operations after terminating the contract with the ICT third-party provider.

Future outlook

Overall, DORA results in a significant increase in the requirements for managing ICT third-party providers, as it requires management of ICT third-party service providers throughout the lifecycle, from pre-contracting to post-exit.

The current state of management of ICT third-party service providers at financial entities is focused on due diligence procedures, service level management and the analysis of assurance reports received from these providers (the traditional way of “managing ICT third-party providers”). Moreover, financial entities experience difficulties in providing insight into all relevant ICT third-party service providers at the level of detail that DORA requires: most of the time, the recording of information is limited to the larger and more critical providers. KPMG’s view is that, of all the DORA pillars, compliance with the requirements to manage ICT third-party providers will require the most effort from financial entities, because the gap between the current and required future states is the widest here. Financial entities have to review their processes per lifecycle phase, and expand current procedures and controls or implement new ones to cover the DORA requirements. Large efforts lie in the creation of the “Register of Information”: within most financial entities, contracts with ICT third-party providers are dispersed across the organization and managed in a decentralized manner. Bringing all this information together in one overview is an arduous task.

Information sharing agreements

All operating financial entities experience information and cybersecurity threats in one way or another. Most of the time, these threats are similar in form and nature: common network and system vulnerabilities, hacks and malware. Overall, each financial entity battles the same threats, some more quickly or adequately than others because of differences in size, experience or other factors.

Starting from this situation, DORA prescribes that financial entities form communities and exchange cyber threat information amongst themselves. This includes indicators of compromise, tactics, techniques and procedures, and guidance on how to prevent and/or recover from the threat.

However, there are certain conditions attached to forming such information sharing agreements. They should be focused on enhancing digital operational resilience and increasing awareness of cyber threats and of how these can be identified and resolved. At the same time, conditions for participation should be set, and the data and information exchanged should be protected. Lastly, the national competent authorities should be notified when such information sharing agreements are formed.

In the current landscape, we note that working groups between financial entities exist in the banking and insurance segments, but these are broader in nature, directed at exchanging information in general rather than specifically at cyber threat information.

The current state of DORA

As mentioned in the introduction, DORA brings into scope many segments that are new to any IT regulation and have little to no experience in translating and implementing IT requirements in their organizations. At the same time, a number of segments have had their fair share of experience with IT regulations through the national competent authority (the Dutch Central Bank ([DCB19])) and the European Supervisory Authorities ([EBA19a], [EBA19b], [EIOP20]), and are more mature in governing IT in their organizations; these include banks, insurers and pension funds.

This dynamic creates a split in the effort needed to comply. More mature organizations can bridge the gap based on past experience, whereas less mature organizations also have to build the capabilities to translate IT requirements into their organization.

The analysis in Figure 9 provides a detailed view of the situation. The five pillars of DORA are plotted against the different segments in scope. Per segment, it is indicated whether any existing IT regulations/frameworks in that segment overlap with DORA, to give an indication of the effort required to comply with DORA. A value of 1 means that one existing IT regulation or framework overlaps with the requirements in the respective DORA pillar, whereas 2 means there is overlap with two existing IT regulations or frameworks.


Figure 9. Analysis of mapping of DORA pillars vs. segments. [Click on the image for a larger image]

Based on the analysis above, three specific observations can be made:

  1. What becomes immediately apparent from this analysis is that certain sectors (e.g., credit rating agencies, benchmark administrators, crowdfunding organizations) lack IT frameworks to govern IT. They are being subjected to IT regulations for the first time and therefore have no experience in translating IT regulations into controls within their organizations. The expectation is that financial entities in these segments will have to undertake considerable efforts to comply with the DORA requirements.
  2. At the same time, we note that no segment has good practices or controls in place through existing regulations that address information sharing agreements. However, the requirements for “information sharing agreements” are not the hardest set of DORA requirements to comply with. As mentioned earlier, some informal working groups among banks, insurers and pension funds already exist, and adding the requirements from DORA would most probably require little effort.
  3. Lastly, segments that already have experience in implementing IT regulatory requirements, such as credit institutions, insurers and pension funds, did so through frameworks such as the DNB Good Practice Information Security, the EBA Guidelines on ICT & Security Risk Management, the EBA Guidelines on Outsourcing and the EIOPA ICT Guidelines. However, these financial entities still have to undertake some effort to comply with DORA. For ICT risk management, for example, there are additional requirements not covered by the existing regulations/frameworks, and the guidelines of the EIOPA framework for insurers and pension funds limit outsourcing to contract and service level management only. Similarly, the reporting of ICT-related incidents and Threat-Led Pen Testing are not yet common practice and will therefore also require effort to implement properly, although to a lesser extent than for newly regulated financial entities.

Roadmap to compliance

Summing up the requirements discussed in the previous sections, we note that while some elements are entirely new, others represent an add-on to existing practices. The requirements for managing ICT third-party risks and for information sharing agreements are entirely new, whereas ICT risk management and ICT-related incidents represent add-ons to existing topics. All in all, there is quite a lot to comply with under DORA. Figure 10 summarizes the requirements of each DORA pillar that are new to financial entities.


Figure 10. Compliance roadmap. [Click on the image for a larger image]


DORA increases the attention on the ICT used by financial institutions. As discussed in this article, DORA focuses on five pillars: ICT risk management, ICT-related incident reporting, digital operational resilience testing, ICT third-party risk and information sharing agreements. The scope of financial institutions to which it applies has been broadened. Besides traditional financial institutions, such as banks and insurance companies, crypto-asset service providers are also required to comply; they will need more formalization, since no standards have yet been published for crypto-asset service providers. Hence, the current state of maturity regarding compliance with the five pillars may vary between types of organizations. This also triggers the proportionality discussion and requires financial entities to revisit their current state to determine to what extent new or additional measures should be taken.

KPMG’s view is that DORA will increase regulatory pressure and require compliance with new, additional requirements, for multiple reasons. First of all, DORA is a European IT regulation that will bring extra pressure and impact, as it will apply as law and brings financial entities under the supervision of the European Commission. Financial entities will therefore have to comply with a law, and failure to comply will be viewed as Non-Compliance with Laws and Regulations (NOCLAR), with potential legal implications.

Secondly, DORA introduces entirely new requirements that will require additional implementation efforts within the organization, including the redefinition of internal processes (ICT third-party management) and the formation of new processes (information sharing agreements). For financial entities that have little experience in complying with IT regulations, this will be an arduous task.

Thirdly, a large part of the financial entities already has to comply with many different IT regulations/guidelines. Financial entities may therefore experience so-called “regulatory fatigue”, which may impact their overall level of compliance.

KPMG is of the opinion that financial entities should start assessing the impact of DORA on their organization as soon as possible, in order to effectively utilize the two-year implementation period and achieve compliance with DORA by the end of 2024.


[DCB18a] Dutch Central Bank (2018, June 25). Good practices beheersing risico’s bij uitbesteding. Retrieved from:

[DCB18b] Dutch Central Bank (2018). Proportioneel en effectief toezicht. Retrieved from:

[DCB19] Dutch Central Bank (2019). Good Practice Informatiebeveiliging. Amsterdam: DNB.

[EBA19a] European Banking Authority (2019). EBA Guidelines on ICT and security risk management. Paris: European Banking Authority.

[EBA19b] European Banking Authority (2019). EBA Guidelines on outsourcing arrangements. Paris: European Banking Authority.

[ECFC20] European Commission First Council Working Party (2020, September 30). Digital Operational Resilience Act. Retrieved from:

[EIOP20] European Insurance and Occupational Pensions Authority (2020). Guidelines on information and communication technology security and governance. European Insurance and Occupational Pensions Authority.

[EuCo20] European Commission (2020). REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on digital operational resilience for the financial sector and amending Regulations (EC). Brussels: European Commission.

[EuPa22] European Parliament (2022, March 24). Legislative Train Schedule: Digital Operational Resilience for the Financial Sector. Retrieved from:

ESG is here to stay: is your policy management framework ready?


The world has experienced continuous change over the last few years, and it has sometimes been difficult to know where the focus should be placed. The newest change facing the world has been brought about by generational shifts and increasing climate concerns: environmental, social and governance (ESG). From the introduction of a standard EU taxonomy for ESG to ESG data challenges (see [Zhig22] and [Delf22] respectively), it has become the buzzword of 2021/2022 for organizations and governments globally. However, those in ethics and compliance functions understand that ESG is not a new concept. In reality, this is more a resurgence of concepts that have been combined due to their interdependence, and that have grown from “nice to have” into regulatory obligations.

The ESG challenge

The United Nations Principles for Responsible Investment define ESG as shown in Figure 1 ([UNPR18]).


Figure 1. United Nations Principles for Responsible Investment definition of ESG. [Click on the image for a larger image]

The breadth of these definitions may be daunting for those functions tasked with developing successful ESG strategies in their organizations. What’s more, it challenges – and largely prohibits – the traditional approach by organizations to delegate an emerging risk or legislative change to a single function in accordance with their risk framework. Implementing and managing ESG successfully will require an integrated approach that stretches across borders and areas of expertise.

Regulators driving change

ESG-centered regulatory guidance and obligations have steadily grown over the past few years, and there are expectations that these regulations will come with teeth ([Roge19]); a key factor in driving real change. To date, regulations have largely targeted sustainable investing and financial reporting obligations, supply chain or third-party risk, and diversity requirements. For example, the EU Taxonomy was released to establish a common language when discussing and reporting on sustainability topics and metrics ([Link22]). The Dutch Central Bank (De Nederlandsche Bank, DNB) has also taken steps to drive change by monitoring the level of ESG commitment in the financial sector. As of January 2022, "Climate-related risks are now also part of the fit and proper assessments of (co)policymakers of banks, insurers and pension funds. The financial undertaking in question must include in its screening application the candidate's knowledge and experience with regard to such risks. DNB amended its suitability matrices to explicitly include this." ([Link22])

Apart from new requirements, regulators like the United States Department of Justice (DoJ) have also chosen to reiterate existing obligations that remain relevant for the success of governance, risk, and compliance (GRC) frameworks. The updated DoJ guidance on evaluating corporate compliance programs is one such example that would also support a sound ESG strategy. As noted by [Bell20], “the adequacy of compliance programs is frequently relevant in civil enforcement brought by federal agencies such as the United States Environmental Protection Agency (EPA) and state environmental enforcers … and are generally recognized as foundations for effective environmental risk management”. This suggests that while the onset of new regulations will require change, organizations should also utilize their compliance frameworks to approach ESG needs in an integrated manner.

Where should you start?

With the increase in regulations and societal demand, organizations are seeking solutions to implement ESG. As a first step, conducting a materiality assessment of ESG topics will support focusing on the areas that are most relevant and impactful for building the ESG strategy ([KPMG20]). Through existing frameworks, organizations can bring their strategy to life by tailoring their ESG approach to what works for their organization without causing significant business disruptions in the process.

A policy management framework is one such framework that is both foundational and a connector between topics. Policies and procedures are the resource that organizations use to set common standards across their organization and support the realization of the organization's mission, vision, values, and strategy ([Nave21]). The policy management framework is the resource to ensure that those standards are communicated, the roles and responsibilities concerning the standards are understood, and that the designated metrics are monitored and reported accordingly; all crucial elements for the success of ESG ([KPMG20]). Traditionally siloed topics also naturally converge within the policy management framework. This supports a cross-functional approach to interdependent risks – which ESG has in abundance.

Successful policy management frameworks should include at least the areas mentioned in Figure 2 to be effective and efficient.


Figure 2. KPMG Policies and Procedures Management Framework. [Click on the image for a larger image]

The policy management framework should build upon existing fundamentals that are in place in the organization. When bringing the policy management framework to life, organizations should ensure consistency amongst policies, accuracy with respect to relevant laws and concerns, clear relations between policies and concepts, and the application of a risk/value-driven approach. Moreover, multinational organizations should ensure that the global framework accounts for local regulatory requirements and their relation to the global policies, as this is often where misalignment occurs.

Reinvent or refresh?

Once an organization completes the materiality assessment and sets its ESG strategy, it needs to build a solid governance structure and process to maintain it. Having a mature policy management framework will provide a standard template for incorporating ESG into the organization like other emerging risks. Leveraging regulatory monitoring and change management within the policy management framework enables swift mapping of existing topics and functional areas to ESG, thereby identifying alignment opportunities and in-house expertise. For example, the organization may already have established policies on the focus areas of its strategy. These could be refreshed to specifically tie in the ESG strategy, rather than creating a new set of ESG policies and procedures.

However, if an organization has treated policy management as an administrative necessity, further work will be required to be successful with ESG. As noted by [Doct21], “without effective policies in place, organizations will struggle to follow through with their ESG values as well as fail to effectively report.” Apart from an unrealized strategy, ineffective policy management can also result in increased legal costs and regulatory scrutiny. Therefore, organizations wishing to implement their ESG strategy should first review their policy management framework to ensure that the foundation is solid.

We have supported a variety of organizations in strengthening their Policy Houses and associated policy and procedure management frameworks. In one such case, we assisted a large financial services organization in establishing a meta-policy which detailed the overall framework approach, including governance, policy lifecycle, training and communications, as well as ongoing monitoring and effectiveness reviews. The benefit for that organization was a structured, well-documented, tool-enabled framework that ensured consistent coverage of all core laws and topics based on its risk appetite and strategy. The organization successfully moved from a rule-based to a value-driven approach, which supports the overall understanding of and adherence to policies and procedures and fosters the desired culture.


Strong policy management frameworks lay the foundation for risk management. Organizations without this foundation are likely to experience an ESG implementation that is siloed, overlaps with existing risk areas, and lacks the structured monitoring needed to support compliance with extensive ESG regulations. So, from stakeholders and CEOs to compliance officers and general counsels, the decision makers and responsible persons across any organization should take stock of their policy management frameworks to prepare for ESG. A few questions to consider:

  • Have you invested in your framework recently?
  • Is your framework currently effective?
  • Do resourcing constraints point towards the opportunity to automate?
  • Is your framework sufficiently integrated to manage the multi-faceted risks that ESG brings?

If these questions cannot be answered "yes" with certainty, now is the time for proactive change, before it's too late.

See also the other ESG article on Risk Management in this edition.


[Bell20] Bell, C.L. (2020, June 3). U.S. Department of Justice Revises its Guidance on Evaluating Corporate Compliance Programs. GreenbergTraurig E2 Law Blog. Retrieved from:

[Delf22] van Delft, M., Hoffman, C., Verhaar, E., & Pieroen, P. (2022). Mastering the ESG Reporting and Data Challenges. Compact, 2022(1). Retrieved from:

[Doct21] DocTract (2021, December 13). Why ESG Demands a Strong Policy Framework. Retrieved from:

[KPMG20] KPMG China (2020). Integrating ESG into your business. A step-by-step ESG guide for Hong Kong-listed issuers. Retrieved from:

[Link22] LinkLaters (2022). ESG Outlook in the Netherlands. Retrieved from:

[Nave21] Navex Global (N.D.). Definitive Guide to Policy & Procedure Management, second edition. Retrieved from:

[Roge19] Rogers, J. & Richardson, S. (2019, December). ESG investing: The sharpening teeth of disclosure. How to stay ahead of the curve, minimize future costs of compliance and feed the growing demand from investors for responsible products and services. White & Case Financial Regulatory Observer. Retrieved from:

[UNPR18] United Nations Principles for Responsible Investment (2018). PRI Reporting Framework Main Definitions. Retrieved from:

[Zhig22] Zhigalov, A. & de Graaff, G. (2022). Emerging Global and European Sustainability Reporting Requirements. Navigating the complexity and getting ready. Compact, 2022(1). Retrieved from:

Continuous control monitoring: the trend and how to get on board

How does the market think?

In a survey about Governance, Risk and Compliance ([KPMG19]), 57% of participants stated that only 10% of their internal control framework consisted of automated controls. However, 72% of participants identified control automation as a top priority. During the International SAP Conference on Internal Controls, Compliance and Risk Management in 2021 ([TAC22]), participants were asked several questions related to internal controls and their automation.

Figure 1 shows that 57% of the respondents would like to improve automated testing of their internal controls; 50% of respondents indicated that automated control testing and risk monitoring would be the highest priority on their GRC digitalization roadmap. However, 56% of respondents also stated that there are no technologies leveraged (yet) to automate their control testing.


Figure 1. Poll results from the International SAP Conference on Internal Controls, Compliance and Risk Management ([TAC22]). [Click on the image for a larger image]

Why automation, and what can we automate?

Organizations are aiming to automate the testing of controls, but why? Because automation of controls will increase assurance while requiring less effort to manually perform or test the control. This is described with practical examples in [Klei16], which calculates the cost savings, increase in assurance, and increase in quality for an example control (detecting possible duplicate vendor invoices). Once control testing is automated, the frequency of testing can be increased until it becomes continuous, which brings additional benefits. A publication from The Institute of Internal Auditors ([Code05]) states about continuous auditing: "The power of continuous auditing lies in the intelligent and efficient continuous testing of controls and risks that results in timely notification of gaps and weaknesses to allow immediate follow-up and remediation." While continuous monitoring or testing is more the responsibility of the 2nd line of defense and continuous auditing lies with the 3rd line of defense, the statement applies to both: continuous monitoring or testing leads to timely notification of gaps and weaknesses and enables immediate follow-up and remediation.
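As an illustration of what such an automated test can look like, the possible-duplicate-invoice control could be approximated with a few lines of Python. This is a generic sketch, not the implementation from [Klei16]; all field names and values are invented for illustration:

```python
from collections import defaultdict

def find_possible_duplicates(invoices):
    """Group invoices by (vendor, amount, invoice number) and flag any
    combination that occurs more than once as a possible duplicate."""
    groups = defaultdict(list)
    for inv in invoices:
        key = (inv["vendor_id"], inv["amount"], inv["invoice_no"])
        groups[key].append(inv["doc_id"])
    return {key: docs for key, docs in groups.items() if len(docs) > 1}

# Hypothetical invoice records, as they might be extracted from an ERP system.
invoices = [
    {"doc_id": "5100001", "vendor_id": "V001", "amount": 1250.00, "invoice_no": "INV-42"},
    {"doc_id": "5100002", "vendor_id": "V001", "amount": 1250.00, "invoice_no": "INV-42"},
    {"doc_id": "5100003", "vendor_id": "V002", "amount": 980.00,  "invoice_no": "INV-43"},
]
print(find_possible_duplicates(invoices))
```

In practice, such a script would read invoice line items from the ERP system and run on a schedule, turning a periodic manual sample test into a continuous, full-population check.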

The similarities and differences between continuous auditing and continuous monitoring are shown in Table 1.


Table 1. Continuous auditing versus continuous monitoring. [Click on the image for a larger image]

In summary, continuous automated testing or monitoring of controls is interesting for organizations as it is cost efficient, has a high level of reliability and allows for timely notifications and follow-up.

While the testing or monitoring of almost any control can be automated to some extent through periodic data analytics, robotics, small scripts in Python, or even macros in Excel, [Gies20] describes that this is easiest for configuration and authorization controls, which are automated in nature because they are programmed or configured directly in the application. IT-dependent controls (e.g. controls based on a report) have slightly less potential for automation, followed by completely manual controls, for which automation is less straightforward – for example, a procedural control requiring both the CFO and CEO to physically sign a document while in the same room.
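For a configuration control, such a small script can be as simple as comparing the application's current settings against expected values. The settings and threshold below are hypothetical, chosen only to show the shape of the check:

```python
# Minimal sketch of an automated configuration check. The setting names and
# the 1.0% threshold are invented; a real check would read the application's
# actual configuration tables.
EXPECTED = {
    "three_way_match_active": True,   # invoice must match PO and goods receipt
    "payment_tolerance_pct": 1.0,     # maximum allowed tolerance
}

def check_configuration(actual):
    """Compare actual settings against expected values; return deviations."""
    deviations = []
    if not actual.get("three_way_match_active", False):
        deviations.append("three-way match is switched off")
    if actual.get("payment_tolerance_pct", 0.0) > EXPECTED["payment_tolerance_pct"]:
        deviations.append("payment tolerance exceeds 1.0%")
    return deviations

print(check_configuration({"three_way_match_active": True, "payment_tolerance_pct": 2.5}))
```

Run periodically, an empty result means the control is operating as configured; any deviation can be routed to the control owner for follow-up.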

While both continuous auditing and continuous monitoring are relevant and interesting topics, the remainder of this article will focus more on the continuous monitoring capabilities of selected tooling.

Systems and tools for automation

There are different systems and tools that have capabilities for continuous control monitoring. Some examples are MetricStream, SAI360, ServiceNow and SAP. Some might even say that Robotic Process Automation (RPA) and low-code platforms can also meet these needs. While this is probably theoretically correct, the costs of setting up and maintaining such RPA or low-code solutions are not always considered in the business case. Developing an RPA solution, for example, often requires a specialized developer or team to gather requirements and to develop, test and deploy the robot. If the process changes after the RPA solution goes live, the robot needs to be adjusted accordingly, which again takes time from the specialized development team. Other tools, such as GRC tools, are often owned by the internal control function and usually require less effort from IT or specialized teams.

Organizations that use SAP as their main ERP or financial system often use an SAP solution for continuous monitoring. Nowadays, SAP offers two solutions that can be leveraged for automated testing of controls and continuous monitoring thereof: SAP Process Control (part of SAP GRC) and SAP Financial Compliance Management.

SAP Process Control

SAP Process Control is part of the SAP GRC application. It offers, among other capabilities, control documentation, workflows for control assessment and testing, reporting, and automated control monitoring. A detailed overview of this system is provided in [Kimb17]. This article focuses on the automated control monitoring capabilities of SAP Process Control. SAP offers multiple integration scenarios for control monitoring, as highlighted in Figure 2.


Figure 2. Integration scenarios in SAP Process Control. [Click on the image for a larger image]

While there are ten possible scenarios, the four scenarios highlighted in green in Figure 2 are most commonly used. These are further explained in Table 2.


Table 2. Commonly used integration Scenarios in SAP Process Control explained. [Click on the image for a larger image]

Once the integration with the target SAP ECC or SAP S/4 system is in place, Data Sources (essentially a table, view, or set of tables and views) and Business Rules (rules that determine which records in the retrieved data source are "right" and "wrong") can be set up in SAP Process Control to determine whether the automated control in the target system is correctly configured. If the control is correctly configured, the business rule returns a "passed" result and the control is automatically reported as effective in SAP Process Control. If it is not, SAP Process Control automatically creates an issue workflow and sends it, accompanied by the results of the business rule, to the person responsible for the control for further follow-up. An example of such a workflow task in SAP Process Control is shown in Figure 3.


Figure 3. SAP Process Control Automated Monitoring workflow. [Click on the image for a larger image]
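The data source / business rule mechanism described above can be pictured roughly as follows. This is a generic Python sketch of the pass/fail logic, not SAP code; the records and the rule are invented for illustration:

```python
def evaluate_business_rule(data_source, rule):
    """Apply a pass/fail rule to every record of a data source and return
    ('passed', []) or ('failed', deficient_records)."""
    deficiencies = [rec for rec in data_source if not rule(rec)]
    if deficiencies:
        # In SAP Process Control, a failed rule triggers an issue workflow
        # to the control owner; here we simply return the evidence.
        return "failed", deficiencies
    return "passed", []

# Hypothetical data source: tolerance keys with their configured limits.
data_source = [
    {"tolerance_key": "PP", "upper_limit_pct": 0.0},
    {"tolerance_key": "PE", "upper_limit_pct": 5.0},
]
status, evidence = evaluate_business_rule(
    data_source, rule=lambda rec: rec["upper_limit_pct"] <= 1.0
)
print(status, evidence)
```

The value of the tooling lies not in this evaluation itself but in the surrounding automation: scheduled execution, issue workflows and reporting of results per control.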

On top of this check, SAP Process Control also offers change log check functionality. This functionality can read and analyze the full change history of a table (e.g. the configuration table for a 3-way-match control), provided the table is flagged for change logging. By combining the "regular" configuration check and the change log check in SAP Process Control, 100% coverage can be achieved, meaning that the configuration settings of a target SAP system are completely and continuously monitored.
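The change log check can be sketched in the same style: instead of inspecting only the current value, it replays the logged changes and flags every moment at which the setting deviated. Field names and values below are invented for illustration:

```python
def scan_change_log(entries, compliant):
    """Return the timestamps at which a logged change left the setting
    in a non-compliant state."""
    return [e["changed_at"] for e in entries if not compliant(e["new_value"])]

# Hypothetical change log of a 3-way-match flag ("X" = active).
change_log = [
    {"changed_at": "2022-01-10", "new_value": "X"},   # switched on
    {"changed_at": "2022-03-02", "new_value": ""},    # switched off
    {"changed_at": "2022-03-03", "new_value": "X"},   # switched back on
]
print(scan_change_log(change_log, compliant=lambda v: v == "X"))
```

A point-in-time check on 2022-03-03 would have found the setting compliant; only the change log reveals the day on which the control was switched off, which is why combining both checks gives full coverage.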

SAP Financial Compliance Management

SAP Financial Compliance Management is a relatively new solution from SAP. The aim of SAP Financial Compliance Management is to provide a system that can be used to comply with SOx, with a low total cost of ownership, leveraging a set of existing, pre-defined monitoring content.

As part of SAP Financial Compliance Management, SAP currently provides 60 Core Data Services (CDS) views in SAP S/4 which can be leveraged. These 60 CDS views are provided out of the box. It is also possible to create additional CDS views which can be read by SAP Financial Compliance Management.

The CDS views are read using so-called "automated procedures" in SAP Financial Compliance Management. These procedures run to determine whether a control linked to the procedure is effective or ineffective. If a procedure's result is ineffective, an issue is created for follow-up by the responsible user. An example of such a workflow task in SAP Financial Compliance Management is shown in Figure 4.


Figure 4. SAP Financial Compliance Management procedure results. [Click on the image for a larger image]

SAP Process Control and SAP Financial Compliance Management side by side

Both solutions from SAP can be used for continuous control monitoring of automated controls in SAP target systems. While they are largely similar, there are also some differences. Table 3 shows a comparison.


Table 3. Comparison between SAP Process Control and SAP Financial Compliance Management. [Click on the image for a larger image]

While SAP Process Control has been around for several years, contains a broad range of functionalities, and could be considered more heavy-duty, SAP Financial Compliance Management is a new solution from SAP, positioned more as a quick and easy introduction to control automation and SOx compliance. Both solutions provide the tools needed to perform continuous control monitoring.

Looking at the roadmap for the remainder of 2022, there is a clear focus on the further development of SAP Financial Compliance Management, with seven planned activities. For SAP Process Control, only one development is planned on the roadmap. On the one hand, this might mean SAP Process Control is a stable solution, as it has been around for many years. On the other hand, it also shows SAP's ambition to enhance the new Financial Compliance Management system. Both systems are, and remain, compatible with SAP S/4. This gives customers a choice and the opportunity to assess which solution best fits their requirements.


Control automation and continuous control monitoring are still trending topics in the market. There are different applications and tools that provide functionality for continuous control monitoring. The applications delivered by SAP – SAP Process Control and SAP Financial Compliance Management – have their differences, but both deliver the functionalities needed to make the next step in the continuous control monitoring efforts of the internal control or internal audit function.


[Code05] Coderre, D. (2005). Global Technology Audit Guide: Continuous Auditing: Implications for Assurance, Monitoring, and Risk Assessment. The Institute of Internal Auditors. Retrieved from:

[Gies20] van der Giesen, S. & Speelman, V. (2020). Exploring digital: Empowering the internal control function. Compact, 2020(3). Retrieved from:

[Kimb17] Kimball, D.A. & van der Giesen, S. (2017). A practical view on SAP Process Control. Compact, 2017(4). Retrieved from:

[Klei16] Klein Tank, K. & van Hillo, R. (2016). It’s time to embrace continuous monitoring. Compact, 2016(4). Retrieved from:

[KPMG19] KPMG (2019, May). Survey – Governance, Risk and Compliance. Retrieved from:

[TAC22] TAC Events (2022, March). Poll results – International SAP Conference on Internal Controls, Compliance and Risk Management 2021. Retrieved from:

Incorporating ESG in risk management


As a Risk & Controls professional, you sometimes find yourself in the following situation: you have just finished the year-end in-control statement and celebrated another successful end-of-year cycle with your team, when you receive an email from the CFO asking: "Do we have an internal controls framework for ESG reporting?" You are familiar with the term ESG. In fact, you just bought an electric vehicle to show your personal commitment to the topic. However, an internal controls framework for ESG reporting is completely new to you, and you don't even know where to begin.

Following this scenario, questions that naturally surface are: “What is the information required to report and how do I ensure completeness, accuracy and compliance of such information being reported? Are there appropriate internal controls in place within different processes to ensure transparency, accuracy and consistency of the data being disclosed and reported? How do I assess whether I am doing enough to comply with the regulatory requirements in its true essence and not make it a box-ticking exercise? How does my role in this journey differ from what the sustainability department is responsible for?”

If the questions above sound familiar to you at all, you are not alone. For many organizations, ESG and ESG reporting have moved out of the office of the Chief Sustainability Officer (CSO) into the purview of the CFO, as the topic slowly becomes a focal point and climbs to the top of boardroom and C-suite agendas. Regulators across the globe have been driving the inclusion of ESG in reporting, as described in [Zhig22].

Understanding the need of the hour, we suggest some simple, albeit not easy, steps to consider when commencing the ESG reporting journey.

The ESG Reporting Journey

There are some considerations ([Schm22]) to be kept in mind by a Risk & Controls professional like yourself while starting and continuing on this journey:

  • Define the strategy for the risk function. The ESG risk profile should be underpinned with risk appetite statements, a robust framework and taxonomy as well as clear metrics to allow the management to monitor the amount of risk it is willing to accept in pursuit of the organizational objectives.
    For instance, consider a statement: "We have a low risk appetite for non-compliance with ESG reporting regulations, whether out of ignorance or willfulness; therefore, we focus on education, training, awareness and accountability of actions and disclosures."
  • Self-assessment of skills and capabilities. Ensure your risk function is credible and well-positioned to add to the dialogue concerning strategic change. This implies a need for action on several fronts, such as hiring, training, and career development of talent competent in identifying ESG-related risks and putting an internal controls framework in place. The risk function should stay up to date with regulatory changes in the ESG space, such as the introduction of the EU Taxonomy and the reporting requirements proposed by the SEC, and be quick to analyze the impact of non-compliance on the organization's reputation. Risk professionals should also be able to assess the robustness of existing processes and controls – for instance, assessing how the HR department collects the employee-related numbers to be disclosed and whether the controls are appropriate for complete and accurate reporting.
  • Define roles and responsibilities. Define and agree on the role of the risk function within the business planning cycle – set out chronologically and map check points for risk management-facilitated discussions on key strategic initiatives. ESG internal control specialists should be allocated the responsibility to perform risk assessments and double materiality assessments. Additionally, the risk function should play a role in defining the organization's policies and procedures for ESG-related disclosure risks and controls.
  • Enhance risk management technologies. Make better use of available technologies, visualization tools and dashboarding to support senior management decisions on strategic risk. Invest in emerging risks, horizon scanning and stress testing capabilities to support better conversations on long-term implications of strategic decisions.
    For example, KPMG’s Sofy platform is often used for ESG regulations compliance tracking, carbon emissions monitoring, providing assurance over supporting data collection & analytics, ESG project impact tracking and performing maturity assessments.

Evolving your risk function towards the future ambition of the organization can be a complex undertaking. The following key steps are the core for a successful transformation:

  1. Look at establishing a governance structure with clear roles and responsibilities. The organization should set up adequate sustainability governance with clear roles and responsibilities in order to define policies, oversee the end-to-end ESG process from the definition of strategy through to the disclosures being made, and ensure there are appropriate controls throughout the process.
    In conjunction with management, it is important to understand the ESG topics of investor focus. You should focus on gathering existing documentation (e.g. baseline data, reporting strategy documents, output of process reviews) and review existing stakeholder materiality assessments, ERM results, internal board presentations, and analyst reports.
  2. Assess the as-is state for ESG reporting within the organization. While assessing the as-is state, give some thought to the following questions and points for a holistic overview:
    1. Is the ESG theme part of your organization’s values? Is the S(ocial) element included in the ethics & integrity employee training sessions?
    2. Is there sufficient knowledge of the G(overnance) aspects amongst oversight bodies to enable them to carry out their role appropriately?
    3. Are there clear well-established reporting lines, authorities and responsibilities for the E(nvironment) theme activities which also enable the organization to hold people accountable for their actions like waste disposal, carbon emissions, energy efficiency?
    4. How can you include fraud risks into ESG risk assessment activity to avoid greenwashing activities?
    5. Select and develop entity-level governance controls, such as policies and procedures for ESG reporting. Develop process-level controls for ESG disclosure activities (e.g. reporting of numbers under the gender and diversity KPI, or the number of accidents), along with technology-driven controls for the IT systems used to generate the quantifiable figures.
    6. Can you already leverage on the existing lines of information and communication to use and communicate control information with respect to internal and external ESG reporting?
    7. Is there relevant and sufficient capability with your function to perform ESG risk assessments and regular evaluation of the designed ESG reporting internal controls framework?

    As a Risk & Controls professional, start by assessing the maturity of the internal controls framework for the relevant ESG metrics and prepare a list of the gaps that would need to be remediated to reach the end state. Focus on the Responsible, Accountable, Consulted, and Informed (RACI) matrix for appropriate allocation of responsibilities across the organization. Also perform a data readiness assessment to understand how efficiently the data can be used for disclosures and what remediations would be required along the way.
    For instance, for reporting Greenhouse Gas (GHG) emissions under Scope 1 and 2 (see note 1), assess the process of collecting the data and calculating the numbers to be reported. Assess the key risks and validate which controls are present, or would be required, in the process to mitigate them.

  3. Design the internal controls framework for relevant ESG metrics. Based on the new governance structure and the as-is assessment, a new ESG internal controls framework including process, controls, reporting, technology, and data improvement recommendations for a future state Target Operating Model (TOM) should be prepared. Also include a Change Management and transformation plan for an efficient implementation process. For example:
    1. At an entity level, the risk function should design management controls for regular materiality assessments to monitor sustainability goals. Additionally, they should consider cut-off procedures to ensure data is presented and calculated for the correct period.
    2. Another operational example – for reporting of GHG emissions under Scope 1 and 2 – the internal controls will have to be designed at a process level to ensure:
      • Completeness and accuracy of source data being used for calculation of GHG emissions in the organization
      • Complete and accurate calculation of GHG emissions
      • Transparency, consistency and relevance of GHG emissions data
  4. Implementation of the internal controls framework. Plug the gaps identified and support execution of the designed ESG reporting program and controls. This would include system implementations, on-the-job training of staff, and deployment of the roadmap towards ESG reporting.
  5. Sustain the framework. This new framework must be tested by your team over time and will require overhauling when the ESG metrics change following the materiality assessment. With appropriate internal status reporting, including the testing results, data can be corrected in a timely and complete manner and accurate reporting targets can be achieved.
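To make the Scope 1 and 2 controls above concrete: the core calculation is simply activity data multiplied by an emission factor, and the completeness and accuracy controls amount to validating every input record before it enters the calculation. This is a generic sketch; the factors below are invented placeholders, not real conversion factors, and real reporting would use published factors (e.g. from the GHG Protocol):

```python
# Hypothetical emission factors in tCO2e per unit of activity; placeholders
# only, chosen to illustrate the structure of the calculation.
EMISSION_FACTORS = {
    ("natural_gas", "m3"): 0.002,    # Scope 1 example: fuel burned on site
    ("electricity", "kWh"): 0.0004,  # Scope 2 example: purchased energy
}

def calculate_emissions(activity_records):
    """Validate each record (input control) and sum emissions per scope."""
    totals = {"scope1": 0.0, "scope2": 0.0}
    for rec in activity_records:
        factor = EMISSION_FACTORS.get((rec["source"], rec["unit"]))
        if factor is None or rec["quantity"] < 0:
            # Completeness/accuracy control: reject unknown sources, wrong
            # units, or negative quantities before they enter the totals.
            raise ValueError(f"record fails input controls: {rec}")
        totals[rec["scope"]] += rec["quantity"] * factor
    return totals

records = [
    {"scope": "scope1", "source": "natural_gas", "unit": "m3", "quantity": 10000},
    {"scope": "scope2", "source": "electricity", "unit": "kWh", "quantity": 50000},
]
print(calculate_emissions(records))
```

Rejecting a record outright is only one possible control design; logging the deviation for follow-up and remediation is an alternative that fits a monitoring workflow.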


Figure 1. Your road to reporting in the ESG journey. [Click on the image for a larger image]


Risk & Controls professionals can help organizations establish a long-term vision rather than merely managing short-term risks. This presents a unique opportunity for risk professionals to take a prominent role and drive the transformation within the organization towards a better future.

After carefully considering these five steps and your company's current situation, you can confidently respond to your CFO: "No, we currently do not have an internal controls framework for ESG reporting, but I know what to do. I will arrange a meeting to get started."

See also the other ESG article on Risk Management in this edition.


  1. Scope 1 emissions: direct emissions from owned or controlled sources; Scope 2 emissions: indirect emissions from purchased energy; and Scope 3 emissions: indirect emissions, other than the ones under Scope 2, that occur in the value chain of an organization.


[Schm22] Schmucki, P. (2022, February 1). ESG and the evolving risk management function. KPMG Switzerland Blog. Retrieved from:

[Zhig22] Zhigalov, A. & de Graaff, G. (2022). Emerging global and European sustainability reporting requirements. Compact, 2022(1). Retrieved from:

Emerging global and European sustainability reporting requirements

This article looks at new developments in sustainability reporting at a global and European level. There is a broad, multi-stakeholder desire for coherence and consistency in sustainability reporting, and major standard setters are collaborating on prototypes of what may later become a unified solution. In this paper we share what we know about the proposals for the EU CSRD, the EU Taxonomy and the IFRS ISSB standards, and indicate how companies can prepare for these global and European developments.


Regardless of regulation and domicile, companies – both public and private – are under pressure from regulators, investors, lenders, customers and others to improve their sustainability credentials and related reporting. Companies often report using multiple standards, metrics or frameworks with limited effectiveness and impact, a high risk of complexity and ever-increasing cost. Moreover, it can be daunting to keep track of the ever-changing reporting frameworks and new regulations.

As a result, there is a global demand for the major stakeholders involved in sustainability reporting standard setting to collectively develop a set of comparable and consistent standards ([IFR20]). This would allow companies to ease reporting fatigue and prepare for compliance with transparent and common requirements. Greater consistency would reduce complexity and help build public trust through greater transparency of corporate sustainability reporting. Investors, in turn, would benefit from increased comparability of reported information.

However, the demand for global coherence and greater consistency in sustainability reporting is yet to be met. This paper provides an overview of the current state of affairs and highlights the most prominent collaborative attempts to set standards, through the IFRS Foundation Sustainability Standards Board, EU Corporate Sustainability Reporting Directive and EU Taxonomy.

Global sustainability reporting developments: IFRS International Sustainability Standards Board (ISSB) in focus

The new International Sustainability Standards Board (ISSB) aims to develop sustainability disclosure standards that are focused on enterprise value. The goal is to stimulate globally consistent, comparable and reliable sustainability reporting using a building block approach. With strong support from The International Organization of Securities Commissions (IOSCO), a rapid route to adoption is expected in a number of jurisdictions. In some jurisdictions, the standards will provide a baseline either to influence or to be incorporated into local requirements. Others are likely to adopt the standards in their entirety. Companies need to monitor their jurisdictions’ response to the standards issued by the ISSB and prepare for their implementation.

There is considerable investor support behind the ISSB initiative, and the Glasgow Financial Alliance for Net Zero (GFANZ) announced at COP26 that over $130 trillion of private capital is committed to transforming the global economy towards net zero ([GFAN21]). Investors expect the ISSB to bring the same focus, comparability and rigor to sustainability reporting as the International Accounting Standards Board (IASB Board) has done for financial reporting. This could mean that public and private organizations will adopt the standards in response to investor or social pressure.

ISSB has provided prototype standards on climate-related disclosures and general requirements for sustainability disclosures, which are based on existing frameworks and standards, including Task Force on Climate-Related Financial Disclosures (TCFD) and Sustainability Accounting Standards Board (SASB). As for now the prototype standards have been released for discussion purposes only. The prototypes cover climate-related disclosures and general requirements for disclosures that should form the basis for future standard setting on other sustainability matters.


Figure 1. What contributes to the ISSB and IFRS Sustainability Disclosure Standards. [Click on the image for a larger image]

The prototypes are based on the latest insight into existing frameworks and standards. They follow the four pillars of the TCFD’s recommended disclosures – governance, strategy, risk management, and metrics and targets – enhanced by climate-related industry-specific metrics derived from the SASB’s 77 industry-specific standards. Additionally, the prototypes embrace input from other frameworks and stakeholders, including input from the IASB Board’s management commentary proposals. The ISSB builds prototypes using a similar approach to IFRS Accounting Standards. The general disclosure requirements prototype was inspired by IAS 1 Presentation of Financial Statements, which sets out the overall requirements for presentation under IFRS Accounting Standards.

Companies that previously adopted TCFD should consider identifying and presenting information on topics other than climate and focus on sector-specific metrics, while those companies that previously adopted SASB should focus on strategic and process-related requirements related to governance, strategy and risk management.


Figure 2. How Sustainability Disclosure Standards are supposed to look. [Click on the image for a larger image]

The prototypes shed light on the proposed disclosure requirements. Material information should be disclosed across the presentation, thematic and industry-specific standards. Material information is supposed to:

  1. provide a complete and balanced explanation of significant sustainability risks and opportunities;
  2. cover governance, strategy, risk management and metrics and targets;
  3. focus on the needs of investors and creditors, and drivers of enterprise value;
  4. be consistent, comparable and connected;
  5. be relevant to the sector and the industry;
  6. be present across time horizons: short-, medium- and long-term.

Material metrics should be based on measurement requirements in the climate prototype or other frameworks such as the Greenhouse Gas Protocol.

The climate prototype has a prominent reference to scenario analysis. Such analysis can help investors assess the possible exposures from a range of hypothetical circumstances and can be a helpful tool for a company’s management in assessing the resilience of the company’s business model and strategy to climate-related risks.

What is scenario analysis?

Scenario analysis is a structured way to consider how climate-related risks and opportunities could impact a company’s governance framework, business model and strategy. Scenario analysis is used to answer ‘what if’ questions. It does not aim to forecast or predict what will happen.

A climate scenario is a set of assumptions on how the world will react to different degrees of global warming – for example, the carbon prices and other factors needed to limit global warming to 1.5 °C. By their nature, scenarios may differ from the assumptions underlying the financial statements. However, careful consideration needs to be given to the extent to which linkage between the scenario analysis and these assumptions is appropriate.

The prototypes do not specify a single location where the information should be disclosed. The prototypes allow for cross-referencing to information presented elsewhere, but only if it is released at the same time as the general-purpose financial report. For example, the MD&A (management discussion & analysis) or management commentary may be the most appropriate place to provide information required by future ISSB standards.


Figure 3. Examples of potential places for ISSB-standards disclosure. [Click on the image for a larger image]

As for an audit of such disclosure, audit requirements are not within the ISSB’s remit. Regardless of local assurance requirements, companies will need to ensure they have the processes and controls in place to produce robust and timely information. Regulators may choose to require assurance when adopting the standards.

How the policy context of the EU shapes the reporting requirements

In line with the Sustainable Finance Action Plan of the European Commission, the EU has taken a number of measures to ensure that the financial sector plays a significant part in achieving the objectives of the European Green Deal ([EUR18]). The European policy maker states that better data from companies about the sustainability risks they are exposed to, and their own impact on people and the environment, is essential for the successful implementation of the European Green Deal and the Sustainable Finance Action Plan.


Figure 4. The interplay of EU sustainable finance regulations. [Click on the image for a larger image]

The following trends build up a greater demand for transparency and uptake of corporate sustainability information in investment decision making:

  1. Increased awareness that climate change will have severe consequences when not actively addressed
  2. Social stability requires more focus on equal treatment of people, including a more equal distribution of income and financial capital
  3. Allocating capital to companies with successful long-term value creation requires more comprehensive insights in non-financial value factors
  4. Recognition that large corporate institutions have a much broader role than primarily serving shareholders

The European Commission, as policy maker, addresses these trends through comprehensive legislation that tackles issues directly, as well as indirectly through corporate disclosure requirements that support investors’ decision-making.

In terms of the interplay between the European and global standard setters, it is interesting to note that collaboration is highly endorsed. The EU Commission clearly states that EU sustainability reporting standards need to be globally aligned and aims to incorporate the essential elements of the globally accepted standards currently being developed. The proposals of the International Financial Reporting Standards (IFRS) Foundation to create a new Sustainability Standards Board are called relevant in this context ([EUR21d]).

Proposal for a Corporate Sustainability Reporting Directive

On April 21, 2021, the EU Commission announced the adoption of its proposal for a Corporate Sustainability Reporting Directive (CSRD), in line with the commitment made under the European Green Deal. The CSRD will amend the existing Non-Financial Reporting Directive (NFRD) and will substantially increase the reporting requirements on the companies falling within its scope in order to expand the sustainability information available to users.


Figure 5. European sustainability reporting standards timeline. [Click on the image for a larger image]

The proposed directive will entail a significant increase in the number of companies subject to the EU sustainability reporting requirements. The NFRD currently in place for reporting on sustainability information covers approximately 11,700 companies and groups across the EU. The CSRD is expected to increase the number of firms subject to EU sustainability reporting requirements to approximately 49,000. Small and medium-sized listed companies get an extra three years to comply. Three criteria determine the applicability of the CSRD to companies (listed or non-listed); at least two of the three must be met:

  • more than 250 employees;
  • more than EUR 40 mln turnover;
  • more than EUR 20 mln total assets.
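The two-of-three test above is simple enough to express as a short scope check. The sketch below is illustrative only (function and parameter names are our own, and thresholds follow the proposal text quoted above, not legal advice):

```python
def csrd_in_scope(employees: int, turnover_eur: float, assets_eur: float) -> bool:
    """Return True if a company meets at least two of the three CSRD size criteria.

    Thresholds as stated in the proposal: more than 250 employees,
    more than EUR 40 mln turnover, more than EUR 20 mln total assets.
    Illustrative sketch only; listing status and member-state transposition
    details are out of scope here.
    """
    criteria_met = [
        employees > 250,
        turnover_eur > 40_000_000,
        assets_eur > 20_000_000,
    ]
    return sum(criteria_met) >= 2

# Example: 300 employees and EUR 50 mln turnover meet two criteria,
# so the company would fall within scope.
```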

New developments will come with significant changes and potential challenges for companies in scope. The proposed Directive has additional requirements that will affect the sustainability reporting of those affected ([EUR21a]):

  1. The Directive aims to clarify the principle of double materiality and to remove any ambiguity about the fact that companies should report information necessary to understand how sustainability matters affect them, and information necessary to understand the impact they have on people and the environment.
  2. The Directive introduces new requirements for companies to provide information about their strategy, targets, the role of the board and management, the principal adverse impacts connected to the company and its value chain, intangibles, and how they have identified the information they report.
  3. The Directive specifies that companies should report qualitative and quantitative as well as forward-looking and retrospective information, and information that covers short-, medium- and long-term time horizons as appropriate.
  4. The Directive removes the possibility for Member States to allow companies to report the required information in a separate report that is not part of the management report.
  5. The Directive requires exempted subsidiary companies to publish the consolidated management report of the parent company reporting at group level, and to include a reference in its legal-entity (individual) management report to the fact that the company in question is exempted from the requirements of the Directive.
  6. The Directive requires companies in scope to prepare their financial statements and their management report in XHTML format and to mark-up sustainability information.


Figure 6. Nature of double materiality concept. [Click on the image for a larger image]

The CSRD has overall requirements on how to report, general disclosure requirements on how the company has organized and managed itself and topic specific disclosure requirements in the field of sustainability. It should be noted that the company sustainability reporting requirements are much broader than climate risk, e.g., environmental, social, governance and diversity are the topics addressed by the CSRD.


Figure 7. Overview of the reporting requirements of the CSRD. [Click on the image for a larger image]

Extended reporting requirements that come with the CSRD may require companies in scope of this regulation to start preparing now. Here is an illustrative timeline for companies to become CSRD ready.


Figure 8. A potential way forward to become CSRD ready. [Click on the image for a larger image]

EU Taxonomy – new financial language for corporates

The EU Taxonomy and the delegated regulation are the EU’s first formal steps to require sustainability reporting in the effort to achieve its green objectives.

Over the financial year 2021, so-called large (more than 500 employees) listed entities have to disclose, in their non-financial statement as part of the management report, how their turnover, CapEx and OpEx are split between Taxonomy-eligible activities (%) and Taxonomy-non-eligible activities (%), including further qualitative information.

Over the financial year 2022, these activities need to be aligned with the criteria for sustainability: contribute to the environmental objectives, do no significant harm to the other objectives, and comply with minimum safeguards. Alignment should then be reported as the proportion of turnover, CapEx and OpEx attributable to assets or processes associated with economic activities that qualify as environmentally sustainable. For financial institutions, this translates into the requirement to report the green asset ratio, which in principle is the ratio of Taxonomy-eligible or Taxonomy-aligned assets as a percentage of total assets.
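The eligibility split and the green asset ratio described above are, at their core, percentage calculations over a KPI base. A minimal sketch, with illustrative function names (the actual delegated-act templates require much more granularity):

```python
def taxonomy_split(total: float, eligible: float) -> tuple[float, float]:
    """Split a KPI base (turnover, CapEx or OpEx) into a Taxonomy-eligible
    and a Taxonomy-non-eligible percentage, as required for FY2021 disclosure.
    Illustrative only; names are our own."""
    eligible_pct = eligible / total * 100
    return eligible_pct, 100 - eligible_pct


def green_asset_ratio(aligned_assets: float, total_assets: float) -> float:
    """Green asset ratio for a financial institution: Taxonomy-eligible or
    Taxonomy-aligned assets as a percentage of total assets."""
    return aligned_assets / total_assets * 100
```

For example, EUR 50 mln of eligible turnover on a EUR 200 mln base yields a 25% eligible / 75% non-eligible split.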


Figure 9. EU Taxonomy timeline. [Click on the image for a larger image]

The “delegated act” under the Taxonomy Regulation sets out the technical screening criteria for economic activities that can make a “substantial contribution” to climate change mitigation and climate change adaptation. In order to gain political agreement at this stage, texts relating to crops and livestock production were deleted, and those relating to electricity generation from gaseous and liquid fuels only relate to renewable, non-fossil sources. On the other hand, texts on the manufacture of batteries and plastics in primary form have been added, and the sections on information and communications technology, and professional, scientific and technical activities have been augmented.

With further updates of the technical screening criteria for the environmental objective of climate mitigation, we will also see the development of the technical screening criteria for transitional activities. Those transitional economic activities should qualify as contributing substantially to climate change mitigation if their greenhouse gas emissions are substantially lower than the sector or industry average, if they do not hamper the development and deployment of low-carbon alternatives, and if they do not lead to a lock-in of assets incompatible with the objective of climate neutrality, considering the economic lifetime of those assets.

Moreover, those economic activities that qualify as contributing substantially to one or more of the environmental objectives by directly enabling other activities to make a substantial contribution to one or more of those objectives are to be reported as enabling activities.

The EU Commission estimates that the first delegated act covers the economic activities of about 40% of EU-domiciled listed companies, in sectors which are responsible for almost 80% of direct greenhouse gas emissions in Europe. A complementary delegated act, expected in early 2022, will include criteria for the agricultural and energy sector activities that were excluded this time around. The four remaining environmental objectives – sustainable use of water and marine resources, transition to a circular economy, pollution prevention and control, and protection and restoration of biodiversity and ecosystems – will be addressed in a further delegated act scheduled for Q1 of this year.


Figure 10. EU Taxonomy conceptual illustration. [Click on the image for a larger image]

Companies shall disclose the proportion of environmentally sustainable economic activities that align with the EU Taxonomy criteria. The European Commission ([EUR21c]) takes the view that the translation of environmental performance into financial variables (turnover, CapEx and OpEx KPIs) gives investors and financial institutions clear and comparable data to help them with their investment and financing decisions. The main KPIs for non-financial companies include the following:

  • The turnover KPI represents the proportion of the net turnover derived from products or services that are Taxonomy aligned. The turnover KPI gives a static view of the companies’ contribution to environmental goals.
  • The CapEx KPI represents the proportion of the capital expenditure of an activity that is either already Taxonomy aligned or part of a credible plan to extend or reach Taxonomy alignment. CapEx provides a dynamic and forward-looking view of companies’ plans to transform their business activities.
  • The OpEx KPI represents the proportion of the operating expenditure associated with Taxonomy-aligned activities or to the CapEx plan. The operating expenditure covers direct non-capitalized costs relating to research and development, renovation measures, short-term lease, maintenance and other direct expenditures relating to the day-to-day servicing of assets of property, plant and equipment that are necessary to ensure the continued and effective use of such assets.
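Since all three KPIs are the Taxonomy-aligned share of a financial total, they can be computed from activity-level data in the same way. The following sketch uses hypothetical names and a deliberately simplified data model (the regulation additionally requires breakdowns by environmental objective and by transitional/enabling activity):

```python
from dataclasses import dataclass


@dataclass
class Activity:
    """One economic activity with its financials. `aligned` flags whether the
    activity is Taxonomy-aligned under the technical screening criteria.
    Illustrative model only."""
    turnover: float
    capex: float
    opex: float
    aligned: bool


def taxonomy_kpis(activities: list[Activity]) -> dict[str, float]:
    """Compute the turnover, CapEx and OpEx KPIs as the Taxonomy-aligned
    percentage of each company-wide total."""
    kpis = {}
    for field in ("turnover", "capex", "opex"):
        total = sum(getattr(a, field) for a in activities)
        aligned = sum(getattr(a, field) for a in activities if a.aligned)
        kpis[field] = aligned / total * 100 if total else 0.0
    return kpis
```

For instance, a company with one aligned activity (turnover 100, CapEx 20, OpEx 10) and one non-aligned activity (turnover 300, CapEx 20, OpEx 30) would report a turnover KPI of 25%, a CapEx KPI of 50% and an OpEx KPI of 25%.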

The plan that accompanies both the CapEx and OpEx KPIs shall be disclosed at the economic activity aggregated level and meet the following conditions:

  • It shall aim to extend the scope of Taxonomy-aligned economic activities, or it shall aim for economic activities to become Taxonomy aligned within a maximum period of 10 years.
  • It shall be approved by the management board of non-financial undertakings or another body to which this task has been delegated.

In addition, non-financial companies should provide for a breakdown of the KPIs based on the economic activity pursued, including transitional and enabling activities, and the environmental objective reached.


Figure 11. EU Taxonomy disclosure requirements. [Click on the image for a larger image]

As for challenges companies face when preparing for EU Taxonomy disclosure, the following key implementation challenges are observed in our practice:

  1. administrative burden and systems readiness;
  2. alignment with other reporting frameworks and regulations;
  3. data availability;
  4. definitions alignment across all forms of management reporting;
  5. integration of EU Taxonomy reporting into strategic decision making.

Furthermore, the Platform on Sustainable Finance is consulting ([EUR21b]) on extending the Taxonomy to cover “brown” activities and on a new Social Taxonomy. The current Taxonomy covers only activities that are definitely “green”, a binary classification. The Platform notes the importance of encouraging non-green activities to transition and suggests two new categories: “significantly harmful” and “no significant harm”. The aim of a Social Taxonomy would be to identify economic activities that contribute to advancing social objectives. A follow-up report by the Commission is expected soon. The eventual outcome would be a mandatory social taxonomy, which would add further to the corporate reporting requirements mentioned above, to company processes, and to company-level and product disclosures for the buy-side. It would also form the basis for a Social Bond Standard.


The evolution of sustainability reporting is happening at a fast pace. Collective efforts at global and European level are helping make the disclosure requirements more coherent and consistent, and therefore more comparable and reliable. The prototype standards released so far give reason for optimism: rather than designing something entirely new, the European and global standard setters have prioritized building on existing reporting frameworks and guidance for the sake of consistency. Standardizing sustainability reporting is, after all, not only a long-awaited exercise but also a dynamic and multi-centred challenge. The EU CSRD, the EU Taxonomy and the IFRS ISSB standards will all ultimately contribute to the availability of high-quality information about sustainability risks and opportunities, including the impact companies have on people and the environment. This in turn will improve the allocation of financial capital to companies and activities that address social, health and environmental problems, and ultimately build trust between those companies and society. This is a pivotal moment for corporate sustainability reporting; more updates on the developments will certainly follow!

Read more on this subject in “Mastering the ESG reporting and data challenges“.


[EUR18] European Commission (2018). Communication from the Commission. Action Plan: Financing Sustainable growth. Retrieved from:

[EUR21a] European Commission (2021). Proposal for a Directive of the European Parliament and of the Council amending Directive 2013/34/EU, Directive 2004/109/EC, Directive 2006/43/EC and Regulation (EU) No 537/2014, as regards corporate sustainability reporting.

[EUR21b] European Commission (2021). Call for feedback on the draft reports by the Platform on Sustainable Finance on a Social Taxonomy and on an extended Taxonomy to support economic transition. Retrieved from:

[EUR21c] European Commission (2021). FAQ: What is the EU Taxonomy Article 8 delegated act and how will it work in practice? Retrieved from:

[EUR21d] European Commission (2021). Questions and Answers: Corporate Sustainability Reporting Directive proposal. Retrieved from:

[GFAN21] GFANZ, Glasgow Financial Alliance for Net Zero (2021). Amount of finance committed to achieving 1.5º C now at scale needed to deliver the transition. Retrieved from:

[IFR20] IFRS Foundation (2020). Consultation Paper on Sustainability Reporting. Retrieved from:

[IOSC21] IOSCO (2021). Media Release IOSCO/MR/16/2021. Retrieved from: