
Emerging global and European sustainability reporting requirements

This article looks at new developments in sustainability reporting at a global and European level. Stakeholders worldwide are calling for coherence and consistency in sustainability reporting, and major standard setters are collaborating on prototypes of what could later become a unified solution. In this paper we share what we know about the proposals for the EU CSRD, the EU Taxonomy and the IFRS ISSB, and indicate how companies can prepare for these new global and European developments.

Introduction

Regardless of regulation and domicile, companies – both public and private – are under pressure from regulators, investors, lenders, customers and others to improve their sustainability credentials and related reporting. Companies often report using multiple standards, metrics or frameworks, with limited effectiveness and impact, a high risk of complexity and ever-increasing cost. Moreover, it can be daunting to keep track of the ever-changing reporting frameworks and new regulations.

As a result, there is a global demand for the major stakeholders involved in sustainability reporting standard setting to collectively develop a set of comparable and consistent standards ([IFR20]). This would ease companies' reporting fatigue and allow them to prepare for compliance with transparent and common requirements. Greater consistency would reduce complexity and help build public trust through greater transparency of corporate sustainability reporting. Investors, in turn, would benefit from increased comparability of reported information.

However, the demand for global coherence and greater consistency in sustainability reporting is yet to be met. This paper provides an overview of the current state of affairs and highlights the most prominent collaborative standard-setting efforts: the IFRS Foundation's International Sustainability Standards Board, the EU Corporate Sustainability Reporting Directive and the EU Taxonomy.

Global sustainability reporting developments: IFRS International Sustainability Standards Board (ISSB) in focus

The new International Sustainability Standards Board (ISSB) aims to develop sustainability disclosure standards that are focused on enterprise value. The goal is to stimulate globally consistent, comparable and reliable sustainability reporting using a building block approach. With strong support from the International Organization of Securities Commissions (IOSCO) ([IOSC21]), rapid adoption is expected in a number of jurisdictions. In some jurisdictions, the standards will provide a baseline either to influence or to be incorporated into local requirements; others are likely to adopt the standards in their entirety. Companies need to monitor their jurisdictions' response to the standards issued by the ISSB and prepare for their implementation.

There is considerable investor support behind the ISSB initiative, and the Glasgow Financial Alliance for Net Zero (GFANZ) announced at COP26 that over $130 trillion of private capital is committed to transforming the global economy towards net zero ([GFAN21]). Investors expect the ISSB to bring the same focus, comparability and rigor to sustainability reporting as the International Accounting Standards Board (IASB) has brought to financial reporting. This could mean that public and private organizations will adopt the standards in response to investor or social pressure.

The ISSB has published prototype standards on climate-related disclosures and on general requirements for sustainability disclosures, which are based on existing frameworks and standards, including those of the Task Force on Climate-Related Financial Disclosures (TCFD) and the Sustainability Accounting Standards Board (SASB). For now, the prototypes have been released for discussion purposes only; they are intended to form the basis for future standard setting on other sustainability matters.

Figure 1. What contributes to the ISSB and IFRS Sustainability Disclosure Standards.

The prototypes are based on the latest insights into existing frameworks and standards. They follow the four pillars of the TCFD's recommended disclosures – governance, strategy, risk management, and metrics and targets – enhanced by climate-related industry-specific metrics derived from the SASB's 77 industry-specific standards. Additionally, the prototypes embrace input from other frameworks and stakeholders, including the IASB's management commentary proposals. The ISSB builds the prototypes using an approach similar to that of the IFRS Accounting Standards: the general disclosure requirements prototype was inspired by IAS 1 Presentation of Financial Statements, which sets out the overall requirements for presentation under IFRS Accounting Standards.

Companies that previously adopted TCFD should consider identifying and presenting information on topics other than climate and focus on sector-specific metrics, while those companies that previously adopted SASB should focus on strategic and process-related requirements related to governance, strategy and risk management.

Figure 2. How Sustainability Disclosure Standards are supposed to look.

The prototypes shed light on the proposed disclosure requirements. Material information should be disclosed across the presentation standard, the thematic standards and the industry-specific standards. Material information is supposed to:

  1. provide a complete and balanced explanation of significant sustainability risks and opportunities;
  2. cover governance, strategy, risk management and metrics and targets;
  3. focus on the needs of investors and creditors, and drivers of enterprise value;
  4. be consistent, comparable and connected;
  5. be relevant to the sector and the industry;
  6. be present across time horizons: short-, medium- and long-term.

Material metrics should be based on measurement requirements in the climate prototype or other frameworks such as the Greenhouse Gas Protocol.

The climate prototype makes a prominent reference to scenario analysis. Such analysis can help investors assess the possible exposures under a range of hypothetical circumstances and can be a helpful tool for a company's management in assessing the resilience of the company's business model and strategy to climate-related risks.

What is scenario analysis?

Scenario analysis is a structured way to consider how climate-related risks and opportunities could impact a company's governance framework, business model and strategy. Scenario analysis is used to answer 'what if' questions. It does not aim to forecast or predict what will happen.

A climate scenario is a set of assumptions about how the world will react to different degrees of global warming – for example, the carbon prices and other factors needed to limit global warming to 1.5 °C. By their nature, scenarios may differ from the assumptions underlying the financial statements. However, careful consideration needs to be given to the extent to which linkage between the scenario analysis and these assumptions is appropriate.
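
To make the 'what if' character concrete, the sketch below compares a company's hypothetical carbon cost under two illustrative scenarios. All figures and scenario names are invented for illustration and are not taken from any official scenario set.

```python
# Illustrative only: hypothetical emissions and carbon-price assumptions,
# not figures from any official climate scenario.

scenarios = {
    "1.5C_orderly": {"carbon_price_eur_per_tCO2e": 150, "demand_change": -0.05},
    "3C_hot_house": {"carbon_price_eur_per_tCO2e": 30, "demand_change": 0.00},
}

company = {"annual_emissions_tCO2e": 200_000, "annual_revenue_eur": 500_000_000}

for name, s in scenarios.items():
    carbon_cost = company["annual_emissions_tCO2e"] * s["carbon_price_eur_per_tCO2e"]
    revenue = company["annual_revenue_eur"] * (1 + s["demand_change"])
    print(f"{name}: carbon cost EUR {carbon_cost:,.0f}, "
          f"carbon cost as % of revenue {carbon_cost / revenue:.1%}")
```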

The prototypes do not specify a single location where the information should be disclosed. They allow cross-referencing to information presented elsewhere, but only if it is released at the same time as the general-purpose financial report. For example, the MD&A (management discussion & analysis) or management commentary may be the most appropriate place to provide the information required by future ISSB standards.

Figure 3. Examples of potential places for ISSB-standards disclosure.

As for an audit of such disclosure, audit requirements are not within the ISSB’s remit. Regardless of local assurance requirements, companies will need to ensure they have the processes and controls in place to produce robust and timely information. Regulators may choose to require assurance when adopting the standards.

How the policy context of the EU shapes the reporting requirements

In line with the Sustainable Finance Action Plan of the European Commission, the EU has taken a number of measures to ensure that the financial sector plays a significant part in achieving the objectives of the European Green Deal ([EUR18]). The European policy maker states that better data from companies about the sustainability risks they are exposed to, and their own impact on people and the environment, is essential for the successful implementation of the European Green Deal and the Sustainable Finance Action Plan.

Figure 4. The interplay of EU sustainable finance regulations.

The following trends are building greater demand for transparency and for the uptake of corporate sustainability information in investment decision making:

  1. Increased awareness that climate change will have severe consequences when not actively addressed
  2. Social stability requires more focus on equal treatment of people, including a more equal distribution of income and financial capital
  3. Allocating capital to companies with successful long-term value creation requires more comprehensive insights into non-financial value factors
  4. Recognition that large corporate institutions have a much broader role than primarily serving shareholders

As a policy maker, the European Commission addresses these trends through comprehensive legislation that tackles issues both directly and, indirectly, through corporate disclosures that support investors' decision making.

In terms of the interplay between the European and global standard setters, it is interesting to note that collaboration is strongly endorsed. The EU Commission clearly states that EU sustainability reporting standards need to be globally aligned and aims to incorporate the essential elements of the globally accepted standards currently being developed. The proposals of the International Financial Reporting Standards (IFRS) Foundation to create a new Sustainability Standards Board are considered relevant in this context ([EUR21d]).

Proposal for a Corporate Sustainability Reporting Directive

On April 21, 2021, the EU Commission announced the adoption of its proposal for a Corporate Sustainability Reporting Directive (CSRD), in line with the commitment made under the European Green Deal. The CSRD will amend the existing Non-Financial Reporting Directive (NFRD) and will substantially increase the reporting requirements for the companies falling within its scope, in order to expand the sustainability information available to users.

Figure 5. European sustainability reporting standards timeline.

The proposed directive will entail a significant increase in the number of companies subject to the EU sustainability reporting requirements. The NFRD, currently in place for reporting on sustainability information, covers approximately 11,700 companies and groups across the EU. The CSRD is expected to increase the number of firms subject to EU sustainability reporting requirements to approximately 49,000. Listed small and medium-sized companies get an extra three years to comply. Whether the CSRD applies to a (listed or non-listed) company is determined by three size criteria, of which at least two must be met (a minimal applicability check is sketched after the list):

  • more than 250 employees;
  • more than EUR 40 mln net turnover;
  • more than EUR 20 mln total assets.
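
To illustrate the two-out-of-three test, here is a minimal sketch; the thresholds follow the criteria listed above, while the function name and example figures are hypothetical and the final directive text remains leading.

```python
def csrd_in_scope(employees: int, turnover_eur: float, total_assets_eur: float) -> bool:
    """Rough two-out-of-three check based on the CSRD size criteria above.
    (Hypothetical helper; always verify against the final directive text.)"""
    criteria_met = [
        employees > 250,
        turnover_eur > 40_000_000,
        total_assets_eur > 20_000_000,
    ]
    return sum(criteria_met) >= 2

# Example: 300 employees, EUR 35 mln turnover, EUR 25 mln assets -> two criteria met
print(csrd_in_scope(300, 35_000_000, 25_000_000))  # True
```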

New developments will come with significant changes and potential challenges for companies in scope. The proposed Directive has additional requirements that will affect the sustainability reporting of those affected ([EUR21a]):

  1. The Directive aims to clarify the principle of double materiality and to remove any ambiguity about the fact that companies should report information necessary to understand how sustainability matters affect them, and information necessary to understand the impact they have on people and the environment.
  2. The Directive introduces new requirements for companies to provide information about their strategy, targets, the role of the board and management, the principal adverse impacts connected to the company and its value chain, intangibles, and how they have identified the information they report.
  3. The Directive specifies that companies should report qualitative and quantitative as well as forward-looking and retrospective information, and information that covers short-, medium- and long-term time horizons as appropriate.
  4. The Directive removes the possibility for Member States to allow companies to report the required information in a separate report that is not part of the management report.
  5. The Directive requires exempted subsidiary companies to publish the consolidated management report of the parent company reporting at group level, and to include a reference in their legal-entity (individual) management reports to the fact that the company in question is exempted from the requirements of the Directive.
  6. The Directive requires companies in scope to prepare their financial statements and their management report in XHTML format and to mark-up sustainability information.

Figure 6. Nature of double materiality concept.

The CSRD contains overall requirements on how to report, general disclosure requirements on how the company has organized and managed itself, and topic-specific disclosure requirements in the field of sustainability. It should be noted that the sustainability reporting requirements are much broader than climate risk: environmental, social, governance and diversity topics are all addressed by the CSRD.

Figure 7. Overview of the reporting requirements of the CSRD.

The extended reporting requirements that come with the CSRD mean that companies in scope may need to start preparing now. Figure 8 shows an illustrative timeline for companies to become CSRD ready.

Figure 8. A potential way forward to become CSRD ready.

EU Taxonomy – new financial language for corporates

The EU Taxonomy and its delegated regulation are the first formal steps of the EU to require sustainability reporting in an effort to achieve its green objectives.

Over the financial year 2021, so-called large (more than 500 employees) listed entities have to disclose, in the non-financial statement that is part of the management report, how their turnover, CapEx and OpEx are split between Taxonomy-eligible activities (%) and Taxonomy-non-eligible activities (%), including further qualitative information.

Over the financial year 2022, these activities also need to be aligned: they must meet the criteria for a substantial contribution to the environmental objectives, do no significant harm to the other objectives and comply with minimum safeguards. Alignment should then be reported as the proportion of turnover, CapEx and OpEx associated with assets or processes of economic activities that qualify as environmentally sustainable. For financial institutions, in turn, this translates into the requirement to report the green asset ratio, which in principle is the proportion of Taxonomy-eligible or Taxonomy-aligned assets as a percentage of total assets.
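
In arithmetic terms, these disclosures boil down to simple proportions. The sketch below uses hypothetical figures and deliberately simplified definitions (the delegated acts prescribe the exact numerators and denominators), showing an eligibility split for a non-financial company and a green asset ratio for a financial institution.

```python
# Hypothetical, simplified illustration of the Taxonomy proportions described above.

# Non-financial company: split turnover into eligible vs non-eligible activities.
turnover_eligible = 120.0      # EUR mln from Taxonomy-eligible activities
turnover_total = 400.0         # EUR mln total net turnover
eligible_share = turnover_eligible / turnover_total
print(f"Taxonomy-eligible turnover: {eligible_share:.1%}, "
      f"non-eligible: {1 - eligible_share:.1%}")

# Financial institution: green asset ratio (simplified).
aligned_assets = 8.0           # EUR bn Taxonomy-aligned (or eligible) assets
total_covered_assets = 100.0   # EUR bn assets in the ratio's denominator
green_asset_ratio = aligned_assets / total_covered_assets
print(f"Green asset ratio: {green_asset_ratio:.1%}")
```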

Figure 9. EU Taxonomy timeline.

The “delegated act” under the Taxonomy Regulation sets out the technical screening criteria for economic activities that can make a “substantial contribution” to climate change mitigation and climate change adaptation. In order to gain political agreement at this stage, texts relating to crop and livestock production were deleted, and those relating to electricity generation from gaseous and liquid fuels now relate only to renewable, non-fossil sources. On the other hand, texts on the manufacture of batteries and of plastics in primary form have been added, and the sections on information and communications technology and on professional, scientific and technical activities have been augmented.

Alongside further updates of the technical screening criteria for the climate change mitigation objective, we will also see the development of technical screening criteria for transitional activities. Those transitional economic activities should qualify as contributing substantially to climate change mitigation if their greenhouse gas emissions are substantially lower than the sector or industry average, if they do not hamper the development and deployment of low-carbon alternatives, and if they do not lead to a lock-in of assets incompatible with the objective of climate neutrality, considering the economic lifetime of those assets.

Moreover, those economic activities that qualify as contributing substantially to one or more of the environmental objectives by directly enabling other activities to make a substantial contribution to one or more of those objectives are to be reported as enabling activities.

The EU Commission estimates that the first delegated act covers the economic activities of about 40% of EU-domiciled listed companies, in sectors that are responsible for almost 80% of direct greenhouse gas emissions in Europe. A complementary delegated act, expected in early 2022, will include criteria for the agricultural and energy sector activities that were excluded this time around. The four remaining environmental objectives – sustainable use of water and marine resources, transition to a circular economy, pollution prevention and control, and protection and restoration of biodiversity and ecosystems – will be addressed in a further delegated act scheduled for the first quarter of 2022.

Figure 10. EU Taxonomy conceptual illustration.

Companies shall disclose the proportion of their environmentally sustainable economic activities that align with the EU Taxonomy criteria. The European Commission ([EUR21c]) takes the view that translating environmental performance into financial variables (turnover, CapEx and OpEx KPIs) gives investors and financial institutions clear and comparable data to help them with their investment and financing decisions. The main KPIs for non-financial companies include the following (a simple numerical illustration follows the list):

  • The turnover KPI represents the proportion of the net turnover derived from products or services that are Taxonomy aligned. The turnover KPI gives a static view of the companies’ contribution to environmental goals.
  • The CapEx KPI represents the proportion of the capital expenditure of an activity that is either already Taxonomy aligned or part of a credible plan to extend or reach Taxonomy alignment. CapEx provides a dynamic and forward-looking view of companies’ plans to transform their business activities.
  • The OpEx KPI represents the proportion of the operating expenditure associated with Taxonomy-aligned activities or with the CapEx plan. The operating expenditure covers direct non-capitalized costs relating to research and development, renovation measures, short-term leases, maintenance and other direct expenditures relating to the day-to-day servicing of assets of property, plant and equipment that are necessary to ensure the continued and effective use of such assets.
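
Purely as an illustration of how each KPI relates a Taxonomy-aligned amount to its total, a minimal sketch with hypothetical figures:

```python
# Hypothetical figures; the delegated act defines what may be counted as "aligned".
financials = {
    "turnover": {"aligned": 150.0, "total": 500.0},   # EUR mln
    "capex":    {"aligned": 40.0,  "total": 80.0},    # includes CapEx under a credible plan
    "opex":     {"aligned": 12.0,  "total": 60.0},    # direct non-capitalized costs in scope
}

for kpi, v in financials.items():
    print(f"{kpi} KPI: {v['aligned'] / v['total']:.1%} Taxonomy aligned")
```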

The plan that accompanies both the CapEx and OpEx KPIs shall be disclosed at the aggregated economic activity level and meet the following conditions:

  • It shall aim to extend the scope of Taxonomy-aligned economic activities, or it shall aim for economic activities to become Taxonomy aligned within a maximum period of 10 years.
  • It shall be approved by the management board of non-financial undertakings or another body to which this task has been delegated.

In addition, non-financial companies should provide for a breakdown of the KPIs based on the economic activity pursued, including transitional and enabling activities, and the environmental objective reached.

Figure 11. EU Taxonomy disclosure requirements.

In our practice, we observe the following key implementation challenges when companies prepare for EU Taxonomy disclosure:

  1. administrative burden and systems readiness;
  2. alignment with other reporting frameworks and regulations;
  3. data availability;
  4. definitions alignment across all forms of management reporting;
  5. integration of EU Taxonomy reporting into strategic decision making.

Furthermore, the Platform on Sustainable Finance is consulting ([EUR21b]) on extending the Taxonomy to cover “brown” activities and on a new Social Taxonomy. The current Taxonomy covers only activities that are definitely “green”, implying a binary classification. The Platform notes the importance of encouraging non-green activities to transition and suggests two new concepts: “significantly harmful” and “no significant harm”. The aim of a Social Taxonomy would be to identify economic activities that contribute to advancing social objectives. A follow-up report by the Commission is expected soon. The eventual outcome would be a mandatory social taxonomy, which would add further to the corporate reporting requirements mentioned above, as well as to company processes and to company-level and product disclosures for the buy-side. It would also be the basis for a Social Bond Standard.

Conclusion

The evolution of sustainability reporting is happening at a fast pace. Collective efforts at a global and European level are helping to develop disclosure requirements that are more coherent and consistent, and therefore more comparable and reliable. The prototype standards released so far give reason for optimism: rather than designing something entirely new, the European and global standard setters have prioritized building on existing reporting frameworks and guidance for the sake of consistency. Sustainability reporting standardization is, after all, not only a long-awaited activity but also a dynamic and multi-centred challenge. The EU CSRD, the EU Taxonomy and the IFRS ISSB will all ultimately contribute to the availability of high-quality information about sustainability risks and opportunities, including the impact companies have on people and the environment. This in turn will improve the allocation of financial capital to companies and activities that address social, health and environmental problems, and ultimately build trust between those companies and society. This is a pivotal moment for corporate sustainability reporting; more updates on these developments will most certainly follow.

Read more on this subject in “Mastering the ESG reporting and data challenges”.

References

[EUR18] European Commission (2018). Communication from the Commission. Action Plan: Financing Sustainable growth. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52018DC0097

[EUR21a] European Commission (2021). Proposal for a Directive of the European Parliament and of the Council amending Directive 2013/34/EU, Directive 2004/109/EC, Directive 2006/43/EC and Regulation (EU) No 537/2014, as regards corporate sustainability reporting.

[EUR21b] European Commission (2021). Call for feedback on the draft reports by the Platform on Sustainable Finance on a Social Taxonomy and on an extended Taxonomy to support economic transition. Retrieved from: https://ec.europa.eu/info/publications/210712-sustainable-finance-platform-draft-reports_en

[EUR21c] European Commission (2021). FAQ: What is the EU Taxonomy Article 8 delegated act and how will it work in practice? Retrieved from: https://ec.europa.eu/info/sites/default/files/business_economy_euro/banking_and_finance/documents/sustainable-finance-taxonomy-article-8-faq_en.pdf

[EUR21d] European Commission (2021). Questions and Answers: Corporate Sustainability Reporting Directive proposal. Retrieved from: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1806

[GFAN21] GFANZ, Glasgow Financial Alliance for Net Zero (2021). Amount of finance committed to achieving 1.5º C now at scale needed to deliver the transition. Retrieved from: https://www.gfanzero.com/press/amount-of-finance-committed-to-achieving-1-5c-now-at-scale-needed-to-deliver-the-transition/

[IFR20] IFRS Foundation (2020). Consultation Paper on Sustainability Reporting. Retrieved from: https://www.ifrs.org/content/dam/ifrs/project/sustainability-reporting/consultation-paper-on-sustainability-reporting.pdf

[IOSC21] IOSCO (2021). Media Release IOSCO/MR/16/2021. Retrieved from: https://www.iosco.org/news/pdf/IOSCONEWS608.pdf

Trust by Design: rethinking technology risk

In society, there is a growing call for trust in technology. Just think of the data leaks and privacy issues that hit the news almost daily. Organizations tackle these issues through risk management practices, implementing controls and measures to stay within the organization's risk appetite. But the implications go further: last year the Dutch privacy watchdog stated that organizations should be very careful in using Google cloud services due to their poor privacy practices. This is challenging not only for the vendor but also for the clients using the services. Another example is Apple, which is doubling down on privacy in its iCloud and iOS offerings so that users trust it more as a brand, which increases its market share.

This raises several questions: What is trust? When do we decide to trust someone or something? And how can you become trusted? Do we overlook what trust really means, or do we have an innate sense of "trust"? And how does that work for organizations consisting of hundreds or thousands of people and complex business-to-business structures?

Introduction

The questions above are easily overlooked when someone says, "trust me on this one" or "have some trust in this". For example, imagine you are buying a used car and the salesperson says, "trust me, it is in perfect condition": you would still want to look under the hood, open all the doors, ask for a maintenance log and, of course, do a test drive. Then imagine a colleague with whom you have been working for years asks you to trust them on some work-related topic; you tend to trust that person in an instant. That is looking at trust from a personal perspective. For business to business, the easiest things to point at are contracts and formal agreements. But these only go so far and do not protect organizations – or, perhaps to a greater extent, individuals – against every risk. It is important to look not only at whether a solution works well, but also at whether it meets your trustworthiness requirements across the wider value and supply chains. In our hyper-connected world, we rarely see stand-alone technology solutions anymore; we see systems of systems interacting in real time. The ability to govern technology comprehensively can help you avoid potential ecosystem risks while fostering stronger alliances based on shared governance principles.

The concept of trust

In auditing, we use the saying "tell me, show me, prove it" (or something similar), listing three ways to support a claim in order from the lowest to the highest level of assurance. This implies that trust is the lowest level of assurance, which is, strictly speaking, true. However, humanity has built many constructs of trust on which we rely on a daily basis: money, legal structures, law and governments, just to name a few. In the book Guns, Germs, and Steel: The Fates of Human Societies by Jared Diamond ([Diam05]), the concept of creating these "fantasy" constructs in which we put a lot of trust is posited as a cornerstone of human progress.

In the risk management world, an example of trust we often come across is assurance reports. Frameworks such as ISAE, SOC and ISO are trusted to be relevant and properly applied. These are all tools or constructs that we trust to keep our data, investments or processes safe, and they are used as ways of trusting each other in the B2B world. These types of trust constructs are, to an extent, widely accepted, and rightfully so. However, isn't it strange that we put so much trust in the authors of the frameworks or in the independent auditors that validate them? Is this a case of blind faith, or is the trust we put in these types of constructs based on something more that we might take for granted?

The concept of trust is hard to define. You can look it up in a dictionary and, depending on the one you use, the definitions vary. However, you can leave it to academia to relentlessly structure things. In a meta-analysis, a well-founded concept of trust has been derived ([McKn96]). The act of trusting, or trusting behavior, has five different precursors, where trusting intention (1) directly causes the trusting behavior. Trusting intention is the extent to which one party is willing to depend on the other party even though negative consequences are possible. In general, people form trusting intentions based on their beliefs, or trusting beliefs (2). These are formed from current and prior experiences (dispositional trust (3)). In addition, there is a component that is caused by the system (4), and trust in that system (5). The system in this context can be a wide variety of systems depending on the situation, for example an IT system or a management system.

Another conception of trust is that individuals want a degree of certainty that positive events will unfold, and do not like risks that might reduce the certainty of said events. Trust could therefore also be considered a feeling that there is low exposure to risk. This concept of risk exposure is also used in research to understand technology adoption and trusting behavior ([Lipp06]). This research mentions predictability and reliability as two core features that can be used to evaluate trust in technology.

Most of these conceptions of trust are based on personal trust, or the trusting behaviors of an individual ([Zahe98]). There is, however, a distinction between personal trust and organizational trust; the latter is considered to be a function of the combined personal trust of the employees of an organization. This seems to indicate that the predictability, reliability and usability of a technology can increase trust in that technology by reducing the risks relative to the potential benefits of using it. This, however, does not explain how organizations trust technology or other organizations. On the other hand, organizations consist of individuals who work together, so there is a clear connection between personal and organizational trust ([Zahe98]). There are different views on how this works, and the debate on how trust complements or replaces formal agreements and contracts seems to be ongoing. A lot of research has been done into the various facets of trust and how it works between two actors (e.g. [Noot97]). What the literature does agree on is that trust can work as a less formal agreement between organizations and allows for less complex and costly safeguards compared with contracts or vertical integration ([Gula08]).

Based on this, we broadly derive that trust will always play an important and, above all, positive role in organizational and interpersonal relationships (although the exact implications might not be completely understood at this point). It does show us that trust can complement governance models and that operationalizing this concept can be beneficial to organizations on various levels, bringing efficiency gains and maybe even competitive advantage ([Sydo06]). Trust in technology can be achieved by the demonstration of predictable and reliable positive outcomes, and a high degree of usability.

Achieving trust in practice

Now that the theoretical concept has been uncovered to a degree, we can look at the practical aspect and at a framework that is capable of governing how trust should operate. First, we should set some conditions. As the concept of trust is very broad, we cannot cover the entire topic; we will therefore focus on the internal organization and the way organizations adopt technology and perform innovation and change projects.

Usually, these types of changes are governed by risk management processes that try to optimize the benefits while at the same time reducing the risks of non-compliance, security deficiencies, or reduced process effectiveness.

"Risk management" is the term used in most cases, but we also see that "risk management and a lot of other stuff" is sometimes closer to reality. By "other stuff" we mean a lot of discussions on risks and mitigation, and the creation of many controls for things that might not really need controls in the first place. Then we come to testing these, sometimes over-complete, control frameworks, to the degree that some organizations are testing controls for the sake of testing them. Testing is followed by reporting and additional procedures to close the gaps that are identified. Usually, this has to be done on top of regular work instead of having embedded or automated controls that simply operate within the processes themselves. In various industries, regulators impose more and more expectations regarding the way risks are managed. In addition, there are increasing expectations from society on how (personal) data is protected and on the way organizations deliver their services. This includes far-reaching digitization and data-driven processes, required to support customer requirements. These expectations, technological advancements, and the ever-increasing competitiveness in the market create a gap between often agile-driven product delivery and risk management. Unfortunately, we also see that, as a natural reflex, organizations tend to impose even more controls on processes, which further inflates the "other stuff" of risk management.

From a classical risk management standpoint, risks are mostly managed through the execution of controls and the periodic testing of those controls. These controls usually follow frameworks such as ITIL or COSO. In many organizations, this type of work is divided between the so-called first, second and third lines of defense. Recently we have seen that especially the first and second lines of defense are being positioned closer to each other ([IIA20]), which in practice results in more collaborative efforts between the first and second lines. Regardless of how exactly this structure should be implemented, the interaction between the first two lines of defense is increasingly important: organizations' risk management practices often struggle to keep up with the momentum of product teams that release changes, sometimes multiple times a day. These releases can introduce risks, such as privacy or contractual exposures, that can be overlooked by a delivery-focused first line.

Innovations and technology adoptions are performed by the collective intelligence of teams that have a high risk acceptance and focus on getting the highest benefit. Collective intelligence can be broadly defined as the enhanced capacity for thought and decision-making through the combination of people, data, and technology. This not only helps the collective to understand problems and identify solutions – and therefore to decide on the action to take – it also implies constant improvement: the experiences of all involved in the collective are combined to make the next solution or decision better than the last. However, risk management practices need to be embedded within the innovation process to ensure that the risk acceptance of the organization as a whole is not breached. Take, for example, the processing of personal data by a staffing organization. This can be extremely beneficial and lead to competitive advantages if done properly. However, it is not necessarily allowed in the context of European legislation. This is where risk management plays a significant role, for example in limiting the usage of personal data. In innovative teams, however, this is not always perceived as beneficial. Risk management can therefore be seen as a limiting factor that slows down the organization and makes processes murky and bureaucratic. Unfortunately, this compliance pressure is real and present in a lot of organizations (see Figure 1). There is, however, another perspective that we want to highlight.

Figure 1. The issue at hand.

A good analogy is a racing car, whose purpose is to achieve the fastest track times. This is achieved by a lot of different parts all working together. As racing tracks usually consist of higher- and lower-speed corners, a strong engine and a fast gearbox are not enough to be the fastest; good control, with suspension and brakes that can continue to operate under a lot of stress, is just as important as the engine. This is no different for business teams: they need a powerful engine and gearbox to move forward quickly, but the best cars in motorsports have the best brakes. These roles are performed by an organization's risk management practice. Data leaks, hacks and reputational damage can be even more costly than a slow time to market. However, there is an undeniable desire from business teams to become more risk-based rather than compliance-based.

In an agile environment this is equally, or perhaps even more, true. To achieve risk management in an agile world, risk management itself needs to become agile. With ITIL- or COSO-focused periodic testing of controls, the outcomes will lag behind. Imagine testing changes every quarter: before the testing is done and the reports are written, numerous new changes will already have been triggered. With a constantly changing IT landscape, the risks identified through these periodic controls will no longer be an accurate representation of the actual risk exposure. This is called the digital risk gap, and it is growing in a lot of organizations.

To close, or at least decrease the gap, the focus should be on the first line process; the business teams that implement changes and carry forward the innovations. It is most efficient to inject risk management as early in the innovation process as possible. In every step of the ideation, refinement, and planning processes, risk management should at least be in the back of the minds of product owners and product teams.

To achieve this risk awareness and to close the digital risk gap, a framework has been developed that incorporates concepts from agile, software development and risk management to provide an end-to-end approach for creating trust by reducing risks in a proactive and business-focused way. This is what we call Trust by Design; it takes the concepts from the integrated risk management lifecycle (see Figure 2) into practice.

Figure 2. The integrated risk management lifecycle.

The goal of Trust by Design is to achieve risk management practices in an agile world where trust is built by design into the applications and solutions by the people who are building and creating them. With highly iterative and fast-paced first-line teams coming up with cool new ideas almost weekly, the second line struggles to keep up. To change this, we should allow the first-line teams to take control of their own risks and build a system of trust. The second line can digitize the policies into guardrails and blueprints that the first line can use to take all the risk that is needed, as long as the risk appetite of the organization is not breached.

Looking at how trust is achieved, there are three main principles we want to incorporate into the framework. The first is predictability. This can be achieved by standardizing the way risks are managed, because a highly standardized system functions in a highly predictable way. We strongly believe that 80% of the risk procedures within organizations, which seem to be one-off or custom procedures, can in fact be standardized. This is not achieved overnight and can seem an insurmountable task. The Trust by Design framework takes this transition into account by allowing processes to continue as they are at first, but standardizing them along the way. Slowly, standardization will be achieved, and trust will grow because the procedures are much more manageable and can be easily adapted to new legislation or technological advances.

Secondly, there is reliability. A standardized system allows for much better transparency across the board, both on an operational and a strategic level. But determining whether processes are functioning reliably calls for transparency and well-articulated insights into the functioning of these processes. By using powerful dashboards and reporting tools, pockets of high-risk developments can be made visible and even used as steering information. Imagine that an organization is undertaking 100 projects, of which 50 process highly sensitive personal data. Is that in line with the risk appetite, or should it be reconsidered? By adopting the Trust by Design framework, these types of insights become available.

Lastly, there is a usability component: how the business teams perceive the guardrails and blueprints that are imposed. The Trust by Design approach is meant to take risks into account at the start of developing applications or undertaking changes. To achieve this, three basic building blocks need to be defined. The first is the process itself, which also defines the governance, the responsibilities between various functions and the ownership of the building blocks themselves. The second building block is the content, consisting of the risks, control objectives and controls. And the third is where this content is kept: the technology to tap into the content and enable the process.

Following the three components of trust, the Trust by Design framework aims to reduce the complex compliance burden for business teams and to increase transparency for decision-makers and the risk management function within organizations. It targets the two most time-consuming phases in risk management for innovations and developments – determining the scope of the measures to be taken, and implementing those measures – while at the same time creating trust within the organization to leverage the benefits.

In practice, the framework is meant to be embedded within the lifecycle of innovation, development and change. Figure 3 shows the high-level framework overview, which consists of four major stages.

The first is the assessment stage, where business teams use standardized business impact assessments and subsequent deeper-level assessments to determine the risk profile of an innovation or development. These are standardized questionnaires that can be applied to a technology, product or asset throughout the development journey. The results of the questionnaires are used to funnel risk assessments and lead to a set of well-articulated risks, for which control objectives are defined. These control objectives can then be used to determine the risk appetite that the business is allowed, or willing, to take, resulting in controls/measures. In the third stage, these are implemented into the product at the right points in the development cycle. The controls/measures can be seen as a set of functional/technical requirements that are added from a risk-based perspective. By applying this approach, by the time a development is completed or migrated into production the major risks have already been mitigated in the development process itself. By design.

Lastly, there is the outcome stage, where products are "certified" and the assessments, risks, associated measures and their implementation can be transparently monitored.
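
The flow from questionnaire answers to risks, control objectives and concrete measures can be pictured as a small data pipeline. The sketch below is a minimal, hypothetical illustration (all topic names, risks and measures are invented); in practice, the "switchboard" would encode the organization's own guardrails and blueprints.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    control_objective: str
    measures: list[str] = field(default_factory=list)   # become backlog requirements

@dataclass
class Assessment:
    initiative: str
    answers: dict[str, bool]                             # standardized questionnaire
    risks: list[Risk] = field(default_factory=list)

def derive_risks(a: Assessment) -> Assessment:
    """Hypothetical 'switchboard': map questionnaire answers to standard risks and measures."""
    if a.answers.get("processes_personal_data"):
        a.risks.append(Risk(
            description="Processing of personal data",
            control_objective="Comply with data-minimization requirements",
            measures=["Perform DPIA", "Pseudonymize data fields", "Limit retention period"],
        ))
    if a.answers.get("external_facing"):
        a.risks.append(Risk(
            description="Exposure of new endpoints",
            control_objective="Prevent unauthorized access",
            measures=["Threat model the feature", "Add authentication and rate limiting"],
        ))
    return a

assessment = derive_risks(Assessment(
    initiative="New customer self-service portal",
    answers={"processes_personal_data": True, "external_facing": True},
))
for risk in assessment.risks:
    print(risk.control_objective, "->", risk.measures)
```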

Figure 3. High-level Trust by Design framework.

These stages are constantly intertwined with the development processes. As circumstances change, so can the assessments or controls. Moreover, in environments where stringent control of certain risks is not necessary, guidance or blueprints can be part of the operation stage, helping innovation teams or developers with best practices based on the technology being applied or the platform being used.

A case study

At a major insurance company, this approach has been adapted and implemented to enable the first line to take control of its risks. The approach proposed at this organization is based on four steps:

  1. a high-level scorecard for a light-touch initial impression of the risks at hand,
  2. a deep dive into those topics that actually add risk,
  3. transparently implementing the measures to mitigate risks, and
  4. monitoring the process via dashboards and reports.

By using a refined scorecard on fifteen topics that cover the most important risk areas, product teams understand which risks they should take into account during the ideation and refinement processes, but also which risks are not relevant to the feature being developed. This prevents teams from being surprised when promoting a feature to production or, worse, once it is already out in the open. By applying risk-mitigating measures as close as possible to the process step where the risk materializes, a precise risk response is possible, preventing over- or under-control. This requires specific control designs and blueprints that allow teams to implement measures that fit their efforts. The more precise these blueprints are, the less over-controlled teams will be. It is important to note that for some subjects, organizations might decide that over-controlling is preferable to the risk of under-controlling, depending on the risk appetite.

Based on the initial impressions, the scorecard is used to perform a more specific assessment of the risks and the required measures. This deep-level assessment sometimes requires input from experts on topics such as legal or security. In several organizations, a language gap exists between the first and second lines. One of the product owners we spoke to said: "I do not care about risk management, I just want to move forward as quickly as possible. For me this is only paperwork that does not help me accomplish my objectives." Risk management consultants are also often guilty of speaking too much in a second-line vocabulary. It is important to understand the first line and its objectives in order to design an effective scorecard. In a way, this type of scorecard can be seen as the "Google Translate" between the first and second lines. By asking the right questions in the right language, the risks become more explicit, and the required measures to mitigate them can be more specific. This reduces over-controlling and leads to lower costs and more acceptance from the product teams. Communication between the first and second lines is imperative to a successful implementation of a Trust by Design approach. This is also in line with the earlier-mentioned IIA paper, in which the second line becomes a partner that advises the first line, rather than an independent risk management department.

Since true agile is characterized by fast iterations and does not plan too far ahead, using a scorecard with an underlying deep-level assessment helps product teams to quickly adapt to changes in the inherent risk of the change at hand. This "switchboard" approach allows much more agility while still allowing organizations to mitigate risks.

Developing this type of "switchboard", which leads users from high-level risks to more specific risks and the required standard measures, should be done iteratively. We also learned during our implementation that there is no way to make such a system exhaustive. At best, we expect that 80% of the risks can be covered by these standard measures. The remainder will require custom risk mitigation and involvement of the second line.

Implement measures, measure the implementation

Once the specific risks are identified, the agile or scrum process can be followed to take measures into account as if they were (non-)functional requirements. This way, the regular development or change process can be followed, and the development teams can work on these measures in a way that is familiar to them.

The technology used by our insurance client to manage projects is Azure DevOps. It is used both for development and for more "classic" project management. This tooling allowed us to integrate seamlessly with the process teams use in their daily routines. In addition, by structuring the data from the scorecard, risks were made transparent to all lines of defense. Through structured data, it is possible to create aggregations or to slice and dice the data for different levels of management and stakeholders. Using Power BI or the standard Azure DevOps dashboards, decisions regarding risk mitigation and risk acceptance are open for all to see. In addition, the Power Platform can be considered to further automate the processes and use workflows to digitize the risk policies and inject them directly into the change machine of the first line.
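
As an illustration of how a derived measure could be injected into the first line's backlog, the sketch below creates a work item via the Azure DevOps REST API. The organization, project, token, work-item type and field values are placeholders, and the exact endpoint, api-version and fields should be verified against the Azure DevOps documentation; this is not the client's actual implementation.

```python
# Hypothetical sketch: push a risk-mitigating measure into the backlog as an Azure DevOps
# work item. Organization, project, PAT, work-item type and field values are placeholders;
# verify the endpoint and api-version against the current Azure DevOps REST API docs.
import json
import requests

ORG, PROJECT, PAT = "my-org", "my-project", "<personal-access-token>"
url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/"
       "$Product%20Backlog%20Item?api-version=7.0")

patch_document = [
    {"op": "add", "path": "/fields/System.Title",
     "value": "Implement pseudonymization of customer data (risk measure)"},
    {"op": "add", "path": "/fields/System.Tags", "value": "TrustByDesign; Privacy"},
]

response = requests.post(
    url,
    data=json.dumps(patch_document),
    headers={"Content-Type": "application/json-patch+json"},
    auth=("", PAT),  # PAT goes in the password field of basic auth
)
response.raise_for_status()
print("Created work item", response.json().get("id"))
```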

How about the controls?

This leaves us with one more question: how do we connect these measures to the controls in the often exceptionally large and complex control frameworks? Especially since the ITIL/COSO world looks backward, periodically (weekly, monthly, etc.) testing controls using data and information from events that have already passed. Based on this testing, the current, or even future, situation is inferred. Agile is more responsive, in the moment and per occurrence, so this inference can no longer be easily applied. Of course, large organizations cannot simply change their risk universes or control frameworks. So how do we connect these measures to controls?

This is a difficult question to answer and, counterintuitively, one to ignore at first. Once the first line starts to work with the standard measures, gaps between the operational risk management and the control testing world will become apparent. These can then be fixed relatively quickly by adapting the measures to better align with the controls. In other cases, we expect that controls will also need to be reassessed. Especially given the huge control frameworks of highly regulated organizations, this will also be an opportunity to rationalize and cull the sometimes over-controlled nature of organizations. In addition, it presents the opportunity to further explore options for control automation that can lead to further savings and efficiency.

Measure or control?

We make a distinction between controls and measures. In the first stages of implementing the approach, this distinction will be apparent: controls are periodically tested, whereas measures are an explicit part of the development process, just like any (non-)functional requirement. But we expect this distinction to fade, as the change process itself will eventually become the only control needed to mitigate risks.

Conclusion

In our vision, a major part of risk procedures will migrate into the change realm, away from control testing procedures that are static in nature (see Figure 4). As organizations become more software-development oriented (in some shape or form), this allows us to reconsider the approach to testing controls and mitigating risks. Imagine a bank changing the conditions under which someone can apply for a loan: nowadays this involves a lot of manual checks, balances and testing before all kinds of changes are applied, and manual interventions are needed. Since the risks of these changes are not holistically considered during the development process, controls are needed in the production environment to ensure the proper functioning of the systems after the change. The digitized organizations of the future will converge their risk processes into the release cadence and will never have to worry about testing controls other than access controls and security. They know the as-is state, and they know the delta: whatever is being added, changed or removed is done according to the appropriate risk appetite, by design. Finally, Trust by Design will also provide the foundation to develop digital trust metrics. Digital trust captures the confidence that citizens have in the ability of digital technology, service or information providers to protect their interests. This allows internal organizations, their clients and society to trust the processes and technology.

Figure 4. The release process as a pivotal function in managing risks.

References

[Diam05] Diamond, J. M. (2005). Guns, Germs, and Steel: The Fates of Human Societies. New York: Norton.

[Gula08] Gulati, R. & Nickerson, J. A. (2008). Interorganizational Trust, Governance Choice, and Exchange Performance. Organization Science 19(5), 688-708.

[IIA20] Institute of Internal Auditors (IIA) (2020). The IIA's Three Lines Model: An update of the Three Lines of Defense. Retrieved from: https://global.theiia.org/about/about-internal-auditing/Public%20Documents/Three-Lines-Model-Updated.pdf

[Lipp06] Lippert, S. K., & Davis, M. (2006). A conceptual model integrating trust into planned change activities to enhance technology adoption behavior. Journal of Information Science, 32(5), 434-448.

[McKn96] McKnight, D. H., & Chervany, N. L. (1996). The Meanings of Trust. Minneapolis, Minn.: Carlson School of Management, Univ. of Minnesota.

[Noot97] Nooteboom, B., Berger, H., & Noorderhaven, N. G. (1997). Effects of Trust and Governance on Relational Risk. Academy of Management Journal, 40(2), 308-338.

[Sydo06] Sydow, J. (2006). How can systems trust systems? A structuration perspective on trust-building in inter-organizational relations. In R. Bachmann & A. Zaheer (Eds.), Handbook of Trust Research (pp. 377-392). Cheltenham/Northampton: Edward Elgar.

[Zahe98] Zaheer, A., McEvily, B., & Perrone, V. (1998). Does trust matter? Exploring the effects of interorganizational and interpersonal trust on performance. Organization Science, 9(2), 141-159.

An Internal Control Framework in a complex operational organization

Complex organizations need insight into their system of internal control. Based on that insight, the operation of that system can be tested periodically and the system can be continuously adapted to changing circumstances. Increasingly, there is a call to draw up a so-called Internal Control Framework. But what exactly is an Internal Control Framework? In this article we discuss an example from the unruly practice of the largest and most complex business operations service provider in the Netherlands: the Politiedienstencentrum (Police Services Center).

The Politiedienstencentrum

The Politiedienstencentrum (PDC) has the mission of facilitating the entire Dutch police force (the regional units, the Landelijke Eenheid, the Politieacademie and the Landelijke Meldkamer Samenwerking) with high-quality business operations. The position of the PDC within the force is shown in Figure 1. The PDC thereby contributes to good police work. Among other things, the PDC handles salaries, all procurement processes, housing, ICT facilities, vehicles and vessels, weapons and ammunition, clothing and equipment, and external communication, including the production of TV programs. For this purpose, seven services are housed within the PDC: Procurement, Finance, Facility Management, Human Resource Management, Information Management, ICT and Communication. In total, more than 7,000 colleagues work in the PDC every day. The PDC has an annual budget of almost 6 billion euros. The organization is relatively young; it came into being in 2012 through the merger of the regional police forces into a single national police force.

Figure 1. Main structure of the police organization.

Two approaches to an Internal Control Framework

An Internal Control Framework (ICF) is a generally applicable framework in which all types of control measures are presented in their mutual coherence. The best-known model is the COSO framework, which addresses the entire internal control system and is known as COSO II or the Enterprise Risk Management Framework.

Maar wat betekent dat als we spreken van een ICF voor een complexe uitvoeringsorganisatie als het PDC? Die opdracht kreeg de Planning & Control-organisatie begin 2021 van de directie van het PDC. Wij stelden ons de praktische vraag: wat is de verschijningsvorm van een ICF? Is dat een document, en zo ja, hoe dun of dik is dat dan? En wat is dan de opbouw, de structuur? Om op deze vragen een antwoord te krijgen hebben we vele gesprekken gevoerd, binnen en vooral ook buiten de politie. Want waarom zouden wij het wiel opnieuw uit hoeven te vinden? Dat heeft een verrassend resultaat opgeleverd: een ronde langs een aantal grote private en publieke organisaties heeft niet één bruikbaar voorbeeld-ICF opgeleverd dat de politie kon gebruiken. Wel bleken er twee stromingen:

  1. een ICF is de optelsom van de beschrijving van alle beheersingsmaatregelen in de processen; of
  2. een ICF is een hoofdlijnendocument met een beschrijving van de wijze waarop de interne beheersing binnen de organisatie is vormgegeven.

Het PDC heeft hierin een keuze gemaakt, een pragmatische insteek die past bij het huidige ontwikkelniveau van beheersing binnen het PDC. Het ICF is voor het PDC een hoofdlijnendocument geworden waarin voor de gehele organisatie is beschreven hoe de interne beheersing is vormgegeven. Het geeft overzicht, geeft inzicht in samenhang, creëert een gemeenschappelijke taal en verwijst naar bronnen binnen onze organisatie met meer details. En daarmee is het document ook geschikt om op bestuurlijk niveau te bespreken. Het ICF bevat de leidende principes van de interne beheersing binnen het PDC en is kaderstellend voor alle typen bedrijfsprocessen: of het nu om de ontwikkeling van het vastgoed gaat, het beheer van kleding en uitrusting of de productie van tv-programma’s. Het ICF richt zich primair op het lijnmanagement dat verantwoordelijk is voor de interne beheersing en secundair op de controllers die het lijnmanagement daarin ondersteunen. Ten slotte is het ICF van waarde voor onze interne en externe accountant.

Doelstelling van beheersing

Het centrale thema van het ICF is beheersing. Beheersing gaat primair over het bereiken van de gestelde doelen binnen de wettelijke en budgettaire kaders, ondanks de risico’s die dat kunnen belemmeren of verhinderen. Dat kunnen operationele doelen zijn, doelen op het gebied van het naleven van wet- en regelgeving, financiële doelen en verantwoordingsdoelen of beleids- en ontwikkeldoelen. Beheersing dient niet alleen de betrouwbaarheid van de financiële verantwoording, maar bijvoorbeeld ook een goede balans tussen de going-concernactiviteiten en de vernieuwing van de dienstverlening.

Het framework

Met een kleine werkgroep heeft het ons iets meer dan een half jaar gekost om het ICF voor het PDC op te stellen, af te stemmen met een representatieve groep van collega’s (lijnmanagement, controllers, ondersteuners) en hierover op directieniveau een weloverwogen besluit te nemen. De facto was het afstemmingsproces de eerste stap om het document levend te krijgen binnen onze organisatie. Na de formele besluitvorming volgden vele besprekingen op alle managementlagen binnen onze organisatie. Wat enorm hielp is dat het ICF vooral een beschrijving is van wat er al breed in onze organisatie aan interne beheersing is geregeld maar nog nooit in samenhang in kaart was gebracht. Het is slechts op onderdelen iets nieuws dat moet worden geïmplementeerd.

Al met al heeft het PDC na een jaar een werkend en levend ICF voor zijn organisatie als geheel. Ons streven was een toegankelijk document van maximaal 20 pagina’s op te leveren. Uiteindelijk hebben we de beschrijving van de interne beheersing van onze organisatie vormgegeven met 7 hoofdstukken in 30 pagina’s ([sJac21]). In figuur 2 geven wij een korte schets van elk van deze hoofdstukken. Het framework is vooral een beschrijving van hoe het nu is georganiseerd en is slechts in zeer beperkte mate een streefbeeld. Het is nadrukkelijk geen model dat moet worden geïmplementeerd. Het brengt zaken bijeen, consolideert, maakt transparant, geeft sturing.

C-2022-1-Jacob-02-NL-klein

Figuur 2. Belangrijkste ICF-onderdelen. [Klik op de afbeelding voor een grotere afbeelding]

1. Typologie van het PDC

Het klinkt wellicht wat ouderwets, maar Starreveld heeft ons wederom en na al die jaren geholpen ([Berg20], [Leeu14]). De typologie van de bedrijfsfuncties van het PDC is divers en de aard van de beheersing daarmee ook: van het verwerken van salarissen tot aan het beheer van het wagenpark, de logistiek van geweldsmiddelen, de ontwikkeling en het beheer van politiespecifieke informatiesystemen en de productie van een tv-programma als Opsporing Verzocht. We schetsen het PDC in zijn context, de typologie van de verschillende bedrijfsfuncties, de cultuur, de spanning tussen ‘going concern’ en vernieuwing, de strategische ontwikkelingen en de ontwikkelingen op het gebied van IT.

2. Planning & Control

De Planning & Control-cyclus is voor het PDC de kern van de interne beheersing. Van begroting tot aan jaarverantwoording. Onze cyclus is verankerd in die van het korps en de jaarplannen sluiten aan op de doorontwikkeling van onze strategische visie. Vooral het opstellen van het jaarplan van het PDC is uiterst complex: hierin komen alle portefeuilleplannen van de operationele portefeuilles samen met de ‘going concern’-behoefte. Vernieuwing van de politie werkt namelijk langs de lijn van portefeuillemanagement en vrijwel elke vernieuwing in ons operationeel proces heeft impact op de bedrijfsvoering. Inzicht in die impact voorafgaand aan het jaarplanproces is essentieel. De Planning & Control-organisatie van het PDC is de afgelopen jaren vanuit een relatief grote achterstand bezig met een inhaalslag op het gebied van professionalisering, organisatie-inrichting en bemensing.

3. Risicomanagement

Risicogestuurd werken is verankerd in de genen van het politiewerk. Wij doen niet anders. Maar het risicomanagement in de bedrijfsvoering is daarbij korpsbreed achtergebleven. Vandaar dat in ons framework expliciet aandacht is gegeven aan het risicomanagementproces, het risicogestuurd werken, de typologie van risico’s, de wijze van inschatting van risico’s, onze risicobereidheid en de verschillende rollen in het risicomanagement.

4. Procesregie

Het PDC is vanaf de start in 2012 georganiseerd in bedrijfsvoeringskolommen. Elke dienst (HRM, Financiën, Facility Management, et cetera) had primair als taak om zijn eigen processen op orde te brengen. Veel processen spelen zich echter inmiddels niet meer af binnen één kolom. We onderscheiden de algemeen bekende ketens zoals Procurement to Pay (P2P) en ook politiespecifieke ketens als Bevoegd Bekwaam en Toegerust als het gaat om geweldsmiddelen (dienstwapen, wapenstok, pepperspray, et cetera). Het sturen op de verbetering van de PDC-brede werkstromen wordt steeds belangrijker. In het ICF geven we aandacht aan de wijze waarop we dat doen, voor welke werkstromen en wat de rollen zijn van werkstroomeigenaar en werkproceseigenaar.

5. Beheersingsmaatregelen in de processen

Vooral voor onze dienstverlening is dit het belangrijkste onderdeel van het framework. We beschrijven ons stelsel van beheersingsmaatregelen in de processen die ten grondslag liggen aan de producten en diensten van het PDC. Onze processen leggen we vast onder architectuur in BizzDesign. Het streven is om ook alle key controls van de voornaamste processen onder architectuur vast te leggen. We zijn begonnen met het vastleggen van alle key controls gericht op de financiële verantwoording en breiden dat nu uit naar alle processen die primair gericht zijn op het leveren van onze producten en diensten. Helaas bevat BizzDesign geen Governance, Risk & Compliance (GRC)-module. Daar worstelen we nog mee: de wijze waarop we de key control moeten modelleren in BizzDesign is nog niet optimaal. In het ICF duiden we de samenhang tussen procesbeheersing, kwaliteitsborging, kwaliteitsmeting en dienstverlening. Daar bestaat namelijk veel overlap tussen. Beide werelden bevatten methoden en technieken voor het op beheerste wijze leveren van producten en diensten die voldoen aan vooraf gedefinieerde KPI’s. Wij streven ernaar om deze samen te laten komen en organisatorisch te integreren, al was het maar door één taal te creëren. Ten slotte schetsen we de invloed van cultuur en gedrag op onze interne beheersing. We gebruiken daarbij de modellen van Quinn en Cameron ([Came14]) en van prof. Muel Kaptein ([Kapt03]; zie ook [Bast15]). Het model van Quin helpt ons om beheersingsmaatregelen te ontwerpen en te implementeren die aansluiten op onze dominante cultuur. Het model van Kaptein helpt ons om inzicht te krijgen in het potentiële effect van Soft Control- instrumenten op het gewenste gedrag van medewerkers.

6. Voldoen aan wet- en regelgeving

Het PDC heeft te maken met een grote diversiteit aan wet- en regelgeving. Of het nu gaat om alle regelingen op het gebied van arbeidsvoorwaarden, financieel beheer, privacy of de Wet wapens en munitie, het ICF biedt algemene handvatten hoe hiermee om te gaan. Het ICF bevat geen concrete integratie van alle wet- en regelgeving, daarvoor is het ICF te generiek van opzet.

7. Verbetercycli

De politie investeert fors in de lerende organisatie. Dat is nodig om de doelen te behalen. Hiervoor zijn meerdere verbetercycli ingericht die nauw met elkaar samenhangen en op elkaar zijn afgestemd. Ontwikkeling op persoonlijk vlak en binnen het team of de netwerkstructuur vormt de basis van de verbetercycli. Op sector- of dienstniveau wordt gewerkt met een kwaliteitsmanagementsysteem om de kwaliteit van de dienstverlening te toetsen, meten en optimaliseren. En onze systematiek van regelmatige Control Self-Assessments (CSA’s) is een belangrijk verbeterinstrument (zie figuur 3). Als onderdeel van de twee hoofdvragen van de CSA (‘Doen we de juiste dingen?’ en ‘Doen we de dingen goed?’) worden de volgende vragen geadresseerd:

  • Worden kritieke risico’s in voldoende mate beheerst?
  • Handelen we conform criteria en normen?
  • Hebben we juiste stuur- en verantwoordingsinformatie?
  • Sturen we op de gewenste gedrag-risicoverhouding?

Die CSA-systematiek is de basis voor het jaarlijkse In Control Statement van de politie, zoals weergegeven in ons jaarverslag. Op het niveau van het PDC als geheel is de verbetering van de dienstverlening geborgd in de Planning & Control-cyclus.

C-2022-1-Jacob-03-NL-klein

Figuur 3. Objecten van Control Self-Assessment. [Klik op de afbeelding voor een grotere afbeelding]

Noodzakelijke verdieping voor de IT-organisatie

De politie wordt steeds meer een IT-gedreven organisatie. Het aandeel van IT in de totale bedrijfsvoering en in het totale politiewerk neemt elk jaar toe. Dat rechtvaardigt aparte aandacht voor de control op gebied van IT. Ons ICF is te generiek van aard om direct te kunnen toepassen op onze IT-organisatie. Daarom hebben we een specifiek framework gemaakt voor de ontwikkeling van onze politiesystemen en zijn we begonnen aan de implementatie van een framework voor de IT-beheerprocessen. Doel is om binnen het gehele IT-landschap (en de daarbij horende IT-lagen) van de IT-organisatie de risico’s te identificeren en aan de hand van passende maatregelen te beheersen, rekening houdend met geldende wet- en regelgeving. De politie heeft voor het beheer van de IT-processen de Baseline Informatiebeveiliging (BIO, [BIO20]) als framework geadopteerd. De BIO betreft een gestandaardiseerd normenkader gebaseerd op de internationale ISO-standaarden NEN-ISO/IEC 27001:2017 en NEN-ISO/IEC 27002:2017 voor de Nederlandse overheid ter beveiliging van informatie(systemen). De BIO geeft richting aan de concretisering van de ISO-normen naar concrete beheersingsmaatregelen.

Ervaringen van KPMG met de BIO-implementatie bij andere overheidsorganisaties tonen aan dat het implementatietraject begint bij het formuleren van de doelstelling door het topmanagement en de behoefte om de mate van beheersing te vertalen naar het jaarlijkse internal control statement in het jaarverslag, specifiek voor onze IT-processen. De gewenste reikwijdte van dat internal control statement bepaalt de scope van het implementatieproces, dat doorgaans gefaseerd wordt uitgevoerd. Nadat de scope is bepaald, wordt per IT-proces een koppeling gemaakt naar het BIO-framework om te inventariseren welke BIO-maatregelen minimaal geïmplementeerd dienen te zijn.

Vervolgens wordt met een gap-analyse beoordeeld in hoeverre de bestaande beheersingsmaatregelen in lijn zijn met de gewenste beheersingsmaatregelen vanuit de BIO-normen. Bij organisaties met een beperkte mate van volwassenheidsniveaus op het gebied van IT-control is het in de praktijk vaak een grote uitdaging om de koppeling te maken tussen de bestaande maatregelen en de BIO-normen. Bovendien zien we vaak dat het eigenaarschap van de maatregelen en de verantwoordelijkheid voor het testen daarvan niet eenduidig zijn vastgelegd. Om de complexiteit van de mapping tussen bestaande beheersingsmaatregelen en het voldoen aan de BIO-maatregelen aanzienlijk te verminderen, biedt een GRC-tool uitkomst. Een randvoorwaarde is wel om bij de inrichting van deze tool een eenduidige structuur te hanteren bij de vastlegging en documentatie van de beheersingsmaatregel, inclusief een link naar relevante BIO- en ISO-standaarden, waar de beheersingsmaatregelen op zijn gebaseerd. Deze mapping is belangrijk omdat één beheersingsmaatregel meerdere BIO- en ISO-normen kan raken.

Figuur 4 geeft de KPMG-aanpak van de BIO-implementatie op hoofdlijnen weer, onderverdeeld in vijf fasen en vijf thema’s, waarvan het thema ‘beheersingsraamwerk’, ofwel het eigenlijke ICF voor de IT-beheeromgeving, het belangrijkste thema is.

C-2022-1-Jacob-04-NL-klein

Figuur 4. Gefaseerde invoeringswijze van de BIO. [Klik op de afbeelding voor een grotere afbeelding]

De implementatie van de BIO is een complex proces waarvan de duur onder andere wordt beïnvloed door:

  • de gewenste scope van het traject (gehele PDC, gehele IT-organisatie, specifieke IT-onderdelen, et cetera);
  • de volwassenheid van de IT-omgeving en de kwaliteit van de vastlegging van de IT-processen met de daarin opgenomen beheersingsmaatregelen;
  • de mate waarin de bestaande beheersingsmaatregelen al voldoen aan de BIO-normen en al dan niet periodiek worden getoetst;
  • de binnen de organisatie beschikbare kennis van en ervaring met het vastleggen en toetsen van beheersingsmaatregelen in het algemeen en de BIO-standaard in het bijzonder;
  • de mate waarin ook buiten de IT-organisatie een bijdrage wordt geleverd aan het traject;
  • de beschikbare resources (menskracht en geld).

In aanvulling op de BIO is het noodzakelijk om ook specifieke IT-risico’s te beheersen. We zien bij streng gereguleerde organisaties dat specifieke internal control frameworks worden ontwikkeld en geïmplementeerd voor bepaalde IT-processen en/of onderdelen van de IT-infrastructuur, ter beheersing van specifieke IT-risico’s. Bijvoorbeeld voor het proces van het overbrengen van wijzigingen naar de productieomgeving met deployment tooling, zoals Azure DevOps of AWS, of voor het deel van het IT-netwerk waarlangs het verkeer van en naar buiten wordt geregeld.

Conclusie: wat levert een ICF de politie op?

Het ICF is voor het PDC een instrument om de interne beheersing op een hoger plan te brengen. De Planning & Control-organisatie biedt het ICF aan als een menulijst. Er bestaan grote verschillen tussen en ook binnen de diensten van het PDC als het gaat om de typologie van de bedrijfsfuncties en als het gaat om de mate van volwassenheid op het gebied van interne beheersing. Elke PDC-dienst kiest, mede op basis van zijn eigen Control Self-Assessment, een of meerdere onderwerpen uit het ICF waarmee de dienst zijn interne beheersing verder kan verbeteren.

Het ICF is een politiespecifiek framework. Op elk van de onderdelen is verdieping nodig. Zo krijgt de beheersing van onze processen pas echt betekenis als de key controls per proces toegankelijk zijn vastgelegd. Voor onze IT-omgeving is het van belang dat we het ICF verder ontwikkelen op basis van de BIO-normen en daarbij de link leggen naar de verschillende lagen in de IT-infrastructuur en specifieke IT-processen met hoog risico. Op die manier blijven wij de risico’s adequaat beheersen.

Voor het management van het PDC biedt het ICF overzicht en inzicht, en is het een hulpmiddel bij het sturen op de verbetering van de interne beheersing. En we hebben nu een gemeenschappelijke taal als het gaat om interne beheersing. Dat schept rust, regelmaat en (bestuurlijke) reinheid.

Literatuur

[Bast15] Basten, A.R.J., Bekkum, E. van & Kuilman, S.A. (2015). Soft controls: IT General Controls 2.0. Compact 2015/1. Geraadpleegd op: https://www.compact.nl/articles/soft-controls-it-general-controls-2-0/

[Berg20] Bergsma, J. & Leeuwen, O.C. van (2020). Bestuurlijke informatieverzorging: Typologieën. Groningen: Noordhoff Uitgevers.

[BIO20] BIO (2020, 17 juni). Baseline Informatiebeveiliging Overheid, versie 1.04zv. Geraadpleegd op: https://bio-overheid.nl

[Came11] Cameron, K.S. & Quinn, R.E (2011). Onderzoeken en veranderen van organisatiecultuur: gebaseerd op model van concurrerende waarden. Amsterdam: Boom.

[Kapt03] Kaptein, M. & Kerklaan, V. (2003). Controlling the ‘soft controls’. Management Control & Accounting, 7(6), 8-13.

[Leeu14] Leeuwen, O.C. van & Bergsma, J. (2014). Bestuurlijke informatieverzorging: Algemene grondslagen Starreveld. Groningen: Noordhoff.

[sJac21] s’Jacob, R.A. (2021). The Internal Control Framework PDC. [Het ICF is openbaar en op verzoek verkrijgbaar.]

An Internal Control Framework in a complex organization

Complex organizations need insight into their system of internal control. Based on that insight, the operation of the system can be tested periodically and the system can be continuously adjusted to changing circumstances. Increasingly, there is a call to draw up a so-called Internal Control Framework. But what is an Internal Control Framework exactly? In this article we discuss an example from the unruly practice of the largest and most complex business operations service provider in the Netherlands: the Politiedienstencentrum (PDC, Police Services Centre).

The Politiedienstencentrum (PDC, Police Services Centre)

The assignment of the Politiedienstencentrum (PDC, Police Services Centre) is to facilitate the entire Dutch police force (regional units, National Unit, Police Academy and the Landelijke Meldkamer Samenwerking [LMS, National Control Room Cooperation]) with high-quality business operations services. The position of the PDC in the force is depicted in Figure 1. In this way the PDC contributes to sound police work. The PDC handles, among other things, payroll, all purchasing, housing, IT facilities, vehicles and vessels, weapons and ammunition, clothing and equipment, and external communication, including the production of TV programs. To this end, the PDC houses seven departments: Purchasing, Finance, Facility Management, Human Resource Management, Information Management, IT and Communication. Over 7,000 colleagues work at the PDC on a daily basis, and the PDC has an annual budget of nearly EUR 6 billion. The organization is relatively young; it was established in 2012 from the merger of the regional police forces into a single national police force.

Figure 1. Main structure of the police organization.

Two approaches to an Internal Control Framework

An Internal Control Framework (ICF) is a generally applicable framework in which all types of controls are presented in their mutual coherence. The best-known model is the COSO framework, which covers the entire internal control system and is known as COSO II or the Enterprise Risk Management (ERM) Framework.

But what does that mean when we speak of an ICF for a complex organization such as the PDC? At the beginning of 2021, the PDC's board gave that assignment to the Planning & Control organization. We asked ourselves a practical question: what does an ICF actually look like? Is it a document and, if so, how thin or thick is it? And how is it structured? To answer these questions, we held many conversations, within and especially outside the police organization. After all, why would we reinvent the wheel? The result was surprising: a tour of a number of large private and public organizations did not yield a single usable example ICF that the police could adopt. Two schools of thought did emerge, however:

  1. an ICF is the sum of the descriptions of all controls in the processes; or
  2. an ICF is a high-level document describing how internal control within the organization has been designed.

The PDC made a choice: a pragmatic approach that fits the current maturity of control within the PDC. For the PDC, the ICF has become a high-level document that describes for the entire organization how internal control is designed. It provides overview, gives insight into coherence, creates a common language and refers to sources within our organization that contain more detail. That also makes the document suitable for discussion at executive level. The ICF contains the leading principles of internal control within the PDC and sets the frame for all types of business processes: whether it concerns the development of real estate, the management of clothing and equipment or the production of TV programs. The ICF is aimed primarily at the line management that is responsible for internal control and secondarily at the controllers who support line management. Finally, the ICF is of value to our internal and external auditors.

Objective of control

The central theme of the ICF is control. Control is primarily about achieving the set objectives within legal and budgetary restrictions, despite the risks that could obstruct or prevent this. These can be operational goals, compliance and regulatory goals, financial objectives and accountability goals or policy objectives and development goals. Control does not solely serve the reliability of the financial accountability; it also serves a sound balance between the going concern activities and the renewal of services.

The framework

With a small working group, it took us a little over six months to draw up the ICF for the PDC, align it with a representative group of colleagues (line management, controllers, support staff) and reach a well-considered decision at board level. De facto, the alignment process was the first step in bringing the document to life within our organization. After the formal decision-making, many discussions followed at all management levels. What helped tremendously is that the ICF is mainly a description of what had already been arranged organization-wide for internal control but had never been mapped in its entirety. Only a few parts are genuinely new and need to be implemented.

All in all, after one year the PDC has a working and living ICF for its organization as a whole. Our aim was to deliver an accessible document of no more than 20 pages. Ultimately, we have captured the description of our organization's internal control in 7 chapters and 30 pages ([sJac21]). Figure 2 gives a brief sketch of each of these chapters. The framework is above all a description of how control is currently organized and only to a very limited extent a target state. It is emphatically not a model that needs to be implemented. It brings things together, consolidates, makes transparent and gives direction.

Figure 2. Major components of the ICF.

1. Typology of the PDC

It may sound a bit old-fashioned, but Starreveld has helped us once again after all those years ([Berg20], [Leeu14]). The typology of the PDC's business functions is diverse, and so is the nature of their control: from the processing of salaries to fleet management, the logistics of use-of-force equipment, the development and management of police-specific IT systems and the production of TV programs. We sketch the PDC in its context, the typology of the various business functions, the culture, the tension between going concern and renewal, the strategic developments, and the developments in the area of IT.

2. Planning & Control

For the PDC, the Planning & Control cycle is the core of internal control, from budget to annual report. Our cycle is anchored in that of the force as a whole, and the annual plans connect to the further development of our strategic vision. Preparing the PDC's annual plan in particular is extremely complex: this is where all portfolio plans of the operational portfolios come together with the going-concern needs. Renewal of the police force is driven through portfolio management, and practically every renewal of our operational process has an impact on business operations. Insight into that impact prior to the annual plan process is essential. Over the past years, the Planning & Control organization of the PDC has been catching up on a relatively large backlog in the areas of professionalization, organizational design and staffing.

3. Risk management

Risk-based working is anchored in the genes of operational police work; it is what we do. But risk management in business operations has lagged behind force-wide. That is why our framework gives explicit attention to the risk management process, risk-based working, the typology of risks, the way risks are assessed, our risk appetite and the different roles in risk management.

4. Process control

From its start in 2012, the PDC has been organized in business operations columns. Each department (HRM, Finance, Facility Management, etc.) had the primary task of getting its own processes in order. However, many processes no longer take place within a single column. We distinguish well-known chains such as Procurement to Pay (P2P) as well as police-specific chains such as “Bevoegd, Bekwaam en Toegerust” (Authorized, Competent and Equipped) for use-of-force equipment (standard weapon, baton, pepper spray, etc.). Managing the improvement of these PDC-wide workflows is becoming more and more important. In our ICF we describe the way in which we do so, for which workflows, and what the roles of workflow owner and work process owner are.

5. Controls in the processes

For our service delivery in particular, this is the most important part of the framework. We describe our system of controls in the processes that underlie the products and services of the PDC. We record our processes under architecture in BizzDesign, and the aim is to also record all key controls of the main processes under architecture. We started by recording all key controls focused on financial accountability and are now expanding that to all processes that are primarily directed at delivering our products and services. Unfortunately, BizzDesign does not contain a Governance, Risk & Compliance (GRC) module. We are still struggling with that: the way in which we model key controls in BizzDesign is not yet optimal. In the ICF we explain the coherence between process control, quality assurance, quality measurement and service delivery, because there is considerable overlap between them. Both worlds contain methods and techniques for the controlled delivery of products and services that comply with predefined KPIs. We strive to bring them together and integrate them organizationally, if only by creating one language. Finally, using the models of Quinn and Cameron ([Came11]) and of prof. Muel Kaptein ([Kapt03]; see also [Bast15]), we outline the influence of culture and behavior on our internal control. Quinn's model helps us to design and implement controls that match our dominant culture. Kaptein's model helps us to gain insight into the potential effect of soft-control instruments on the desired behavior of employees.

6. Complying with laws and regulations

The PDC has to deal with a wide variety of laws and regulations. Whether it concerns the regulations in the area of employment conditions, financial management, privacy or the Weapons and Ammunition Act, the ICF provides general guidance on how to deal with them. The ICF does not contain a concrete integration of all laws and regulations; it is too generic in design for that.

7. Improvement cycles

The police invests heavily in being a learning organization; this is necessary to achieve our goals. To this end, multiple improvement cycles have been designed that are closely related and coordinated. Development at the personal level and within the team or network structure forms the basis of these improvement cycles. At sector or department level, we work with a quality management system to test, measure and optimize the quality of service delivery. Our system of regular Control Self-Assessments (CSAs) is another important improvement tool (see Figure 3). As part of the two key questions of the CSA (“Are we doing the right things?” and “Are we doing things right?”), the following questions are addressed:

  • Are all critical risks controlled sufficiently?
  • Are we adhering to criteria and norms?
  • Do we steer on the desired behavior-risk ratio?
  • Do we use the right management and accountability reports?

The CSA system is the basis for the annual In Control Statement of the police, as included in our annual report. At the level of the PDC as a whole, the improvement of service delivery is anchored in the Planning & Control cycle.

Figure 3. Objects of the Control Self-Assessment.

A necessary deeper dive for the IT organization

The police is increasingly becoming an IT-driven organization. The share of IT in overall business operations and in police work as a whole increases every year. That justifies separate attention for control in the area of IT. Our ICF is too generic in nature to be directly applicable to our IT organization. That is why we have developed a specific framework for the development of our police systems and have started implementing a framework for the IT management processes. The goal is to identify the risks within the entire IT landscape (and the associated IT layers) of the IT organization and to control them with appropriate measures, taking applicable laws and regulations into account. For the management of its IT processes, the police has adopted the Baseline Informatiebeveiliging Overheid (BIO, Government Information Security Baseline) as its framework. The BIO ([BIO20]) is a standardized set of requirements for the Dutch government, based on the international standards NEN-ISO/IEC 27001:2017 and NEN-ISO/IEC 27002:2017, for protecting information (systems). The BIO provides direction for translating the ISO standards into concrete controls.

KPMG's experience with BIO implementations at other government organizations shows that the implementation starts with top management formulating the objective and the need to translate the degree of control into the annual internal control statement in the annual report, in this case specifically for our IT processes. The desired reach of that internal control statement determines the scope of the implementation process, which is usually carried out in phases. Once the scope has been determined, each IT process is mapped to the BIO framework to inventory which BIO controls need to be implemented as a minimum.

Subsequently, a gap analysis is used to assess to what extent the existing controls are in line with the controls required by the BIO. At organizations with limited IT-control maturity, mapping existing controls to the BIO requirements is often a major challenge in practice. In addition, we often see that the ownership of controls and the responsibility for testing them are not unambiguously documented. A GRC tool can considerably reduce the complexity of mapping existing controls to the BIO requirements. A precondition is that the tool is set up with an unambiguous structure for recording and documenting each control, including a link to the relevant BIO and ISO requirements on which the control is based. This mapping is important because a single control can cover multiple BIO and ISO requirements.
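
To illustrate how such a mapping and gap analysis could be recorded, the sketch below links existing controls to the BIO/ISO requirements they cover and lists the in-scope requirements that remain uncovered. The control names, reference numbers and functions are hypothetical and are not taken from any specific GRC tool or from the BIO itself.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """An existing control, mapped to the BIO/ISO requirements it covers."""
    control_id: str
    description: str
    owner: str
    bio_refs: set = field(default_factory=set)  # hypothetical reference numbers

# Hypothetical extract of the organization's current controls
controls = [
    Control("C-001", "Changes are approved before deployment", "IT Ops", {"12.1.2", "14.2.2"}),
    Control("C-002", "User access rights are reviewed quarterly", "Security", {"9.2.5"}),
]

# Hypothetical subset of BIO requirements marked in scope for this phase
bio_in_scope = {"9.2.5", "12.1.2", "12.4.1", "14.2.2"}

def gap_analysis(controls, bio_in_scope):
    """Return the in-scope BIO requirements not covered by any existing control."""
    covered = set().union(*(c.bio_refs for c in controls))
    return sorted(bio_in_scope - covered)

print(gap_analysis(controls, bio_in_scope))  # ['12.4.1'] -> candidate gap to remediate
```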

Figure 4 outlines the KPMG approach to the BIO implementation, subdivided into five phases and five themes, of which the theme “control framework”, i.e. the actual ICF for the IT management environment, is the most important.

Figure 4. Phased implementation method of the BIO.

The implementation of the BIO is a complex process, the duration of which is influenced by, among other things:

  • The desired scope of the roadmap (the entire PDC, the entire IT organization, specific IT components, etc.);
  • The maturity of the IT environment and the quality of the documentation of the IT processes and the controls included in them;
  • The degree to which the existing controls already comply with the BIO requirements and whether or not they are tested periodically;
  • The knowledge of and experience with documenting and testing controls available within the organization, in general and for the BIO in particular;
  • The degree to which contributions are also made to the roadmap from outside the IT organization;
  • The available resources (staffing and budget).

In addition to the BIO, it is necessary to control specific IT risks. In strictly regulated organizations we see that specific internal control frameworks are developed and implemented for certain IT processes and/or parts of the IT infrastructure in order to control specific IT risks: for example, for the process of transferring changes to the production environment with deployment tooling such as Azure DevOps or AWS, or for the part of the IT network that regulates inbound and outbound traffic.

Conclusion: what is the contribution of an ICF to the Dutch police?

For the PDC, the ICF is an instrument to take internal control to the next level. The Planning & Control organization offers the ICF as a menu. There are large differences between, and also within, the departments of the PDC, both in the typology of their business functions and in their maturity in the area of internal control. Each PDC department chooses, based in part on its own Control Self-Assessment, one or more subjects from the ICF with which it can further improve its internal control.

The ICF is a police-specific framework. Each of its parts needs further deepening. The control of our processes, for example, only truly gains meaning when the key controls per process have been recorded in an accessible way. For our IT environment, it is important that we further develop the ICF on the basis of the BIO requirements and link it to the various layers in the IT infrastructure and to specific high-risk IT processes. In that way, we keep controlling the risks adequately.

For the executive management of the PDC, the ICF offers overview and insight, and it is an aid in steering the improvement of internal control. We now also have a common language where internal control is concerned. That creates calm, regularity and (administrative) cleanliness.

References

[Bast15] Basten, A.R.J., Bekkum, E. van & Kuilman, S.A. (2015). Soft controls: IT General Controls 2.0. Compact 2015/1. Retrieved from: https://www.compact.nl/articles/soft-controls-it-general-controls-2-0/

[Berg20] Bergsma, J. & Leeuwen, O.C. van (2020). Bestuurlijke informatieverzorging: Typologieën (Management Information Systems: Typologies). Groningen: Noordhoff Uitgevers.

[BIO20] BIO (2020, 17 June). Baseline Informatiebeveiliging Overheid, versie 1.04zv (BIO Baseline Information Security Government version 1.04zv). Retrieved from: www.bio-overheid.nl.

[Came11] Cameron, K.S. & Quinn, R.E. (2011). Diagnosing and Changing Organizational Culture: Based on the Competing Values Framework. Jossey-Bass.

[Kapt03] Kaptein, M. & Kerklaan, V. (2003). Controlling the ‘soft controls’. Management Control & Accounting, 7(6), 8-13.

[Leeu14] Leeuwen, O.C. van & Bergsma, J. (2014). Bestuurlijke informatieverzorging: Algemene grondslagen Starreveld (Management Information Systems: General Principles). Groningen: Noordhoff.

[sJac21] s’Jacob, R.A. (2021). The Internal Control Framework PDC. [The ICF document is public and available upon request, only in Dutch.]

Implementing a new GRC solution

Managing risks, controls and compliance has become an integral part of the business operations of any organization. The intent to be demonstrably in control is in most cases on the agenda of the Board of Management. Depending on the business or market sector, pressure to comply or to demonstrate control comes from internal stakeholders as well as external stakeholders such as regulators, shareholders or external auditors ([Lamb17]). At the same time, there is the need to be cost efficient, to curb the growing effort required for risk management and compliance, or even to reduce the cost of control. In this context, GRC (Governance, Risk & Compliance) tooling and platforms are relevant: these revolve around automating the management of internal control and risks and compliance with regulations. Implementing them and achieving the intended benefits can be a challenge. This article gives an overview of the lessons we learned during years of implementing GRC solutions.

Introduction to GRC

To start off, it’s important to understand Governance, Risk & Compliance terminology, why there is a need to automate and in what way the solutions on the market support this need. The GRC concept aims to support:

  • Governance of the organization: the management and monitoring of policies, procedures and measures to enable the organization to function in accordance with its objectives.
  • Risk management: the methodologies and procedures aimed at identifying and qualifying risks and implementing and monitoring measures to mitigate these risks.
  • Compliance: working in compliance with applicable laws and regulations.

There may be multiple reasons to start an implementation project. From practical experience, we know that the following arguments are important drivers:

  • The playing field of GRC expands as a result of increasing regulation, requiring (additional) IT support. Think of the examples in the area of privacy and information security.
  • The execution of control, risk or compliance activities takes place in silos as a result of the organizational structure. This can lead to fragmented, ineffective or duplicated control or compliance measures and to difficulty pinpointing the weak spots in the GRC area within the organization.
  • The current way of conducting GRC activities is supported by an obsolete GRC solution or (worst case) by spreadsheets and email, making it labor intensive to perform activities and a nightmare to report on.
  • The effort spent on GRC activities mainly consists of the hours that employees spend identifying issues rather than resolving them. This is usually a reason to look for automation to replace expensive manual work.

Functionality offered by GRC solutions

In its simplest form, a GRC solution is a database or document archive connected to a workflow engine and reporting capabilities, delivered as a cloud application or on-premises. In its most extensive form, the required functionality is delivered as part of a platform solution that provides capabilities for all processes concerning (supplier) risk management, the implementation of controls and compliance activities. Mobile integration and out-of-the-box data integration capabilities can be included. Many providers offer IT solutions that support the various use cases in the GRC area. These can be grouped into the following categories:

  • Policy and regulations management: maintaining and periodically reviewing internal or external policies or regulations, managing deviations, and identifying whether new regulations might be applicable. Some providers offer (connections to) regulatory change identification.
  • Enterprise, operational or IT risk management: identifying and managing risks and the resulting issues and actions. These risks can arise at enterprise level, be non-financial (operational risk) or focus on IT topics (IT risk).
  • Vendor risk management: this discipline focuses on identifying and mitigating risks in processes outsourced to suppliers and third parties, preventing, for example, the use of (IT) service providers from creating unacceptable risks for the business.
  • Privacy risk management: focused on the risks of processing data and the protection of privacy. Recording this type of risk can require additional measures, as these risks and any associated issues can be sensitive in nature and may require access restricted to risk officers.
  • Access risk management: managing the risks of granting (critical) access to applications and data. Setting up baselines of critical functionality and segregation of duties, and workflows to support the day-to-day addition or removal of users, is usually part of this solution.
  • Continuous monitoring: using structured data (e.g. ERP transactions) to analyze transactional data or configuration settings in order to identify and follow up on control exceptions (a minimal sketch follows this list).
  • Audit management: planning, staffing and documenting the internal audit engagements that are conducted within an organization. Integrated GRC tooling often offers functionality that reuses information stored elsewhere in the GRC solution or platform, enabling efficient and focused execution of audits.
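
As an illustration of the continuous monitoring category above, the sketch below flags potential duplicate invoice payments in a set of ERP-like transactions. The field names and the duplicate rule are hypothetical and would differ per ERP system and per control definition; in a GRC solution, the resulting exceptions would typically feed a follow-up workflow.

```python
from collections import defaultdict

# Hypothetical extract of ERP payment transactions
payments = [
    {"doc": "5100001", "vendor": "V-010", "invoice_ref": "INV-784", "amount": 12500.00},
    {"doc": "5100007", "vendor": "V-010", "invoice_ref": "INV-784", "amount": 12500.00},
    {"doc": "5100012", "vendor": "V-022", "invoice_ref": "INV-121", "amount": 980.50},
]

def duplicate_payment_exceptions(payments):
    """Group payments on (vendor, invoice_ref, amount); more than one hit is a control exception."""
    groups = defaultdict(list)
    for p in payments:
        groups[(p["vendor"], p["invoice_ref"], p["amount"])].append(p["doc"])
    return {key: docs for key, docs in groups.items() if len(docs) > 1}

# Exceptions would be routed into a follow-up workflow for the responsible control owner
print(duplicate_payment_exceptions(payments))
```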

Each of these topics may require a different way of managing risks: the input data differs and, as a result, the performance of measures can be automated to a greater or lesser extent. What they have in common is that most users of the solution require workflows to support the activities and reporting/dashboarding to adequately monitor status and results.

Lessons learned (the eight most valuable tips)

Once a suitable GRC solution has been selected based on the users' requirements, it has to be implemented and adopted by the organization to realize the intended benefits. The technical design and implementation of the solution are important parts of these projects, but there is more to it than just that …

There are many lessons to be learned from GRC implementation projects which apply to the system integrator, the business integrator and the customer. In the remainder of this article we will describe some of the key lessons (pitfalls) of GRC projects we have observed.

The eight key lessons are:

  1. Well-defined GRC roadmap & phased approach
  2. Try to stick to standard out-of-the-box functionality
  3. A common language
  4. The importance of a design authority
  5. Garbage in is garbage out
  6. Importance of business involvement
  7. More than a technology implementation
  8. Business implementation as the key success factor

Lesson 1: GRC roadmap & phased approach

GRC applications often provide a broad range of functionalities that are of interest to different parts of the organization: think of functionalities for risk & control testing, audit management, IT controls, third-party risk management and policy management. Different departments may therefore each start to show an interest in the functionalities provided by a GRC solution. When planning an implementation of GRC software, it is recommended that the organization first draws up a GRC strategy and a GRC roadmap. These are often initiated by a second-line function, and it is of course recommended that the various functions in the organization, such as compliance, information security, risk management and internal audit, are involved in their development.

GRC solutions provide functionality for many use cases. Develop a roadmap to implement these functionalities one by one based on requirements.

The GRC roadmap can be used to prioritize the organization's requests and to determine when a specific capability will be implemented in the GRC solution. Furthermore, it is recommended to define a very clear scope for the GRC project and not to try to implement all functionalities simultaneously. The different functionalities to be implemented all affect the data objects (such as risk, control or issue) in the system; implementing too many functionalities at once can paralyze the design of these objects, with workstreams waiting for each other to finish. A phased (agile) approach allows an organization to reach a steady state sooner, which can then be extended with additional functionalities.

Lesson 2: Stick to the standard

Most GRC solutions provide out-of-the-box GRC functionality, clustered in use cases such as SOX, policy management, audit management and third-party risk management. The out-of-the-box functionality consists of predefined data objects for risks, control objectives, controls, entities and so on. These data objects come with standard fields and field attributes that an organization can use in its solution. Additionally, GRC vendors provide preconfigured workflows that can often be adjusted easily, for example by activating or deactivating a review step. This standard functionality should be used as the reference, allowing only minor tweaks to the standard; doing so will accelerate the implementation of the GRC solution.
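
Purely as an illustration of what staying within the standard can look like, the configuration sketch below deactivates a predefined review step rather than building a custom workflow. The step names and the structure are hypothetical and are not taken from any specific GRC product.

```python
# Hypothetical representation of an out-of-the-box control-testing workflow.
standard_workflow = {
    "name": "control_testing",
    "steps": [
        {"step": "perform_test", "role": "control_tester", "active": True},
        {"step": "review_test", "role": "control_owner", "active": True},  # optional review step
        {"step": "sign_off", "role": "process_owner", "active": True},
    ],
}

# Preferred: configure within the standard, e.g. deactivate the optional review step.
standard_workflow["steps"][1]["active"] = False

# To avoid: adding custom-coded steps and bespoke objects, which complicate upgrades and testing.
```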

Most GRC solutions provide out-of-the-box functionalities. Fine-tuning these to meet the organization's requirements will speed up an implementation project. Do not start from scratch.

Organizations should limit customization of the application's standard configuration. If customers decide to make many changes to the standard functionality, this has an immediate impact on the overall project timeline and the required implementation budget: more time will be needed to prepare the design of the application, to configure and customize it, and to test it. In addition, depending on the GRC solution, a future upgrade of the system may become more complex and therefore more time-consuming, and customizations may not fit the GRC vendor's roadmap, which could require additional effort later on. It is therefore always recommended to stay close to the functionality provided by the off-the-shelf software or SaaS solution and to avoid custom development (custom coding) as much as possible.

Lesson 3: A common language

In addition to the lessons above, it is important that everyone involved in a GRC implementation project has a common, shared understanding of the functionality and scope that will be implemented. It might sound obvious, but in too many cases projects fail due to a lack of shared GRC terminology: what a risk, event, control or issue is, and how these objects are connected.

Different departments or functions within an organization may have a different understanding of a risk, a risk event or an issue (which could itself be a risk). A common, shared terminology and a shared definition of how to document these objects (data quality) will improve the language used within the organization.

Develop standard definitions for the key data objects in GRC. This will facilitate a common language of GRC in your organization.
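
As a minimal sketch of such standard definitions, the data model below fixes what a risk, control and issue are and how they are connected. The fields and the category and severity values are hypothetical examples, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    description: str
    owner: str

@dataclass
class Issue:
    issue_id: str
    description: str
    severity: str            # e.g. "low" / "medium" / "high" (hypothetical scale)
    action_plan: str = ""

@dataclass
class Risk:
    risk_id: str
    description: str
    category: str                                  # e.g. "operational", "IT", "compliance"
    controls: list = field(default_factory=list)   # mitigating controls
    issues: list = field(default_factory=list)     # findings raised against this risk
```

Agreeing on definitions like these, and on how they are documented, before configuration starts prevents each function from modelling its own variant of the same objects in the tool.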

It goes without saying that communication in such projects is key. From the very first step of the project, everyone should be on the same page to eliminate any ambiguity about the terminology used. Is there a shared foundation for the risk function? When each risk function within the organization manages risks in its own manner, using stand-alone solutions and creating analytical insights from different data sources, it is very difficult to arrive at a common risk insight, because the risk functions do not speak the same language.

To prevent this from causing a complete project failure, a common risk taxonomy helps everyone to think, prioritize and communicate about risks in the same way. Without it, key risk indicators can be interpreted in different ways, causing confusion about the required follow-up or about the actual risk the company is facing.

The fact that the organization is already considering the implementation of a GRC solution of course helps to get everyone to the same level of understanding. One of the objectives of the risk function is to align, at a minimum, with the corporate-wide digital transformation goals of the organization. The risk function needs to define an ambition that supports the business while maintaining the objectives and KPIs of a risk function.

Lesson 4: The importance of a design authority

As mentioned before, a GRC application can be used by various departments or functions within an organization, and the stakeholders of these departments will each have a different view on risks, controls, issues and actions, as described in [Beug10]. They may also fear losing decision-making power and autonomy if they have to use an integrated risk management solution.

For a project team implementing the GRC solution, it can be very difficult to navigate and align all these departments and functions efficiently, as each will have its own view on how the GRC application should be configured. Reaching agreement on how the system should be designed and configured can become cumbersome and time-consuming, which affects project timelines. Furthermore, decisions that have already been made may be questioned over and over again by other departments or functions.

A design authority empowered to make the design decisions on behalf of the organization will have a positive impact on designing the application.

Therefore, it is recommended to have an overall design authority in the project that is empowered to take the decisions regarding the roadmap of the project and the design and configuration of the GRC application. This person, often a senior stakeholder in a compliance or risk management function, should have a view of the overall requirements of the various departments and should be authorized to make the overall design decisions for the project. This will result in swift decision making and will have a positive impact on project timelines.

Lesson 5: Garbage in is garbage out

One of the use cases frequently adopted by organizations is “management of internal controls” (for example IT, SOX or financial controls). In this use case, a business entity hierarchy is first created in the GRC application. As a second step, the business processes, risks and controls (and possibly other data) are uploaded into the GRC application and assigned to the entities to which these processes, risks and controls apply.
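
The sketch below illustrates, using hypothetical names and identifiers, what such master data can look like before it is uploaded: a small entity hierarchy plus an assignment of process, risk and control combinations to the entities where they apply.

```python
# Hypothetical business entity hierarchy (parent -> children)
entity_hierarchy = {
    "Group": ["Region EMEA", "Region APAC"],
    "Region EMEA": ["OpCo NL", "OpCo DE"],
    "Region APAC": ["OpCo SG"],
}

# Hypothetical assignment of process / risk / control combinations to entities
control_assignments = [
    {"entity": "OpCo NL", "process": "Purchase to Pay", "risk": "R-P2P-01", "control": "C-P2P-03"},
    {"entity": "OpCo DE", "process": "Purchase to Pay", "risk": "R-P2P-01", "control": "C-P2P-03"},
    {"entity": "OpCo SG", "process": "Record to Report", "risk": "R-R2R-02", "control": "C-R2R-01"},
]

def controls_for_entity(entity, assignments):
    """List the controls assigned to one business entity."""
    return sorted({a["control"] for a in assignments if a["entity"] == entity})

print(controls_for_entity("OpCo NL", control_assignments))  # ['C-P2P-03']
```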

The master data to be uploaded into the GRC application is one of the key components of the GRC system implementation ([Kimb17]), but preparing it can also be very complex and time-consuming due to the number of risks and controls involved and the possible localization effort required.

When (master) data management is not well defined or not set up correctly and in line with the company's needs, this can affect reporting and the efficiency of the functionalities used. If framework integration is not performed properly, it can even lead to duplicate controls being tested.

One of the key objectives of implementing a GRC solution is often to make risk and compliance processes more efficient by removing inefficiencies or manual steps, for example in the Risk & Control Self-Assessment (RCSA) process or in control testing. Quite a few of those inefficiencies, however, lie in the risk and control frameworks that are uploaded into the GRC environment. These frameworks may have been developed years ago and can contain duplicate risks and controls, localized controls or primarily manual controls, or they may be missing important risks and controls due to a changed (regulatory) environment. Issues with reporting on risks and controls can also be caused by the existing framework, for instance when no standard naming conventions are applied or when a central, standardized risk and control library is not available. If such an existing framework is migrated like-for-like into the GRC application, the inefficiencies remain.
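
A simple pre-upload quality check can already surface part of these inefficiencies. The sketch below flags likely duplicate controls by normalizing their descriptions; this is a deliberately crude heuristic, offered only as an illustration, and the control identifiers and texts are hypothetical.

```python
import re
from collections import defaultdict

# Hypothetical extract of a legacy control framework
framework = [
    {"id": "FIN-12", "description": "Invoices are approved before payment."},
    {"id": "NL-07",  "description": "Invoices are approved  before payment"},   # local copy
    {"id": "FIN-31", "description": "Bank reconciliations are reviewed monthly."},
]

def normalize(text):
    """Lowercase, strip punctuation and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def likely_duplicates(framework):
    groups = defaultdict(list)
    for control in framework:
        groups[normalize(control["description"])].append(control["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

print(likely_duplicates(framework))  # [['FIN-12', 'NL-07']] -> candidates to harmonize
```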

Improve the quality of your risk and control framework before implementing a GRC solution.

When organizations are considering implementing a new GRC platform, it might be worthwhile to also reconsider the existing internal control framework for a couple of reasons:

  1. Control framework integration: different departments or functions within an organization will often want to use the GRC application, which means they will share an internal control framework that may contain duplications or overlaps. It is therefore important to harmonize the control frameworks and remove any duplicate risks and controls. The recommended starting point here is a risk assessment that focuses on the key risks in, for example, the processes.
  2. Control framework transformation: some risk and control frameworks are somewhat older and focus only on manual controls. An integrated control framework allows organizations to identify controls that are embedded within applications, such as segregation-of-duties controls or configuration controls.
  3. Automation: GRC applications often provide Continuous Control Monitoring (CCM) functionality or have it on their short-term roadmap. It is therefore possible to identify controls in the framework that can be (partly) automated (assessment, testing) via continuous control monitoring. This becomes especially relevant when an organization has multiple ERP applications, as the business case for CCM then becomes more attractive.

It is recommended to perform these improvement activities on the risk and control framework before the actual implementation of the GRC application, as the improved framework is important input for that implementation. This also prevents duplicate work, because uploading the risk and control framework into the GRC application and assigning the risks and controls to the relevant business units and control owners can be a time-consuming task (especially when many business entities are involved and some control localization work is required).

Lesson 6: Not enough senior management involvement

The lack of senior management involvement and sponsorship has proven fatal for many GRC implementation projects. Without their sponsorship, the end-user community may not feel committed to the new system and the new way of working, and some may even be hostile towards it. It is therefore paramount that management and end users are involved when the GRC project commences. At the start of the project, or even before it kicks off, stakeholders should be informed about the introduction of the GRC solution. The best way to do so is to show them the solution and how it will positively impact their way of working. Once the solution has been demonstrated, they should be able to raise all their questions and remarks so these can be addressed directly. The business stakeholders can then leave that very first meeting on a positive note and spread the word to the rest of the organization.

Make sure the business understands the importance of a GRC project to meet its strategic objectives. Senior management involvement is key to the successful implementation of GRC.

Also throughout the duration of the project, the business should be kept involved with activities regarding the design principles, testing and training. Management should continuously and openly support the GRC implementation to emphasize its advantages and the priority of the project. There is also the risk of losing project resources if the priority of the project is not emphasized enough by senior management.

To increase the level of involvement, communication about the project is essential. The project manager should create a clear communication plan, announce the project to the business and clarify what it will mean for them, with a focus on the advantages of the GRC solution, and report the project status periodically to the stakeholders.

KPMG’s Five Steps to Tackling Culture framework ([KPMG16]) can also help shape the approach to GRC solution implementations, as it addresses the organizational and cultural changes as well.

Lesson 7: More than a technology implementation – the target operating model

Many organizations still treat the implementation of a GRC application as a tool implementation. These organizations focus entirely on the design and implementation of the application itself: the technology component. Such projects are often unsuccessful because the result is a standalone solution.

Figure 1. Target operating model.

When implementing a GRC solution, it is recommended to focus on the following components of the target operating model for GRC (or Risk):

  • The functional processes: an overview of the functional processes in and around the GRC application. These are processes covered by the GRC application (like performing a risk assessment) but can also be processes outside the GRC solution (for example, establishing a risk appetite). It is recommended to document the broader GRC picture, focusing on the different GRC capabilities in an organization (like ERM, policy management, internal control, audit management and, for example, third-party risk). This process overview provides the organization with detailed information on the existing risk processes, which may be included in the GRC solution (and are input to the GRC roadmap).
  • People: once the processes have been elaborated, the relevant roles can be assigned to processes and process steps. This is valuable information for the change management workstream. Based on the identified roles, the different trainings and reporting lines can be described.

Developing a comprehensive target operating model for GRC ensures that all GRC-relevant processes are documented, defined and implemented in the organization, not only the processes supported by the GRC application.

  • The same process model can be used to describe which activities are performed where in the organization (the service delivery model). Certain processes might be performed centrally in an expertise center (like maintaining the central organizational hierarchy and the risk and control frameworks), while other processes might be performed locally (like the assessment of a control by a local control owner). Documenting the service delivery model provides valuable information, especially when parts of the organization perform or test controls on behalf of other parts of the organization.
  • The technology component of the model relates, of course, to the implementation of the GRC application itself. It can be linked to the process model to identify the processes that are not (yet) supported by the GRC tool.
  • Performance and insight: often forgotten during the implementation of a GRC tool, yet it is very important to think upfront about the information the organization wants to get out of the solution. If this is not taken into account when designing the application and the data to be uploaded and assigned in the system, there is a reasonable chance that not all relevant reporting requirements (slicing and dicing the data) can be met by the solution.
  • Governance: it is important to define the governance model for the solution. An example we often see is that the solution is configured, but there is no process in place for handling changes to the tool or requesting enhancements. The controls concerning the GRC tool itself are also not documented and performed. We have seen too many systems where workflows are sent to control owners to assess controls, but there is no real monitoring of whether those workflows are actually followed up and closed in the system, or whether they have been planned accurately (and completely); a minimal sketch of such a monitoring check follows this list.
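As an illustration of the kind of monitoring that is often missing, the sketch below flags control assessment workflows that are overdue, assuming a hypothetical export of workflow data (the file name and columns are assumptions); in practice such a check would run against the GRC application's own reporting layer.

```python
# Minimal sketch of a governance check on GRC workflows. The CSV export
# and its columns (workflow_id, owner, status, due_date) are hypothetical;
# real monitoring would use the GRC application's reporting capabilities.
import csv
from datetime import date

def overdue_workflows(path: str, today: date | None = None) -> list[dict]:
    """Return workflows that are past their due date and not yet closed."""
    today = today or date.today()
    overdue = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            due = date.fromisoformat(row["due_date"])     # expects YYYY-MM-DD
            if row["status"].lower() != "closed" and due < today:
                overdue.append(row)
    return overdue

if __name__ == "__main__":
    for wf in overdue_workflows("control_workflows.csv"):
        print(f"Overdue workflow {wf['workflow_id']} (owner: {wf['owner']})")
```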

Lesson 8: Last but not least: business implementation as the key success factor

Where lesson 6 focused on the more top-down involvement of senior management and sponsorship, it is also important to focus on the business and end users, as they will be working with the newly implemented GRC solution. The implementation of a GRC solution has a technical aspect: an IT system is designed and implemented, a repository of risks and controls is set up and reports are developed. However, the technical implementation alone does not make the GRC solution a success. The solution has to be used by the business, and if they are not on board with the project, it can be considered a failure, especially because executing control activities is often seen as a burden by people in the business (1st line).

A key component of a GRC implementation is business implementation. Make sure that the business accepts the solution and feels comfortable to identify new risks, raise issues with controls or proactively raise other issues or deficiencies. Only then will the organization reap the benefits of GRC.

The introduction of a GRC solution also means a new way of working. In the rush to get a GRC implementation project going, management often jumps straight into the technical implementation without thinking about the organizational changes that need to take place as well. There is no cutting corners here; eventually, the business needs to be on board with the GRC solution in order to make it a success. Make sure end users are involved at every phase of the project, especially in the design, testing and training. When end users are involved in the design phase, they will feel a sense of ownership as they are asked about the features and functionalities of the solution. During testing, the business will be responsible for accepting the developed solution against the design that they helped set up. Through training, they will become knowledgeable about the solution and the new way of working. Enabling key users to facilitate end user training sessions (train the trainer) will increase the sense of ownership even more within the organization, lowering the barrier for end users to reach out with questions on how to use the solution and its new process.

Besides training on the application, the organization should also explain why the application is being implemented (the importance of compliance, of being in control, of doing business with the right third parties and so on) and what is expected of the people using the GRC solution. Business users should be encouraged to do the right thing. It should be acceptable to state that the design, implementation or operating effectiveness of a control is NOT adequate. This allows the organization to further improve its internal control environment. Raising an issue in a GRC solution also allows a company to further strengthen its control environment. If users just close workflows to get them off their worklist, the GRC tool delivers limited benefit. As auditors, we have seen many examples in GRC tooling where controls were only rated as done or completed or “risk identified but mitigated”. These kinds of comments just raise more questions, and the GRC application will have limited benefit to the organization. If control owners simply enter the rating “effective” because they are afraid that a control cannot be rated as ineffective, the GRC solution also has limited benefit.

Besides the technical component, users should be trained on what is expected of them. What is the expected evidence of a control execution or test? What is the rationale behind the answers in an RCSA questionnaire? What does good test evidence look like? If the users of a GRC solution understand why they are using the software and what kind of input is expected in the GRC tool, the organization will benefit from the solution. The business implementation component is therefore the key success factor in implementing a GRC solution.

Conclusion

GRC projects can become very complex and long-running projects for organizations, but there are lessons from other projects that have a positive impact on them. The lessons learned which, if applied during an implementation, will allow a GRC project to run more smoothly are not very different from the lessons learned from other IT projects. Business implementation and business involvement in a GRC project are the key success factors of implementing a GRC solution. This is the workstream that will make sure that business users adopt the GRC solution and use it as intended: as a key component of the internal control environment of the organization.

References

[Beug10] Beugelaar, B. et al. (2010). Geslaagd GRC binnen Handbereik. Compact 2010/1. Retrieved from: https://www.compact.nl/articles/geslaagd-grc-binnen-handbereik/

[Kimb17] Kimball, D.A. et al. (2017). A practical view on SAP Process Control. Compact 2017/4. Retrieved from: https://www.compact.nl/articles/a-practical-view-on-sap-process-control

[KPMG16] KPMG (2016). Five steps to tackling culture. Retrieved from: https://assets.kpmg/content/dam/kpmg/co/pdf/co-17-01-09-hc-five-steps-to-tackling-culture.pdf

[Lamb17] Lamberiks, G.J.L. et al. (2017). Trending Topics in GRC tooling. Compact 2017/3. Retrieved from: https://www.compact.nl/articles/trending-topics-in-grc-tooling

Mastering the ESG reporting and data challenges

Companies are struggling with how to measure and report on their Environmental, Social and Governance (ESG) performance. How well a company performs on ESG aspects is becoming more important for investors, consumers, employees, business partners and therefore management. This article tries to shed light on how companies can overcome the ESG reporting (data) challenges. A structured nine-step approach is introduced to give companies guidance on how to tackle them.

Introduction

Environmental, Social and Governance (ESG) aspects of organizations are important non-financial reporting topics. Organizations struggle with how to measure ESG metrics and how to report on their ESG performance and priorities. Many organizations have not yet defined a corporate-wide reporting strategy for ESG as part of their overall strategy. Other organizations are already committed to ESG reporting and are struggling to put programs in place to measure ESG metrics and to steer their business, as it is not yet part of their overall heartbeat. Currently, most CEOs are weathering the COVID storm and are managing their organization’s performance by trying to outperform their financial targets. Besides the actual link between climate change and our weather, the metaphor applies to sustainability as well: the waves are getting higher and the storm is intensifying rapidly, as ESG is becoming the new standard for evaluating an organization’s performance.

How well a company performs on ESG aspects is becoming an increasingly important performance metric for investors, consumers, employees, business partners and therefore management. Next to performance, information about an organization’s ESG metrics is also requested by regulators.

Investors are demanding ESG performance insights. They believe that organizations with a strong ESG program perform better and are more stable. On the other hand, poor ESG performance poses environmental, social, and reputational risks that can damage the company’s performance.

Consumers increasingly want to buy from organizations that are environmentally sustainable, demonstrate good governance, and take a stand on social justice issues. They are even willing to pay a premium to support organizations with a better ESG score.

Globally, we are seeing a war for talent, with new recruits and young professionals looking for organizations that have a positive impact on ESG aspects, because that is what appeals to them most and what they would like to contribute to. Companies that take ESG seriously will rank at the top of the lists of best places to work and will find it easier to retain and hire the best employees.

Across the value chain, organizations will select business partners that are, for example, the most sustainable and reduce the overall carbon footprint of the entire value chain. Business partners that focus solely on creating value at the lowest cost will be re-evaluated because of ESG. Organizations that do not contribute to a sustainable value chain may find it difficult to continue their business in the future.

The ESG KPIs are only the tip of the iceberg

The actual reporting on ESG key performance indicators (KPIs) is often only a small step in an extensive process. All facets can be compared to an iceberg, where only certain things are visible to stakeholders – the “tip of the iceberg”: in this case the ESG KPIs or report. What is underneath the water, however, is where the challenges arise. The real challenge of ESG reporting lies in the complex variety of people, process, data and system aspects that need to be taken into account.

Figure 1. Overview of aspects related to ESG reporting.

In this article, we will first further introduce ESG reporting, including the insights required by ESG stakeholders. After this, we will elaborate on the ESG data challenges, and we will conclude with a structured nine-step approach to mastering the reporting and data challenges, covering the “below the waterline” aspects of ESG reporting.

ESG is at the forefront of the CFO agenda

The rise in the recognition of ESG as a major factor within regulation, capital markets and media discourse has led CFOs to rethink how they measure and report on ESG aspects for their organization.

Finance is ideally positioned in the organization to track the data needed for ESG strategies and reporting. Finance also works across functions and business units, and is in a position to lead an organization’s ESG reporting and data management program. The (financial) business planning and analysis organization can connect ESG information, drive insights, and report on progress. Finance has the existing discipline, governance and controls to leverage for the required collation, analysis and reporting of ESG data. Therefore, we generally see ESG as an addition to the CFO agenda.

ESG as part of the “heartbeat” of an organization

Embedding ESG is not solely about producing a new non-financial report. It is also about understanding the drivers of value creation within the organization, enabling business insights and managing sustainable growth over time. Embedding ESG within an organization should affect decision-making and, for example, capital allocation.

The following aspects are therefore essential to secure ESG as part of the company’s heartbeat:

  • Alignment of an organization’s purpose, strategy, KPIs and outcomes across financial and non-financial performance.
  • Ability to set ESG targets alongside financial performance targets and track yearly/monthly performance with drill-downs, target versus actual and comparisons across dimensions (e.g. region, site, product type); a minimal tracking sketch follows this list.
  • Automated integration of performance data to complete full narrative disclosures for internal and external reporting and short-term managerial dashboards.
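As a simple illustration of target-versus-actual tracking across dimensions, the sketch below aggregates a small, purely hypothetical in-memory dataset; in practice these figures would come from the organization's consolidation or performance management platform.

```python
# Minimal sketch of target-vs-actual ESG tracking across a dimension.
# The records below are illustrative; real data would be sourced from
# the organization's consolidation or performance management systems.
from collections import defaultdict

# Hypothetical monthly CO2 figures (tonnes) per region
records = [
    {"region": "EMEA", "month": "2022-01", "target": 120, "actual": 135},
    {"region": "EMEA", "month": "2022-02", "target": 120, "actual": 118},
    {"region": "APAC", "month": "2022-01", "target": 90,  "actual": 97},
]

def variance_by(dimension: str) -> dict:
    """Aggregate target, actual and variance per value of the chosen dimension."""
    totals = defaultdict(lambda: {"target": 0, "actual": 0})
    for r in records:
        totals[r[dimension]]["target"] += r["target"]
        totals[r[dimension]]["actual"] += r["actual"]
    return {k: {**v, "variance": v["actual"] - v["target"]} for k, v in totals.items()}

if __name__ == "__main__":
    for region, figures in variance_by("region").items():
        print(region, figures)
```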

Embedding ESG into core performance management practices is about integrating ESG across the end-to-end process – from target setting to budgeting, through to internal and external reporting to ensure alignment between financial and non-financial performance management.

An important first step is articulating the strategy: translating the organization’s strategic vision into clear measures and targets, in order to focus on executing the strategy and achieving business outcomes. ESG should be part of the purpose of the organization and integrated into its overall strategy. In order to achieve this, organizations need to understand ESG and the impact of the broad ESG agenda on their business and environment. They need to investigate which ESG elements are most important to them, and these should be incorporated into the overall strategic vision.

Many organizations still run their business using legacy KPIs, or “industry standard” KPIs, which allow them to run the business in a controlled manner. However, this does not necessarily contribute to the strategic position that the organization is aiming for. These KPI measures are not just financial but look at the organization as a whole. Although the strategy is generally focused on growing shareholder value and profits, the non-financial and ESG measures underpin these goals, from customer through operations and people/culture to the relevant ESG topics.

The definition of the KPIs is critical to ensure linkage to underlying drivers of value and to ensure business units are able to focus on strategically aligned core targets to drive business outcomes. When an organization has (re-)articulated its strategy and included ESG strategic objectives the next step is to embed it into its planning and control cycle to deliver decision support.

In addition to defining the right ESG metrics to evaluate the organizational performance, organizations struggle with unlocking the ESG relevant data.

Data is at the base of all reports

With a clear view of the ESG reporting and KPIs, it is time to highlight the raw material required, which lies deep below sea level: data. Data is sometimes referred to as the new oil, or as an organization’s most valuable asset. But most organizations do not manage data as if it were an asset; not in the way they manage their buildings, cash, customers or, for example, their employees.

ESG reporting is even more complex than “classic” non-financial reporting

A first challenge with regard to ESG data is the lack of a standardized approach to ESG reporting. Frameworks and standards have been established to report on ESG topics like sustainability, for example the Global Reporting Initiative (GRI) and the Sustainability Accounting Standards Board (SASB) standards, the latter being widely used in financial services organizations. However, these standards are self-regulatory and lack universal reporting metrics, and therefore do not define a universal data need.

Even if one global standard were in place, companies would still face challenges when it comes to finding the right data, because data originates from various parts of the organization, such as the supply chain and human resources, but also from external vendors and customers ([Capl21]). The absence of standard approaches leads to a lack of comparability among companies’ reports and confusion among companies about which standard to choose. The KPI definition must be clear in order to define the data needed.

In April 2021, the European Commission adopted a proposal for a Corporate Sustainability Reporting Directive (CSRD) which radically improves the existing reporting requirements of the EU’s Non-Financial Reporting Directive (NFRD).

Besides a lack of a standardized approach, more data challenges on ESG reporting arise:

  • ESG KPIs often require data that has not been managed until now. Financial data is documented, has an owner and is supported by data lifecycle management processes and tooling, but ESG data mostly is not. This affects the overall data quality, for example.
  • Required data is not available. As a consequence, the required data needs to be recorded, if possible reconstructed or sourced from a data provider.
  • Data collectors and providers’ outputs are unverified and inconsistent which could affect the data quality.
  • Processing the data and producing the ESG output is relatively new compared to financial reporting and is in many cases based on end-user computing tools like Access and Excel, which can lead to inconsistent handling of data and errors.
  • The ESG topic is not only about the environment. The challenge is that a company may need different solutions for different data sources (e.g. CR360 or Enablon for site-based reporting (HSE) and another for HR data, etc.).

Requirements like the CSRD make it clearer for organizations what to report on, but at the same time it is sometimes not clear to companies how the road from data to report is laid out. Given the data challenges mentioned above, it is important for organizations to structure a solid approach to tackling the ESG challenges, which is introduced in the next section.

A structured approach to deal with ESG reporting challenges

The required “below the waterline” activities can be summarized in nine sequential steps to structurally approach these ESG challenges. Using a structured approach does not solve everything, but it provides a basis for developing the right capabilities and moving in the right direction.

Figure 2. ESG “below the waterline” steps.

This approach consists of nine sequential steps or questions covering the People, Processes, Data and Source systems & tooling facets of the introduced iceberg concept. The “tip of the iceberg” aspects with regard to defining and reporting the required KPI were discussed in the previous paragraphs. Let’s go through the steps one by one.

  1. Who is the ESG KPI owner? Ownership is one of the most important measures in managing assets. Targets and related KPIs are generally assigned to specific departments, and progress is measured using a set of KPIs. When it comes to ESG reporting, this assignment is often less clear. Having a clear understanding of which department or role is responsible for a target also leads to a KPI owner. Identifying the KPI owner is often challenging, since it can be vague who is responsible for the KPI. A KPI owner has multiple responsibilities. First and foremost, the owner is responsible for defining the KPI. Second, the KPI owner plays an important role in the change management process. Guarding consistency is a key aspect, as reports often look at multiple moments in time. When two timeframes are compared, the same measurement must be used to say something about a trend.
  2. How is the KPI calculated? Once it is known who is responsible for a KPI, a clear definition of how the KPI is calculated should be formulated and approved by the owner. This demands a good understanding of what is measured, but more importantly how it is measured. Setting definitions should follow a structured process including logging the KPI and managing changes to the KPI, for example in a KPI rulebook.
  3. Which data is required for the calculation? A calculation consists of multiple parts that all have their own data sources and data types. An example calculation of CO2 emissions per employee needs to look at emission data as well as HR data (a minimal sketch combining steps 2, 3 and 7 is shown after this list). More often than not, these data sources have different update frequencies and many ways of registering data. In addition to the differences in data types, data quality is always a challenge. This, too, starts with ownership. All important data should have an owner who is, again, responsible for setting the data definition and for improving and upholding the data quality. Without proper data management measures in place ([Jonk11]), data quality cannot be measured and improved, which has a direct impact on the quality of the KPI outcome.
  4. Is the data available and where is it located? Knowing which data is needed brings an organization to the next challenge: is the data actually available? Next to availability, the granularity of the data is an important aspect to keep in mind. Is the right level of detail available, for example per department or product, to provide the required insights? A strict data definition is essential in this quest.
  5. Can the data be sourced? If the data is not available, it should be built or sourced. An organization can start registering the data itself, or the data can be sourced from third parties. Having KPI and data definitions available is essential in order to set the right requirements when sourcing the data. Creating (custom) tooling or purchasing third-party tooling to register own or sourced data is a related challenge. It is expected that more and more ESG-specific data solutions will enter the market in the coming years.
  6. Can the data connection be built? Nowadays, many (ERP) systems offer integrated connectivity as standard; for many other systems, however, this is not a given. It is therefore relevant to investigate how the data can be retrieved. Data connections can have many forms and frequencies, such as streaming, batch or ad hoc. Depending on the type of connection, structured access to the data should be arranged.
  7. Is the data of proper quality? If the right data is available, its quality can be determined, with the data definition as the basis. Based on data quality rules, for example on the required syntax (should a value be a number or a date?), data quality can be measured and improved. Data quality standards and other measures should be made available within the organization in a consistent way, in which, again, the data owner plays an important part.
  8. Can the logic be built? Building reports and dashboards requires a structured process in which the requirements are defined, the logic is built in a consistent and maintainable way and the right tooling is deployed. In this step, the available data is combined to create the KPI according to the KPI definition, with the KPI owner giving final approval of the outcome.
  9. Is the user knowledgeable enough to use the KPI? Reporting the KPI is not a goal in itself. It is important that the user of the KPI is knowledgeable enough to interpret the KPI in conjunction with other information and its development over time, in order to define actions and adjust the course of the organization if needed.
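To make steps 2, 3 and 7 concrete, the sketch below calculates a hypothetical “CO2 emissions per FTE” KPI from two assumed extracts and applies a basic syntax check as a data quality rule; the file names, column names and quality rule are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative sketch: calculate "CO2 emissions per FTE" from an assumed
# emissions extract and an assumed HR extract, applying a simple data
# quality rule (values must be numeric) before the calculation is made.
import csv

def load_numeric_column(path: str, column: str) -> list[float]:
    """Load one column and reject rows that fail the numeric syntax rule."""
    values, rejected = [], 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                values.append(float(row[column]))
            except (ValueError, KeyError):
                rejected += 1                  # to be reported to the data owner
    if rejected:
        print(f"Data quality warning: {rejected} rejected rows in {path}")
    return values

def co2_per_fte(emissions_file: str, hr_file: str) -> float:
    total_emissions = sum(load_numeric_column(emissions_file, "co2_tonnes"))
    total_fte = sum(load_numeric_column(hr_file, "fte"))
    return total_emissions / total_fte

if __name__ == "__main__":
    print(f"CO2 per FTE: {co2_per_fte('emissions.csv', 'hr.csv'):.2f} tonnes")
```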

Based on this nine-step approach, the company will have a clear view of all the challenges of the iceberg and the steps that need to be taken to be able to report and steer on ESG. The challenges can be diverse, ranging from defining the KPIs to tooling, sourcing of the data and data management. Structuring the approach helps the organization now and going forward, as the general consensus is that the reporting and therefore the data requirements will only grow.

Conclusion

The demand to report on ESG aspects is diverse and growing. Governments, investors, consumers, employees, business partners and therefore management are all requesting insights into an organization’s ESG metrics. It seems like the topic is on the agenda of every board meeting, as it should be. To be able to report on ESG-related topics, it is important to know what you want to measure, how/where/whether the necessary data is registered, and to have a workable approach towards reporting. ESG KPIs cannot be a one-off exercise, as the scope of ESG reporting will only grow; the ESG journey has only just begun. It is a journey that invites organizations to dig deeper into the subject and mature further, and for that a consistent approach is key.

The D&A Factory approach of KPMG ([Duij21]) provides a blueprint architecture to utilize the company’s data. KPMG’s proven Data & Analytics Factory combines all key elements of data & analytics (i.e., data strategy, data governance, (master) data management, data lakes, analytics and algorithms, visualizations, and reporting) to generate integrated insights like a streamlined factory. Insights that can be used in all layers of your organization: from small-scale optimizations to strategic decision-making. The modular and flexible structure of the factory also ensures optimum scalability and agility in response to changing organizational needs and market developments. In this way, KPMG ensures that organizations can industrialize their existing working methods and extract maximum business value from the available data.

References

[Capl21] Caplain, J. et al. (2021). Closing the disconnect in ESG data. KPMG International. Retrieved from: https://assets.kpmg/content/dam/kpmg/xx/pdf/2021/10/closing-the-disconnect-in-esg-data.pdf

[Duij21] Duijkers, R., Iersel, J. van, & Dvortsova, O. (2021). How to future proof your corporate tax compliance. Compact 2021/2. Retrieved from: https://www.compact.nl/articles/how-to-future-proof-your-corporate-tax-compliance/

[Jonk11] Jonker, R.A., Kooistra, F.T., Cepariu, D., Etten, J. van, & Swartjes, S. (2011). Effective Master Data Management. Compact 2011/0. Retrieved from: https://www.compact.nl/articles/effective-master-data-management/

Privacy audits

The importance of data privacy has increased enormously in the last couple of years, and the introduction of the General Data Protection Regulation (GDPR) has increased it even further. Data privacy is an important management aspect and contributes to sustainable investments. It should therefore take a prominent role in GRC efforts and ESG reporting. This article discusses the options for performing privacy audits and the relevance of their outcomes.

Introduction

In recent years, there have been various developments with regard to data privacy. These developments, and especially the introduction of the General Data Protection Regulation (GDPR), have forced organizations to become more aware of the way they process personal data. Not only organizations have been confronted with these developments; individuals who entrust organizations with their data have also become more aware of the way their personal data is processed. As a result, the need among organizations to demonstrate compliance with data privacy laws, regulations and other data privacy requirements has increased.

Since data privacy is an important management aspect and contributes to sustainable investments, it has taken a prominent role in Governance, Risk management & Compliance (GRC) efforts and Environmental, Social & Governance (ESG) reporting. GRC and ESG challenge organizations to approach the way they deal with personal data from different angles and to report on their efforts. However, because of the complexity of privacy laws and regulations and a lack of awareness, demonstrating the adequacy of their privacy implementation appears to be quite a challenging task for organizations. A lot can be gained in determining whether controls are suitably applied in this regard, since there are no straightforward methods to provide this insight. The poor state of awareness and knowledge on this topic makes this even more complicated.

This article explains the criticality of the GDPR in obtaining compliance, followed by a description of the various ways in which privacy compliance reporting can be performed. In addition, the role of privacy audits, their value, and their relationship to GRC & ESG are explained, before some closing thoughts on the development of the sector are provided. The key question in this article is whether privacy audits are relevant for GRC & ESG.

Criticality of the GDPR in obtaining compliance

Although the GDPR has been in force since May 2018, it is still a huge challenge for organizations to cope with. This privacy regulation has not only resulted in the European Commission requiring organizations to prove their level of compliance, but has also increased the interest of individuals in how organizations process their personal data. The most important principles of the GDPR, as listed in Article 5, are:

  1. Lawfulness, Fairness, and Transparency
  2. Limitations on Purposes of Collection, Processing & Storage
  3. Data Minimization
  4. Accuracy of Data
  5. Data Storage Limits and
  6. Integrity and Confidentiality

The rights that individuals have as data subjects are listed in Chapter 3 of the GDPR and are translated into requirements that should be met by organizations, such as:

  1. The right to be informed – organizations should be able to inform data subjects about how their data is collected, processed, stored (incl. for how long) and whether data is shared with other (third) parties.
  2. The right to access – organizations should be able to provide data subjects access to their data and give them insight in what personal data is processed by the organizations in question.
  3. The right to rectification – organizations must rectify personal data of subjects in case it is incorrect.
  4. The right to erasure/the right to be forgotten – in certain cases, such as when the data is processed unlawfully, the individual has the right to be forgotten which means that all personal data of the individual must be deleted by the data processor.
  5. The right to restrict processing – under certain circumstances, for example, when doubts arise about the accuracy of the data, the processing of personal data could be restricted by the data subject.


A starting point for any organization to determine whether, and which, privacy requirements apply to it is a clear view of the incoming and outgoing flows of data and the way data is processed within and outside the organization. If personal data is processed, an organization should have a processing register. Personal data is hereby defined as any data that can be related to a natural person. In addition, the organization should perform Data Privacy Impact Assessments (DPIAs) for projects that implement new information systems to process sensitive personal data and where a high degree of privacy protection is needed.

The obligation to have a data processing register and the obligation to perform DPIAs ensure that the basic principles required by the privacy regulation for the processing of personal data (elaborated in Chapter 2 of the GDPR) and the privacy controls have the right scope. Furthermore, these principles ensure that an organization processes personal data in a legitimate, fair and transparent way. Organizations should bear in mind that processing personal data is limited to the purpose for which the data has been obtained. All personal data that is requested should be linkable to the initial purpose. The latter relates to data minimization, which is also one of the basic principles of the GDPR. Regarding the storage of data, organizations should ensure that data is not stored longer than necessary. The personal data itself should also be accurate and must be handled with integrity and confidentiality.

The GDPR holds organizations accountable for demonstrating their compliance with applicable privacy regulations. The role of the Data Protection Officer (DPO) has grown considerably in this regard. The DPO is often seen as the first point of contact for data privacy within an organization. It is even mandatory to appoint a DPO if the organization is a public authority or body. DPOs are appointed to fulfill several tasks, such as informing and advising management and employees about data privacy regulations, monitoring compliance with the GDPR and increasing awareness of data privacy, for example by introducing mandatory privacy awareness training programs.

Demonstrating compliance with privacy regulations can be quite challenging for organizations, and especially for DPOs. Certification as a means of demonstrating compliance with privacy regulations is outlined in Article 42 of the GDPR. However, practice has shown that demonstrating compliance is more complex than this article suggests. At this moment, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens), the Dutch accreditation council (Raad voor Accreditatie) and other regulators have not yet arrived at a practical approach for issuing certificates to organizations that meet the requirements, due to the elusive nature of the law article. Besides the certification approach foreseen in the GDPR, there are different approaches in the market that organizations can use to report on their privacy compliance. Some of these reporting approaches are elaborated on in the next section.

Reporting on privacy compliance

There are different ways in which organizations can report on privacy. There are, of course, self-assessments and advisory-based privacy reporting. These ways of reporting on privacy are mostly unstructured, however, and their conclusions subjective. Such reports therefore make it difficult to benchmark organizations against each other. To make privacy compliance more comparable and the results less questionable, there are, broadly speaking, two more structured ways of reporting in the Netherlands: reporting based on privacy assurance and reporting based on privacy certification. They are explained further in the following paragraphs.

A. Reporting based on privacy assurance


Assurance engagements can be defined as assignments in which auditors give an independent third-party statement (“opinion”) on an object by testing it against suitable criteria. Assurance engagements are meant to instill confidence in the intended users. These engagements originate in the financial audit sector. How they should be performed and reported is predefined by internationally accepted “Standaarden” (standards) and “Richtlijnen” (guidelines), which are propagated by the NBA and NOREA.1 As part of assurance engagements, controls are tested using auditing techniques consisting of a Test of Design (ToD) and/or a Test of Operating Effectiveness (ToE). Based on the results of the controls testing, an opinion is given on the research object. This opinion can be unqualified, qualified (with limitation), adverse, or an abstention from judgment. The assurance “Standaarden” and “Richtlijnen” most commonly used in the Netherlands to report on privacy are ISAE 3000, SOC 1 and SOC 2. ISAE 3000 is a generic standard for assurance on non-financial information. SOC 1 is meant to report control information relevant for financial statement purposes, and SOC 2 is set up for IT organizations that require assurance on security, availability, processing integrity, confidentiality and privacy-related controls. Engagements based on ISAE 3000, SOC 1 and SOC 2 can lead to opinions on privacy control. The criteria in scope of an ISAE 3000 or SOC 1 engagement can be chosen freely, as long as the choice leads to a cohesive, clear and usable result. The criteria for SOC 2 are prescribed, although extensions are possible.


NOREA gives organizations the possibility to obtain a Privacy Audit Proof quality mark for individual or multiple processing activities of personal data, or for an entire organization ([NORE21]). This mark can be obtained on the basis of an ISAE 3000 or SOC 2 privacy assurance report with an unqualified opinion. The NOREA Taskforce Privacy has drawn up terms that include guidelines for performing privacy assurance engagements and for obtaining the Privacy Audit Proof quality mark. One of the conditions for this quality mark is the use of the NOREA Privacy Control Framework (PCF) as the set of criteria in the case of an ISAE 3000 engagement, or the use of the criteria elaborated in the privacy section of a SOC 2 assurance report. The Privacy Audit Proof quality mark can be obtained by either controllers or processors. After the organization hands over an unqualified assurance report and the relevant information, NOREA grants the successfully audited organization permission to use the mark for one year, under certain conditions.

The extent to which an opinion on privacy control resulting from an assurance engagement equals an opinion on privacy compliance depends on the criteria in scope of the assurance engagement. An opinion on privacy controls, although a good indicator, can never be seen as an all-encompassing compliance statement. Because the GDPR is ambiguous and the selection of controls in scope requires interpretation, an objective opinion on compliance by financial or IT auditors is not possible.

B. Reporting based on privacy certification

Certification originates from quality control. To be eligible for certification, an independent, accredited party must assess whether the management system of the organization concerned meets all requirements of the standard. Certification audits are meant to make products and services comparable. In addition, the drive for continuous improvement is an important part of these audits.

In general, the most commonly used certifications in the Netherlands are those originating from the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) ([ISO21]). Examples of ISO/IEC standards are ISO/IEC 27001 (information security management), ISO/IEC 27002 (information security controls), ISO/IEC 27018 (protection of personal data in public clouds) and ISO/IEC 29100 (privacy framework). In addition, ISO/IEC 27701 was introduced in August 2019 as an extension of ISO/IEC 27001. This standard focuses on the privacy information management system (PIMS). It assists organizations in establishing systems to support compliance with the European Union General Data Protection Regulation (GDPR) and other data privacy requirements, but as a global standard it is not GDPR-specific.

Other privacy certification standards are, for example, BS 10012, the European Privacy Seal (also named EuroPrise), the Europrivacy GDPR certification and private initiatives, like certification against the Data Pro Code ([NLdi21]). BS 10012, the British standard on PIMS, has largely been replaced by ISO/IEC 27701. EuroPrise provides certifications demonstrating that, for example, IT products and IT-based services comply with European data protection laws ([EuPr21]). The Europrivacy GDPR certification, as stated on its website, “provides a state-of-the-art methodology to certify the conformity of all sorts of data processing with the GDPR” ([Euro21]). In the Netherlands, NLdigital, an organization of ICT companies, has developed the Data Pro Code. This code specifies the requirements of the GDPR for data processors. Due to their specific nature, the Europrivacy GDPR certification and certification against the Data Pro Code are less commonly used in the Netherlands.

C. Privacy assurance versus privacy certification

The main difference between privacy assurance and certification is that assurance is more assignment-specific and in-depth. This is illustrated in Figure 1, which summarizes the main differences between privacy assurance based on ISAE/COS or Richtlijn (Directive) 3000 and certification according to ISO 27701.

Figure 1. Comparison of privacy assurance versus privacy certification (based on [Zwin21]).

Since the privacy reporting business has not yet matured, privacy assurance and privacy certification can coexist, and each has its own benefits. Organizations that want to report on privacy should choose the way that suits their needs, which depends, for instance, on their level of maturity.

Privacy audits

Although a lot of knowledge and experience is available, performing an audit is an intensive process. This is especially the case for privacy audits. Since personal data cuts across the entire organization, constitutes a domain of its own and is not tangible, privacy audits are considered to be even more difficult.

This section describes typical aspects of privacy audits. As a model for describing these aspects, a privacy audit is considered to follow the phases shown in Figure 2.

Figure 2. Privacy audit phases.

In general, the phases of a privacy audit look like those of “regular” audits. There are a few differences, however. One of the most important differences between regular and privacy audits is the determination of the scope, which is more difficult for privacy audits. A clear view of the incoming and outgoing flows of data and the way the data is processed within and outside the organization is a good starting point for privacy-related efforts, and therefore also for scope determination. The processing register and DPIAs are other useful “anchors”. Data flows and the processing register list what data is processed in which system and which part can be considered personal data. DPIAs can provide further insight into the division of responsibilities, the sensitivity of the data, applicable laws, and relevant threats and vulnerabilities. Although all of the aforementioned can help, there are still a few problems to be solved. The most important of these are the existence of unstructured data and the effects of working in information supply chains.

  • Unstructured personal data is data that is not stored in dedicated information systems. Examples are personal data stored in Word or Excel files on hard disks or on server folders used in the office automation, personal data in messages in mailboxes, or personal data in physical files. Due to its unstructured character, scope determination is difficult by nature. Possible solutions for these situations can be found in tools that scan for files containing keywords indicating personal data, like “Mr.”, “Mrs.” or “street” (a minimal example of such a scan is sketched after this list). A more structural solution can be found in data cleansing as part of archiving routines and in the “privacy by design” and “privacy by default” aspects of system adjustments or implementations. Whereas scanning tends to result in point solutions, archiving and system adjustments or implementations can offer a more structural solution.
  • Working in information supply chains leads to the problem that the division of responsibilities among the involved parties is not always clear. In the case of outsourcing relations, processing agreements can help clarify the relationship between what can be considered the processor and the controller. Whereas the relationships in these relatively simple chains are mostly straightforward, less simple chains, such as those in the Dutch healthcare or justice systems, lead to more difficult puzzles. Although some clarification can be given in the public sector due to the existence of “national key registers” (in Dutch: “Basisregisters”), most of the involved relationships can best be considered co-processor relationships, in which there are joint responsibilities. These relationships should be clarified one by one. In addition to co-processor relationships, there are relationships in which the many processor tasks lead to what can be considered controller tasks, due to the unique collection of personal data. This situation leads to a whole new view on the scoping discussion, with accompanying challenges.
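As a simple illustration of the keyword-scanning approach mentioned above, the sketch below searches a hypothetical folder of exported text files for keywords that may indicate personal data; the folder, file types and keyword list are assumptions, and real discovery tooling parses office formats directly and uses far richer patterns.

```python
# Minimal sketch of a keyword scan for unstructured personal data.
# The folder, the *.txt export format and the keyword list are assumed
# for illustration; dedicated discovery tools parse Word/Excel files
# and use more advanced patterns (e.g. national ID or IBAN formats).
from pathlib import Path

KEYWORDS = ["mr.", "mrs.", "street", "date of birth", "iban"]

def scan_folder(root: str) -> dict[str, list[str]]:
    """Return, per file, the keywords found in it (case-insensitive)."""
    hits = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        found = [kw for kw in KEYWORDS if kw in text]
        if found:
            hits[str(path)] = found
    return hits

if __name__ == "__main__":
    for file, keywords in scan_folder("./shared_drive_export").items():
        print(f"Possible personal data in {file}: {keywords}")
```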

Other difficulties in performing a privacy audit arise from the Schrems II ruling. As a result of this ruling, processing personal data of European citizens in the United States under the so-called Privacy Shield agreement is considered illegal. Since data access is also data processing, the use of US cloud providers must also be considered illegal. Although solutions are being specified, like new contractual clauses and data location indicators, there is no entirely privacy-compliant solution available yet. Considering that US intelligence agencies are not bound by any privacy clauses and that European citizens are not represented on the American privacy oversight board, there is still a leak.

Testing privacy controls is not simple either. Of course, there are standard privacy control frameworks, and the largest part of these frameworks consists of security controls and the PIMS; there is a lot of experience with testing these. Testing controls that guard the rights of the data subjects, like the rights to be informed, to access and to rectification, is more difficult, however. This difficulty arises from the fact that these controls are not always straightforward, and testing them requires interpretation of policies and legal knowledge. These difficulties can of course be dealt with by making it explicit that a privacy audit cannot be considered a legal assessment. This disclaimer is, however, not helpful in gaining the intended trust.

To improve the chance of successfully testing controls, most privacy audits are preceded by a privacy assessment advisory engagement. These advisory engagements enable the suggestion of improvements to help organizations, whereas audits, especially those devoted to assurance, leave less room to do so.

Reports resulting from privacy audits are mainly dictated by the assurance or certification standards, as described in the preceding section. The standard and resulting report should suit the level of maturity of the object and the trust needed so that maximum effect can be reached.

Added value of privacy audits

Privacy audits lead to several benefits and added value. In this section the most important are listed.

Building or restoring confidence – Like any audit performed for an assurance or certification assignment, a privacy audit is devoted to help build or restore confidence. This is even more so if the privacy audit leads to a quality mark.

Increasing awareness – Whether an audit leads to a qualified opinion or not, any audit leads to awareness. The questions raised and evidence gathered make employees aware. Since the relevance of privacy has increased over the past years, a privacy audit can help with prioritizing the subject within the organization as the outcomes could eventually lead to necessary follow-up actions that require the engagement of several employees/departments within the organization.

Providing an independent perspective – As mentioned before, privacy is not an easy subject. Therefore, subjectivity and self-interest are common pitfalls. Auditors can help avoid risks related to these pitfalls by independently rationalizing situations.

Giving advice on better practices – Auditors are trained to give their opinion based on the latest regulations and standards. Therefore, the auditors’ advice is based on better practices. Since privacy is an evolving and immature field, advising on better practices has taken a prominent role in their work and the services they provide.

Facilitating compliance discussions – Last but not least, although auditors do not give an opinion on compliance, they facilitate compliance discussions inside and outside client organizations through their opinion on relevant criteria and controls. In this respect, the auditor can also help in discussions with supervisory boards. Assurance, certification and quality marks are proven assets in relationships with these organizations.

Client case: Privacy audits at RDW

A good example of how privacy reporting can be helpful are the privacy audits performed for RDW, the Dutch public sector agency that administers motor vehicles and driving licenses.

RDW is responsible for the licensing of vehicles and vehicle parts, supervision and enforcement, registration, information provision and issuing documents. RDW maintains the “national key registers” (“Basisregisters”) of the Dutch government with regard to license plate registration in the “Basis Kentekenregister” (BKR) and the registration of driving licenses in the “Centraal Rijbewijzenregister” (CRB). In addition, RDW is processor of on-street parking data in the “Nationaal Parkeerregister” (NPR) for many Dutch municipalities.

Since there are many interests involved and a lot of personal data is being processed, RDW is keen on being transparent about privacy control. As RDW’s assurance provider, KPMG performs the privacy audits of the abovementioned key registers: BKR, CRB and NPR.

When performing these audits, the aforementioned scoping challenges arise. They are dealt with by, among other things, restricting the scope to the lawfully and contractually confirmed tasks and to the descriptions in the processing registers and DPIAs. Furthermore, because RDW has a three-lines-of-defense model, with quality control and the resulting reports as the second line, it has managed to implement the privacy controls listed in the NOREA Privacy Control Framework.

According to RDW, privacy reports and marks are helpful in, for example, communication with partners in automotive and governmental information supply chains and with supervisory boards. Although there is a lot of innovation around, for example, connected and autonomous vehicles, RDW states that it is able to manage the accompanying challenges, including those related to privacy protection. If something unintended happens, such as a data breach, RDW is in a good position to provide an explanation, supported by audit results.

Position of privacy audits in GRC & ESG

ESG measures the sustainable and ethical impact of investments in an organization based on Environmental, Social and Governance related criteria. Previous events – such as the Facebook privacy scandal, in which user data could be accessed without the explicit consent of those users ([RTLN19]) – have shown that data breaches can raise a lot of questions from investors or even result in decreasing share prices. Insufficient GRC efforts regarding data privacy can even lead to doubts about the social responsibility of an organization.

As mentioned in previous sections, there are various ways for organizations to demonstrate their compliance with data privacy regulations. The importance of presenting the way an organization is dealing with data privacy is further emphasized with the introduction of ESG, since it demands privacy to be implemented from an Environmental, Social and Governance point of view as well.

The outcomes of privacy audits can be used as a basis for one of the ESG areas. Privacy audits can provide insight into the extent to which measures are effective and offer a means to monitor privacy controls. Also, the findings identified in a privacy audit can help in ESG, as they make organizations aware of the improvements they have to make to prevent such events in the future and to (re)gain the trust of all relevant stakeholders, including (potential) investors.

Conclusion and final thoughts

Although privacy audits cannot provide the ultimate answer on whether organizations comply with all applicable data privacy regulations, they do offer added value. Therefore, the answer to the earlier question of whether privacy audits are relevant for GRC and ESG is, in our view, undoubtedly: “yes, they are!”

Using privacy audits, organizations obtain insight into the current state of affairs regarding their data privacy management. The outcomes of a privacy audit can also increase awareness within the organization, as they highlight the shortcomings that have to be followed up or investigated by the relevant parties within the organization. Next to the benefits for the organization itself, a privacy audit facilitates discussions with third parties and supervisory boards when it comes to demonstrating compliance with data privacy regulations, especially when the audit has resulted in a report provided by an independent external privacy auditor. Another advantage of having privacy audits performed is that they lay the foundation for further ESG reporting, in which an organization can describe the measures taken to ensure data privacy and the way progress is monitored. This can help explain why sustainable investments are safeguarded at the organization in question. Privacy audits remain difficult, however, since personal data cuts across the organization, constitutes a domain of its own and is not tangible.

Outsourcing and working in information supply chains are upcoming trends. These trends will offer a lot of opportunities for those who want to make a profit. To gain maximum benefit, the focus of the organizations involved should not only be on offering reliable services; they should also have a clear vision on GRC and ESG aspects. Privacy should be one of these aspects, and balanced reporting on all of the aforementioned is the challenge for the future.

Notes

  1. The NBA and NOREA are the Dutch professional bodies for financial auditing and IT auditing, respectively.

References

[EU16] European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council. Official Journal of the European Union. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&qid=1635868498095&from=EN

[EuPr21] EuroPrise (2021). EuroPrise – the European Privacy Seal for IT Products and IT-Based Services. Retrieved from: https://www.euprivacyseal.com/EPS-en/Home

[Euro21] Europrivacy (2021). Europrivacy Certification. Retrieved from: https://www.europrivacy.org

[ISO21] ISO (2021). Standards. Retrieved from: https://www.iso.org/standards.html

[Koor13] Koorn, R., & Stoof, S. (2013). IT-assurance versus IT-certificering. Compact 2013/2. Retrieved from: https://www.compact.nl/articles/it-assurance-versus-it-certificering/

[NLdi21] NLdigital (2021). Data Pro Code. Retrieved from: https://www.nldigital.nl/data-pro-code/

[NORE21] NOREA (2021). Privacy Audit Proof: Logo voor de betrouwbare verwerking van persoonsgegevens. Retrieved from: https://www.privacy-audit-proof.nl

[RTLN19] RTL Nieuws (2019, July 24). Recordboete voor Facebook van 5 miljard dollar om privacyschandaal. Retrieved from: https://www.rtlnieuws.nl/economie/bedrijven/artikel/4791371/recordboete-facebook-vijf-miljard-toezichthouder

[Zwin21] Zwinkels, S., & Koorn, R. (2021). SOC 2 assurance becomes critical for cloud & IT service providers. Compact 2021/1. Retrieved from: https://www.compact.nl/articles/soc-2-assurance-becomes-critical-for-cloud-it-service-providers/

No AI risk if you don’t use AI? Think again!

AI-related risks regularly make news headlines and have led to a number of legislative initiatives in the areas of privacy, fair and equal treatment, and fair competition. This may cause organizations to shy away from using AI technology. AI risks, as commonly understood, are however caused largely by the degree of autonomy and the increasing social impact of data processing rather than just by new algorithms. These risks should be understood holistically as threats to entire IT infrastructures rather than to individual AI components. A broad, comprehensive and ongoing AI-related risk assessment process is essential for any organization that wants to be ready for the future.

Introduction

Computers don’t always do what you want, but they do what they were instructed to do. This clearly separates the computer as an agent performing a task from a human being doing the same. Computers as components in a business process are in essence predictable: their behavior follows a design specification, and the same input will generate the same output. People, on the other hand, are the unpredictable components of a business process. In practice, they often do not fully follow instructions. They deviate from the business process specification, for bad and for good reasons. People are autonomous.

On the one hand, people are a weak point and therefore form a major risk. They may be sloppy, slow, commit frauds, extract confidential data for their own purposes, be influenced by unconscious biases, etc. On the other hand, people often take the rough edges out of a business process. People use their own common sense, see new patterns in data, spontaneously remedy injustices they see, diagnose problems in the business process, are aware of changes in society that may affect business because they follow the news, and generally generate valuable feedback for adapting and continually improving business processes. People make processes more resilient.

Blackboxness

Popularly, AI technology is positioned somewhere between humans and computers. It has, in essence, a blackboxness problem. It may have some capacity to adapt to changes in its environment. It sometimes surprises us by finding predictive patterns in data we did not see. But its design specification does not lend itself to simulation of its behavior in our mind: the relation between input and output data is discovered by the AI technology itself. It is not predictable. Not to us. And it does make mistakes that humans will never make. Mistakes that are hard to explain. Sometimes the mistakes are even hard to notice.

Because blackboxness is bad English, we will call it a complexity problem instead, keeping in mind that we do not mean an objective measure of topological complexity, but rather our inability to simulate what the technology does. AI technology is, in this sense, complex.

AI-related risks regularly make news headlines, may cause significant reputation damage, and have led to a number of legislative initiatives and ethical frameworks in the areas of privacy, fair and equal treatment, and fair competition. The associated cost of introducing effective control measures may cause organizations to shy away from using AI technology, or to pick traditional, well-established techniques for data analysis over more complex and more experimental ones. We see a preference for linear regression techniques in many organizations for exactly this reason. This is not a solution. While shying away from AI technology may be a valid choice in certain circumstances, it neither addresses the inherent risks nor necessarily exempts one from special legal responsibilities.

In this article we address the origin of some of the inherent risks, and the role AI and data play in these risks, and finally come to the conclusion that a broad, comprehensive and ongoing AI-related risk assessment process is essential for any organization that wants to be ready for the future.

Complexity essentially deals with how easy it is to simulate the behavior of a system in our mind, at the level of abstraction we care about. What we require of this simulation largely depends on our needs for explainability. For instance, a facial recognition application is, objectively from an information-theoretic perspective, more complex than a simple risk-scoring model based on socio-economic parameters. Since we usually do not wonder how we recognize faces, we tend to take its behavior at a functional level for granted, until we discover it makes mistakes we would not make. Only then do we face a complexity problem.

Is it AI?

A first problem becomes apparent if we look at European legislative initiatives that create potentially expensive compliance requirements. There is no overarching agreement about the kinds of systems that create AI-related risks. This is not surprising because the risks are diverse and take many forms.

Let us quickly run through some examples. Art. 22 of the GDPR is already in effect and targets automated decision making using personal data – regardless of the technology used. Besides being limited to personal data, it shows a clear concern with the degree of autonomy of systems. The recently proposed Artificial Intelligence Act ([Euro21a]) prohibits and regulates certain functions of AI based on risk categories – instead of starting from a restrictive definition of technology. For the proposed civil liability regime for AI ([Euro20]) it is too early to tell how it will work out in practice, but it seems likely that it will adopt a classification by function as well.

The Ethics Guidelines for Trustworthy AI ([Euro19]), on the other hand, target technology with a certain adaptive – learning – capacity, without direct reference to a risk-based classification by function. This is a restrictive technology-based definition, but one which leaves big grey areas for those who try to apply it. The proposed Dutch guideline for government agencies ([Rijk19]) targets data-driven applications, without a functional classification and without reference to learning capacity.

This already creates a complicated scoping problem, as organizations need to determine which classifications apply to them and which do not. Beyond that, there is legislation that directly impacts AI but does not directly address it as a topic. Existing restrictions on risk modeling in the financial sector obviously impact AI applications that make financial predictions, regardless of the technology used. New restrictions on self-preferencing ([Euro21b]) will, for instance, impact the use of active learning technology in recommender algorithms, but they will be technology-agnostic in their approach.

AI-related risk may involve software that you already use and never classified as AI. It may involve descriptive analytics that you use for policy making, never considered to be software, and have no registration for. Your first task is therefore to review what is present in the organization and whether and how it is impacted by compliance requirements related to AI. Beyond that, seemingly conflicting compliance requirements will create interpretation problems and ethical dilemmas, for instance when you have to choose between the privacy protections of the GDPR on the one hand and measurable non-discrimination as suggested by the Artificial Intelligence Act on the other, and both cannot be fully honored.

Three dimensions of AI risk

All in all, we can plot the risk profile of AI technology on three different dimensions. Although the risks take diverse forms, the underlying dimensions are usually clear. The first one is the one we already identified as complexity.

But complexity is not the major source of risk. AI risk is predominantly caused by the degree of autonomy and the increasing social impact of data processing rather than just by new algorithms. Risks are often grounded in the task to be performed, regardless of whether it is automated or not. If how well the task is executed matters significantly to stakeholders, then risk always exists: this is risk based on social impact. If an automated system functions without effective oversight by human operators, it is autonomous, and autonomy is the third source of risk. We also regard a system as de facto autonomous if human operators are not able to perform its function themselves, either because they cannot arrive at the same output based on the available input data, or because they cannot do so within a reasonable time frame.

If an automated system scores on any of these three dimensions (see Figure 1), it may carry AI-related risk with it if we look at it within its data ecosystem. This is not because one single dimension creates the risk, but because a source of risk on a second risk dimension may be found nearby in the IT infrastructure, and we need to check that.
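To make this first-pass screening concrete, the sketch below is a minimal illustration of such a check; the scoring scale, names and threshold are our own assumptions, not part of any formal framework.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """Illustrative scores (0 = none, 1 = some, 2 = high) on the three dimensions."""
    complexity: int     # can we simulate its behavior in our mind?
    autonomy: int       # does it operate without effective human oversight?
    social_impact: int  # does task performance matter significantly to stakeholders?

def needs_ai_risk_assessment(profile: RiskProfile) -> bool:
    # A score on any single dimension is enough to warrant a closer look at the
    # surrounding data ecosystem, because another dimension may be present nearby.
    return any(score > 0 for score in (profile.complexity,
                                       profile.autonomy,
                                       profile.social_impact))

# Example: a simple rule-based benefits filter -- not "AI", but autonomous and impactful.
print(needs_ai_risk_assessment(RiskProfile(complexity=0, autonomy=2, social_impact=2)))  # True
```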

Figure 1. Three dimensions of AI risk.

Data ecosystems

Most AI-related risks may also surface in traditional technology as decision-making ecosystems are increasingly automated. Increasing dependence on automation within whole task chains pushes human decision makers out of the loop, and the points in the chain at which they could still notice problems become few and far between. The risks are caused by the increasing autonomy of automated decision-making systems as human oversight is reduced. If things go wrong, they may really go wrong.

These risks should be understood holistically as threats to entire IT infrastructures rather than individual AI components. We can take any task as our starting point (see Figure 2). When determining risk there are basically three directions to search for risk factors that we need to take into account.

Upstream task dependencies

If the task uses information produced by AI technology, it is essential to gain insight into the value of the information produced by the technology and the resilience of that information source, and to take precautions if needed. The AI technology on which you depend need not be a part of your IT infrastructure. If you depend on a spam filter for instance, you risk losing important emails and you need to consider precautions.

Downstream task dependencies

If a task shares information with AI technology downstream, it is essential to understand all direct and indirect outcomes of that information sharing. Moreover, you may create specific risks, such as re-identification of anonymized information, or inductive bias that develops downstream from misunderstanding of the data you create, and you may be responsible for those risks.

Ecological task interdependencies

If you both take information from and share information with an AI component, fielding a simple task agent may increase your risk of being harmed by the AI component’s failure or of being exploited by it. You should take strict precautions against misbehavior of AI components that interact with your IT systems in a non-cooperative setting. Interaction between agents through communication protocols may break down in unexpected ways.
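As an illustration of this three-way scan, the following minimal sketch (the system names and graph structure are hypothetical) walks a task-dependency graph in both directions to find AI components upstream, downstream, and in feedback loops around a given task.

```python
from collections import deque

# Hypothetical task graph: edges point in the direction of information flow.
flows = {
    "crm": ["spam_filter"],
    "spam_filter": ["claims_intake"],
    "claims_intake": ["risk_scoring"],
    "risk_scoring": ["claims_intake", "payment"],
}
ai_components = {"spam_filter", "risk_scoring"}

def reachable(start, edges):
    """All nodes reachable from `start` following the given edge direction."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def ai_risk_directions(task):
    downstream = reachable(task, flows)
    upstream_edges = {}
    for src, dsts in flows.items():
        for dst in dsts:
            upstream_edges.setdefault(dst, []).append(src)
    upstream = reachable(task, upstream_edges)
    return {
        "upstream":   upstream & ai_components,
        "downstream": downstream & ai_components,
        "ecological": upstream & downstream & ai_components,
    }

# For "claims_intake": spam_filter and risk_scoring are upstream, risk_scoring is
# downstream, and risk_scoring is also ecological (it feeds back into the task).
print(ai_risk_directions("claims_intake"))
```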

Figure 2. Where does the risk come from?

Ecologies of task agents are mainly found in infrastructures, where predictive models representing different parties as task agents function in a competitive setting. For instance, online markets and auctions for ad targeting. A systemic risk in such settings is that predictive models may cause a flash crash or collusion to limit open competition. Fielding a simple technological solution in a setting like that is usually not better than fielding a smart one from a risk point of view.

Information usually equates with data when we think about computers, but make sure to keep an eye on information that is shared in ways other than by data. If a computer opens doors, the opening of the door is observable to third parties and carries information value about the functioning of the computer. If you open doors based on facial recognition, discrimination is going to be measurable, purely by observation.

Data is not history

Nearly all avoidable failures to successfully apply AI-based learning from data find their origin either in inductive bias – systematic error caused by the data used to train or test the system – or in underspecification, mainly caused by not carefully thinking through what you want the system to do ([Amou20]). Besides that, there are unavoidable failures when the relationship between the input data and the desired output simply does not exist; such attempts are mainly driven by uncritical enthusiasm for AI and Big Data.

If you are a Data Scientist, it is easy to jump to the conclusion that biases in models are merely a reflection of biased ways of working in the past because historical data is used. That conclusion is, however, too simple and conflates the meaning of information and data. Not all information is stored as data, and not all data that is stored was used as information for decision-making in the past.

The information we use to make decisions is changing, and even without AI technology this creates new risks. When we remove humans from decision making, we lose information that was never turned into data. Decisions are no longer based on information gleaned from conversations in face-to-face interactions between the decision maker and stakeholders. Even if we train models on historical data we may miss patterns in information that was implicitly present when that historical decision was taken.

Big data

At the same time, we are also tapping into fundamentally new sources of information and trying to make predictions based on them. Data sharing between organizations has become more prevalent, and various kinds of data traces previously unavailable are increasingly mined for new predictive patterns. It is easy to make mistakes:

  • Wrongly assuming predictive patterns are invariant over time and precisely characterize the task, and will (therefore) reliably generalize from training and testing to operational use ([Lipt18]).
  • Overlooking or misinterpreting the origin of inductive biases in task dependencies, leading to an unfounded belief in predictive patterns.

Inductive bias may lead to discrimination against protected groups, besides other performance-related problems. To properly label measurable inequalities ([Verm18]) as discrimination, you have to understand the underlying causal mechanisms and the level of control you have over them. Lack of diversity in the workplace may for instance be directly traceable to the output of the education system. As a company you can only solve that lack of diversity at the expense of your competitors on the job market.
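As a minimal illustration of such a measurable inequality, the sketch below computes selection rates per group on invented toy data (a simple statistical-parity-style measure in the spirit of the definitions catalogued in [Verm18]); the resulting gap still needs a causal explanation before it can be labelled discrimination.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns the selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Toy data: a measurable inequality between groups A and B.
data = [("A", True)] * 30 + [("A", False)] * 70 + [("B", True)] * 10 + [("B", False)] * 90
rates = selection_rates(data)
print(rates)                                                   # {'A': 0.3, 'B': 0.1}
print(round(max(rates.values()) - min(rates.values()), 2))     # 0.2 -- the gap itself explains nothing
```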

Simple rules

Big Data is not just used by AI technology. Insights from Big Data may end up in automated decision-making indirectly, through new business rules formulated by policymakers based on a belief in patterns deduced from descriptive statistics on large sets of data with low information density. In essence we are doing the same thing as the machine learning algorithm, with one big difference: there is a human in the loop who confirms that the pattern is valid and may be operationally used. The statistical pattern is translated into a simple rule as part of a simple and predictable form of automation. Does it therefore carry no AI risk? In reality we run the same data-related risks as before: our simple rule may turn out to be less invariant than we thought, and it may be grounded in inductive biases that we overlooked.

AI as a mitigator of risk

The use of AI technology instead of something else could add to existing risk, but it might also mitigate risks that already exist. One important business case for predictive models is risk scoring, which differentiates between high-risk and low-risk cases to determine whether they may be processed automatically by a fragile rule-based system or should be handled by a human decision maker. Another important application of AI technology is detecting changes in the input patterns of other systems, to make sure warning bells start ringing if a sudden change is detected. In these cases the application of AI technology is itself the risk mitigation measure. It is unfortunate if these options are discarded because AI technology is perceived as too risky.

Risk scoring models are increasingly used in government, insurance and the financial sector. These models essentially function as a filter in front of the rule-based system, which, because of its relative simplicity, is vulnerable to being gamed. The application of AI technology is intended to reduce that risk. KPMG Trusted Analytics has looked at risk mitigating measures taken at several government agencies to protect risk scoring models against biases. Any shortcomings we found thus far relate to the whole business process of which the risk scoring model is a part. The model itself hardly adds to the risk. Simple human-made selection rules used in those same processes were, in our view, considerably riskier.
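The sketch below illustrates this pattern with invented names and thresholds: a predictive score routes low-risk cases to straight-through processing and high-risk cases to a human, and a (deliberately naive) drift check rings a warning bell when the input pattern changes.

```python
import statistics

RISK_THRESHOLD = 0.7  # illustrative cut-off, to be calibrated in practice

def route(case_score: float) -> str:
    """Low-risk cases go to straight-through processing, high-risk ones to a human."""
    return "human_review" if case_score >= RISK_THRESHOLD else "rule_based_processing"

def drift_alarm(reference: list[float], recent: list[float], tolerance: float = 0.15) -> bool:
    """Ring a (very naive) warning bell when the input score distribution shifts."""
    return abs(statistics.mean(recent) - statistics.mean(reference)) > tolerance

print(route(0.85))                                        # human_review
print(route(0.20))                                        # rule_based_processing
print(drift_alarm([0.2, 0.3, 0.25], [0.6, 0.55, 0.7]))    # True -- investigate before trusting the filter
```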

A broad perspective on AI risk

While AI-related compliance responsibilities may focus on the technology itself, insight into risk requires looking at the environment in which the technology is fielded. Few risks are inherent in the technology itself. To determine your risk profile in terms of autonomy and social impact, it is necessary to look at the whole business process and its business value to the organization and other stakeholders.

Besides that, understanding data lineage is of critical importance. In a modern data-driven organization, the same type of data may be used and produced by various applications, and the same application may be used for different purposes in different activities. This complexity can be managed to some extent by clearly splitting accountability for uses of data between data management teams, application development teams, and business users.

Responsibility for understanding the environment you work in does not stop at the boundaries of the organization, however. Third-party sourcing plays a key role, as does understanding your performance in competitive settings. In certain cases, setting up network arrangements or trusted third parties for keeping control over AI risk may turn out to be a solution that prevents unnecessary duplication of work.

Best practices regarding the privacy impact assessment (PIA) may be used as an analogy for a comprehensive AI risk assessment. In practice, many data-driven organizations have organized privacy impact assessments regarding:

  • datasets,
  • data-driven applications, and
  • data-processing activities.

This way of working reflects an important insight about data ethics. Ethical principles about the use of personal data usually relate to either:

  • reasons for collecting and storing data about people, and dealing with information about, and modification and deletion of that data,
  • reasons for making such data available for use by an application, and the privacy safeguards built into that application, or
  • specific purposes that such an application is put to in data-processing activities throughout the organization, and process-based privacy safeguards in those environments.

As noted above, the same type of data may be used and produced by various applications, and the same application may be used for different purposes in different activities. The relation between personal data and the uses to which it is put may therefore be complex and hard to trace. This complexity is managed by splitting accountability for the data between data management teams, application development teams, and business users.

Conclusion

A broad, comprehensive and ongoing AI-related risk assessment process is essential for data-driven organizations that want to be ready for the future, regardless of whether they aim to use AI. Local absence of AI technology does not absolve you from responsibilities for AI-related risk. The big question is how to organize this ongoing risk assessment process. One element of the solution is organizing accountability for uses of data between data management teams, application development teams, and business users. Another common element of the solution may be the formation of network arrangements with other parties to reduce the cost of control. An element that is always needed, and one that the KPMG Trusted Analytics team aims to provide for its customers, is a long list of known AI-related risk factors, together with an equally long list of associated controls that can be used to address those risks from a variety of perspectives within an organization or a network of organizations. The first step for an organization is taking the strategic decision to take a good look at what its AI-related risks are and where they come from.

References

[Amou20] D’Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., … & Sculley, D. (2020). Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395.

[Angw16] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Retrieved from: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[Eise11] Eisen, M. (2011, April 22). Amazon’s $23,698,655.93 book about flies. Retrieved from: https://www.michaeleisen.org/blog/?p=358

[Euro19] European Commission (2019, April 8). Ethics guidelines for trustworthy AI. Retrieved from: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

[Euro20] European Parliament (2020, October 20). Recommendations to the Commission on a civil liability regime for artificial intelligence. Retrieved from: https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html

[Euro21a] European Commission (2021, March 17). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Retrieved from: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence

[Euro21b] European Commission (2021). The Digital Services Act Package. Retrieved 2021, May 10, from: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

[Feis21] Feis, A. (2021, April 11). Google’s ‘Project Bernanke’ gave titan unfair ad-buying edge, lawsuit claims. New York Post. Retrieved from: https://nypost.com/2021/04/11/googles-project-bernanke-gave-titan-unfair-ad-buying-edge-lawsuit/

[Geig21] Geiger, G. (2021, January 5). Court Rules Deliveroo Used ‘Discriminatory’ Algorithm. Vice. Retrieved from: https://www.vice.com/en/article/7k9e4e/court-rules-deliveroo-used-discriminatory-algorithm

[Lipt18] Lipton, Z. C. (2018). The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31-57.

[Quac20] Quach, K. (2020, July 1). MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. The Register. Retrieved from: https://www.theregister.com/2020/07/01/mit_dataset_removed/

[Rijk19] Rijksoverheid (2019, October 8). Richtlijnen voor het toepassen van algoritmes door overheden. Retrieved from: https://www.rijksoverheid.nl/documenten/rapporten/2019/10/08/tk-bijlage-over-waarborgen-tegen-risico-s-van-data-analyses-door-de-overheid

[Verm18] Verma, S., & Rubin, J. (2018, May). Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare) (pp. 1-7). IEEE.

Handling data transfers in a changing landscape

Although privacy has long been a discussion point within technology, its role in the use of Cloud services has not always demanded close attention. This changed in 2020, when the Schrems II ruling invalidated Privacy Shield. As a result, companies who relied on Privacy Shield for data transfers to the U.S., including the use of Cloud services, are now non-compliant and must take action. In this article, we will take a closer look at the impact of the ruling, and steps that organizations can take to manage the consequences.

Introduction

Organizations that are based in the EU/EEA and that exchange data with companies outside of the EU/EEA have to meet new EU requirements: revising contracts, performing additional jurisdiction analyses and implementing measures to mitigate any gaps.

What?

Stricter requirements for companies engaging in data exchanges with third parties or recipients outside of the EU/EEA, following from the Schrems II judgement.

Impact

Contract revisions and remediating actions are required.

Timeline

The ruling of the Court of Justice of the European Union (CJEU) took place on 16 July 2020, invalidating Privacy Shield with immediate, and retroactive, effect.

Fines

As this constitutes non-compliance with the GDPR, fines of up to 20 million euros or 4% of annual worldwide turnover, whichever is higher, are possible.

Scope

EU-US data transfers (including access to data) which were reliant on Privacy Shield as their transfer mechanism.

In the Schrems II case of July 2020, the European Court of Justice ruled that the Privacy Shield is no longer a valid means of transferring personal data to the U.S. The major players in the cloud services domain, such as Amazon, Microsoft, Google and IBM, are, however, based in the U.S. In most cases it is not a realistic option to look for alternative cloud services outside of the U.S. That does not mean it ends there. For example, it is important to consider the level of encryption and the existence of model contracts. In this article we have gathered important considerations that every organization should take into account when using a US-based cloud provider where data is transferred to, or accessed from, the US.

A brief overview of the context

What was the Privacy Shield?

In some countries outside the European Union (EU) there are no or less stringent privacy laws and regulations in comparison to those of the EU. In order to enable the same level of protection for EU citizens, the General Data Protection Regulation (GDPR) rules that personal data cannot be transferred to persons or organizations outside of the EU, for example the US, unless there are adequate measures in place. In this manner, the GDPR ensures that personal data of EU citizens are also protected outside the EU. Organizations can only transfer personal data outside of the EU to so-called ‘third countries’ when there is an adequate level of protection, comparable to that of the EU.

The US does not offer a comparable level of protection, because there is no general privacy law. Because organizations in the EU transfer personal data on a large scale and on a daily basis to the US, a new data treaty was adopted in 2016 – the Privacy Shield (successor of Safe Harbour). Under the Privacy Shield, US-based organizations could certify themselves, claiming they complied with all privacy requirements deriving from GDPR.

What happened in the Schrems II case?

The Schrems II case owes its name to Max Schrems, an Austrian lawyer and privacy activist who put the case forward. He was already known from the Schrems I case in 2015, in which the European Court of Justice declared that Safe Harbour (the predecessor of Privacy Shield) was no longer valid. The same fate now hits the Privacy Shield.

In the Schrems II case, Max Schrems filed a complaint against Facebook Ireland (EU), because they transferred his personal data to servers of Facebook Inc., which are located in the US. Facebook transferred this data on the basis of the Privacy Shield. Schrems’ complaint was, however, that the Privacy Shield offered insufficient protection. According to American law, Facebook Inc. is obliged to make personal data from the EU available to the American authorities, such as the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI). 

In the Schrems II case, the Court investigated the level of protection in the US. Important criteria are the existence of ‘adequate safeguards’ and whether the privacy rights of EU citizens are ‘effective and enforceable’. The Court concluded that under American law, it cannot be prevented that intelligence agencies use personal data of EU citizens, even when this is not strictly necessary. The only legal safeguard that the US offers is that the intelligence activities need to be ‘as tailored as feasible’. The Court ruled that the US is processing personal data of EU citizens on a large scale, without offering an adequate level of protection. The Court also ruled that European citizens do not have the same legal access as American citizens. The activities of the NSA are not subject to judicial supervision, and there are no means of appeal. The Privacy Shield ombudsperson for EU citizens is not a court and does not offer adequate enforceable protection. In short: Privacy Shield is now invalid.

This ruling has far-reaching consequences, given that a large number of EU-based companies using cloud providers use a US-based provider. It is important to note that the liability rests on the organization that “owns” the data and exports it, not on the cloud provider. Therefore, it is critical that measures are taken so that running business as usual is not jeopardized. There are a number of steps which organizations can take to minimize the impact of this ruling and ensure continued compliance with GDPR. We have outlined these for you, to help you on your Cloud compliance journey.

Working towards privacy conscious Cloud Compliance

Changing from US-based cloud providers to EU-based ones will in many cases not be desirable or feasible, even though it would be the most compliant approach for handling EU data in the cloud post-Schrems II. Thankfully, there are alternatives. There are three key elements to consider when beginning the journey towards compliance:

  • Data mapping – understanding where data transfers exist within the organization
  • Contractual measures – using legal instruments in managing transfers with third parties
  • Supplementary measures – reducing risks through enhanced protection

Each of these items is explored in greater depth in the following sections, bringing together recommendations from the European Data Protection Board, and best practices.

Figure 1. Do not wait to take action; start taking steps towards remediation.

1. Know thy transfers – data mapping is key

It is a bit of a no-brainer, although no less crucial: the first step is knowing to which locations your data is transferred. It is essential to be aware of where the personal data goes, in order to ensure that an equivalent level of protection is afforded wherever it is processed. However, mapping all transfers of personal data to third countries can be a difficult exercise. A good starting point would be to use the record of processing activities, which organizations are already obliged to maintain under the GDPR. There are also dedicated software vendors in the market, such as OneTrust, RSA Archer and MetricStream, that have proven to be very helpful in gathering all this (decentralized) information. Keep in mind that next to storage in a cloud situated outside the EEA, remote access from a third country (for example in support situations) is also considered to be a transfer. More specifically, if you are using an international cloud infrastructure, you must assess if your data will be transferred to third countries and where, unless the cloud provider clearly states in its contract that the data will not be processed in third countries at all. The next step is verifying that the data you transfer is adequate, relevant and limited to what is necessary in relation to the purposes for which it is transferred.
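As a minimal sketch, assuming the record of processing activities is available as structured data, the following illustration flags every activity in which data is stored in, or remotely accessible from, a country outside the EEA; the country list and activity records are invented.

```python
EEA = {"NL", "DE", "FR", "IE", "NO", "IS", "LI"}  # abbreviated list, for illustration only

# Hypothetical extract from the record of processing activities (GDPR art. 30).
processing_activities = [
    {"name": "HR payroll", "storage_country": "NL", "remote_access_from": ["NL"]},
    {"name": "CRM in public cloud", "storage_country": "US", "remote_access_from": ["US", "IN"]},
    {"name": "Marketing analytics", "storage_country": "IE", "remote_access_from": ["US"]},
]

def third_country_transfers(activities, eea=EEA):
    """Flag every activity where data is stored in, or accessible from, a third country."""
    flagged = []
    for act in activities:
        countries = {act["storage_country"], *act["remote_access_from"]}
        outside = countries - eea
        if outside:
            flagged.append((act["name"], sorted(outside)))
    return flagged

for name, countries in third_country_transfers(processing_activities):
    print(f"{name}: transfer to third country {countries} -> verify transfer tool and safeguards")
```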

2. What about standard contractual clauses?

Once you have a list of all transfers to a third country, the next step is to verify the transfer tool, as listed in Chapter V of the GDPR, on which your transfers rely. In this article, we will not elaborate on all the transfer tools. We will instead focus on what is relevant for the use of cloud services in the US. That means that we assume that the transfers qualify as ‘regular and repetitive’, occurring at frequent and recurring intervals, e.g. having direct access to a database. Therefore, no use can be made of the exception for ‘occasional and non-repetitive transfers’, which would only cover transfers taking place outside the regular course of business and under unknown circumstances, such as an emergency.

An option that exists for internal transfers within your organization is to adopt Binding Corporate Rules. However, most organizations have outsourced their cloud services, and therefore the most logical transfer tool to address in this article is that of standard contractual clauses (SCCs), sometimes also referred to as model contracts. SCCs, however, do not operate in a vacuum. In its Schrems II ruling, the Court reiterates that organizations are responsible for verifying on a case-by-case basis whether the law or practice of the third country impinges on the effectiveness of the appropriate safeguards.

Relevant factors to consider in this regard are:

  • the purposes for which the data are transferred;
  • the type of entities involved (public/private; controller/processor);
  • the sector (e.g. telecommunication, financial);
  • the categories of personal data transferred;
  • whether the data will be stored in the third country or only remotely accessed; and
  • the format (plain text, pseudonymized and/or encrypted).

Lastly, you will need to assess if the applicable laws impinge on the commitments contained in the SCCs. Because of Schrems II, it is likely that U.S. law impinges on the effectiveness of the appropriate safeguards in the SCCs. Does that mean it ends there, and we cannot make use of US-based cloud services anymore? It does not. In those cases, the Court leaves the possibility to implement supplementary measures in addition to the SCCs that fill these gaps in the protection and bring it up to the level required by EU law. In the next section we describe what this entails in practice.

3. Supplementary measures

In its recommendations 01/2020, the European Data Protection Board (EDPB) included a non-exhaustive list of examples of supplementary measures, including the conditions they would require to be effective. The measures are aimed at reducing the risk that public authorities in third countries endeavor to access transferred data, either in transit by accessing the lines of communication used to convey the data to the recipient country, or while in custody by an intended recipient of the data. These supplementary measures can have a contractual, technical or organizational nature. Combining diverse measures in a way that they support and build on each other can enhance the level of protection. However, combining contractual and organizational measures alone will generally not overcome access to personal data by public authorities of the third country. Therefore, it can happen that only technical measures are effective in preventing such access. In these instances, the contractual and/or organizational measures are complementary, for example by creating obstacles for attempts from public authorities to access data in a manner not compliant with EU standards. We will highlight two technical supplementary measures you may want to consider.

Technical measure: using strong encryption

If your organization uses a hosting service provider in a third country like the US to store personal data, this should be done using strong encryption before transmission. This means that the encryption algorithm and its parameterization (e.g., key length, operating mode, if applicable) conform to the state-of-the-art and can be considered robust against cryptanalysis performed by the public authorities in the recipient country taking into account the resources and technical capabilities (e.g., computing power for brute-force attacks) available to them. Next, the strength of the encryption should take into account the specific time period during which the confidentiality of the encrypted personal data must be preserved. It is advised to have the algorithm verified, for example by certification. Also, the keys should be reliably managed (generated, administered, stored, if relevant, linked to the identity). Lastly, it is advised that the keys are retained solely under the control of an entity within the EEA. The main US-based cloud providers like Amazon Web Services, IBM Cloud Services, Google Cloud Platform and Microsoft Cloud Services will most likely comply with the strong encryption rules.
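The sketch below illustrates the principle, not a complete key-management solution: data is encrypted with AES-256-GCM before it leaves the exporter, using the widely used Python cryptography package, and the key is assumed to remain under the control of an entity within the EEA.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key generation and storage are assumed to remain under the control of an
# entity within the EEA (e.g. an on-premise or EU-based key management system).
key = AESGCM.generate_key(bit_length=256)

def encrypt_before_transfer(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt client-side with AES-256-GCM; only the ciphertext leaves the EEA."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce, ciphertext

def decrypt_after_retrieval(nonce: bytes, ciphertext: bytes) -> bytes:
    """Decryption is only possible for the EEA-based key holder."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)

nonce, blob = encrypt_before_transfer(b"personal data of an EU data subject")
print(decrypt_after_retrieval(nonce, blob))
```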

Technical measure: transferring pseudonymized data

Another measure is pseudonymizing data before transfer to the US. This measure is effective under the following circumstances: firstly, the personal data must be processed in such a manner that the personal data can no longer be attributed to a specific data subject, nor be used to single out the data subject in a larger group, without the use of additional information. Secondly, that additional information is held exclusively by the data exporter and kept separately in the EEA. Thirdly, disclosure or unauthorized use of that additional information is prevented by appropriate technical and organizational safeguards, and it is ensured that the data exporter retains sole control of the algorithm or repository that enables re-identification using the additional information. Lastly, by means of a thorough analysis of the data in question – taking into account any information that the public authorities of the recipient country may possess – the controller established that the pseudonymized personal data cannot be attributed to an identified or identifiable natural person even if cross-referenced with such information.
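A minimal sketch of this idea: direct identifiers are replaced by keyed-hash (HMAC-SHA256) pseudonyms before transfer, while the secret and the re-identification table are assumed to stay exclusively with the data exporter in the EEA. Note that the remaining attributes must also be checked to ensure they cannot single out the data subject.

```python
import hmac
import hashlib
import secrets

# Secret and re-identification table are held exclusively by the exporter in the EEA.
PSEUDONYMIZATION_SECRET = secrets.token_bytes(32)
reidentification_table: dict[str, str] = {}

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier by a keyed-hash pseudonym before transfer."""
    pseudonym = hmac.new(PSEUDONYMIZATION_SECRET, identifier.encode(), hashlib.sha256).hexdigest()
    reidentification_table[pseudonym] = identifier  # stays with the exporter in the EEA
    return pseudonym

record = {"customer": "jan.jansen@example.com", "contract_value": 12500}
# Only the pseudonym and non-identifying attributes are transferred to the third country.
record_for_transfer = {**record, "customer": pseudonymize(record["customer"])}
print(record_for_transfer)
```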

Conclusion

In summary, it is important to begin remediation action in light of Schrems II. Good hygiene is important, so start with data mapping, and knowing in which processing activities the transfers to third countries happen. Next, assess on which transfer tool (e.g. Privacy Shield) these international transfers are based. For now, SCCs appear to be the way forward when transferring to the US, supported by technical and organizational supplementary measures. To determine which supplementary measures to apply, you should assess the risk of each transfer through a Transfer Impact Assessment, based on at least the following criteria:

  • Format of the data to be transferred (plain text/pseudonymized or encrypted);
  • Nature of the data;
  • Length and complexity of data processing workflow, number of actors involved in the processing, and the relationship between them;
  • Possibility that the data may be subject to onward transfers, within the same third country or outside.

Based on this risk, decide which supplementary technical, contractual and organizational measures are appropriate. Make sure you work together with your legal and privacy departments throughout the process. Do not wait to take action. Schrems II took immediate effect, and non-compliance as a data exporter (i.e. the party contracting the cloud provider) can lead to significant financial and reputational damage.

References

[AWP17] Article 29 Data Protection Working Party (2017). Adequacy Referential. Retrieved from: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=614108

[ECJ20] European Court of Justice (2020). Data Protection Commissioner v Facebook Ireland and Maximillian Schrems. Case C-311/18. Retrieved from: http://curia.europa.eu/juris/document/document.jsf;jsessionid=6CD30D2590A68BE18984F3C86A55271E?text=&docid=228677&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=11656651

[EDPB20a] European Data Protection Board (2020). Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data. Retrieved from: https://edpb.europa.eu/sites/edpb/files/consultation/edpb_recommendations_202001_supplementarymeasurestransferstools_en.pdf

[EDPB20b] European Data Protection Board (2020). Recommendations 02/2020 on the European Essential Guarantees for surveillance measures. Retrieved from: https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_recommendations_202002_europeanessentialguaranteessurveillance_en.pdf

[EuPa16] European Parliament and Council of European Union (2016). Regulation (EU) 2016/679. Retrieved from: https://eur-lex.europa.eu/eli/reg/2016/679/oj

Cross-system segregations of duties analysis in complex IT landscape

This article explains the importance of access controls and segregation of duties in complex IT landscapes and elaborates on performing segregation of duties (SoD) analyses across multiple application systems. Practical tips for performing SoD analyses are outlined based on the lessons learned from a SoD project at a multinational financial services company. In this project, the SOFY Access Control platform was implemented to automate the SoD analysis and to overcome the challenges with SoD conflicts in an effective manner.

The importance of access controls and segregation of duties

In a world where (digital) knowledge is power and the vast majority of all businesses work digitally to a large extent, security is an important element of the IT environment. Given that IT is more connected than ever, forming digital platforms, the need for a holistic security view over multiple platforms grows. Within security, the domain of access controls is tasked with the management of permissions, determining who can do what in an IT system. Setting the right level of permissions within a system is always a balancing act. If you set permissions too narrowly, the system becomes unworkable, but if you set them too broadly, the risk of security breaches increases. With employees switching functions, and adding or dropping responsibilities and the corresponding functions in applications, access management should be seen as an ongoing process.

Typically, the access management domain comprises multiple safeguards or controls within the processes to ensure that the permissions handed out stay within boundaries that keep the system workable while preventing security issues. The most common safeguards are the following:

  1. User management procedures
  2. Authorization (concept) reviews
  3. Segregation of Duties (SoD) monitoring

Based on these areas, SoD monitoring is considered the most challenging for a number of reasons. Firstly, applications and their permission structures can be complex. Depending on the (type of) system, permissions can be either determined and granted in a structural way, for example through roles or profiles with (multiple layers of) underlying menus, functions, permissions or privileges, or in a less structural way by assigning individual permissions to a user. An example of an application with multiple levels is displayed in Figure 1.

Figure 1. Example of layered access levels (based on Microsoft Dynamics).

Secondly, combinations of assigned permissions or roles within the application need to be taken into account. It is insufficient to review the structure and the individual assignments to a user (e.g. a user has a role and can therefore execute a specific task in the application), as it will not detect the ability of a user to perform a task in the application by means of combination of permissions stemming from multiple roles.

Lastly, and depending heavily on the context, SoD conflicts might be inevitable. Whether it is an employee who needs a (temporary) backup colleague or a team that is simply too small to be split up for performing multiple tasks; sometimes it is just undesirable from a business efficiency perspective to enforce SoDs at permission level in the application.
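To make the first two challenges tangible, the following minimal sketch (the role structure and names are invented) flattens a layered role-to-privilege hierarchy into one effective permission set per user, so that abilities arising only from a combination of roles are not missed.

```python
# Illustrative layered structure: role -> duties -> privileges (cf. Figure 1).
role_duties = {
    "AP clerk":        ["maintain_vendor_master"],
    "Payment officer": ["create_payment_proposal"],
    "Treasury":        ["approve_payment"],
}
duty_privileges = {
    "maintain_vendor_master":  {"VENDOR_EDIT"},
    "create_payment_proposal": {"PAYMENT_CREATE"},
    "approve_payment":         {"PAYMENT_APPROVE"},
}
user_roles = {"u.jones": ["AP clerk", "Treasury"]}

def effective_privileges(user: str) -> set[str]:
    """Union of privileges over all assigned roles -- combinations matter, not single roles."""
    privileges: set[str] = set()
    for role in user_roles.get(user, []):
        for duty in role_duties.get(role, []):
            privileges |= duty_privileges[duty]
    return privileges

print(effective_privileges("u.jones"))
# {'VENDOR_EDIT', 'PAYMENT_APPROVE'} -- neither role alone is a conflict; the combination is.
```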

Businesses that strive to implement solid SoD monitoring and overcome the challenges, should take a structured approach. KPMG has developed a standardized method to put SoD monitoring and follow-up in place. This is a generic approach, not focused on specific technologies or (ERP) applications. The method consists of six steps that are required for control with respect to SoDs (see Figure 2):

  1. Risk Identification. Identify risks and the related SoD rules in business processes.
  2. Technical Translation. Translate critical tasks into technical definitions of user permissions based on data extracted from the applicable application.
  3. Risk Analysis. Use the data from your application(s) to analyze if users have possible combinations of critical tasks that are not allowed. These are called SoD conflicts.
  4. Risk Remediation. Remediate the risks and fix the SoD conflicts by changing or revoking access rights from users.
  5. Risk Mitigation. In case remediation is not possible or undesirable, mitigate the risks by implementing (automated) controls.
  6. Continuous Compliance. Implement measures and tools to structurally monitor SoD conflicts, follow up on conflicts and demonstrate compliance to business owners, regulators and other stakeholders.

Later (see the section “Ten lessons learned for cross-system SoD monitoring”), we will elaborate on this method with practical examples of how these steps were used in the SoD project at a global financial services company which is primarily focused on leasing products.

Figure 2. The six-step SoD Monitoring Model.

The need for cross-system SoD analysis

Stemming from a period in which organizations only had one main application system supporting all key processes, the typical SoD analysis is focused on one single application. This would then usually be focused on either the Enterprise Resource Planning-system (ERP) or the financial or General Ledger (GL) system. Recent trends, such as the movement of applications towards the cloud, digitization and platform thinking put an emphasis on (inter-)connectivity of applications within the IT environment.

As a result, more and more organizations are abandoning the idea of one single application in which all activities are performed, reintroducing point solutions.

Figure 3 shows a schematic image of business processes within one single ERP application in comparison to the scattered landscape of a company that uses multiple applications in Figure 4.

Figure 3. Overview of an ERP system: example of a company using the modules within a single SAP ERP system.

Figure 4. Overview of a scattered landscape: example of a financial services company using separate systems for each set of processes.

When an organization decides to spread its processes over multiple applications, access management will consequently have to be maintained for, and synchronized across, all applications. Correspondingly, the risk function will have to follow suit and make sure that the controls to prevent security breaches or SoD conflicts are in place for multiple applications. As such, SoDs should be monitored across multiple applications. A specific example that is considered crucial is access to the banking or payment environment of an organization. Often fed by either the GL or ERP system, payment orders are sent to the bank for further approval and execution. A good example of the criticality of cross-system SoD conflicts is the situation in which the GL or ERP system defines which parties should be paid and for what amount, while the actual payment (or its approval) is performed within the banking application. Imagine a user being able to administer bank accounts in the GL or ERP system while also having the ability to approve payments within the banking system, enabling personal enrichment. In conclusion, as organizations have multiple applications within their IT environment covering their main business processes, user and access management should also be performed with a holistic view, overseeing all relevant applications.

Ten lessons learned for cross-system SoD monitoring

Introduction to the financial services company case

The financial services company (hereafter: the client) has identified challenges with respect to user access, user rights and Segregation of Duties (SoDs). Resolving the SoD issues has proven complex due to several root causes, such as unclear roles and responsibilities, knowledge gaps and limitations to application information. A more centrally coordinated approach – coupled with local (business) responsibility and accountability – was required to successfully resolve these SoD issues and to design and implement a process to prevent similar issues going forward. The client embarked on a project – in cooperation with KPMG – in which they analyzed and followed up on possible cross-system SoD conflicts. The KPMG platform SOFY was implemented to support this project and is still being used as a tool to continuously demonstrate compliance. More than 40 applications used in more than 15 countries (local offices) were onboarded onto the platform, enabling the client to measure, analyze and mitigate cross-system SoD conflicts.

1 Starting with a well-defined risk-based policy

As a starting point for the SoD analysis, it is important that the content of the SoD policy is carefully drafted. The client has developed a “Global Policy on Segregation of Duties” covering mandatory SoD principles. The SoD principles are applicable to the client’s core transactional systems covering the core lease initiation and contract management processes. These primarily relate to the front-office, back-office, general ledger and pay-out / e-banking systems. An example of such a principle is: “The person that activates contracts cannot be involved in payment activities.”

As a minimum, the policy should describe the combinations of critical activities and application functionality that are not allowed, together with the risks they are meant to avoid (see Figure 5). Where discussions on authorizations between business and IT might become complex, confusing or overwhelming, a sharply defined policy makes it easy to conclude whether a combination is accepted or leads to a SoD conflict and, if so, which risks should be mitigated.

Figure 5. Example of a SoD policy configured in SOFY, including a set of SoD rules and related risks.

2 Defining critical activities using raw data and low level of detail

In order to start the SoD analysis, the SoD policy has to be translated into local permissions for each application in scope of the analysis. It is important to have the correct starting point for this translation: an overview of the local permissions of the application. It is recommended to use raw data and the lowest level of detail at which authorizations are configured, as this can be beneficial to prove completeness to other stakeholders (such as external auditors). In the client project, raw data was used (i.e. unedited dumps without any filtering, restrictions or other logic), so that any filtering and logic is applied within the analysis itself. IT staff, key users and external IT partners/vendors were involved to determine the relevant tables and to extract them.

The reason to define critical activities at the lowest level of detail at which they are configured within the application is to make the analysis more robust (see Figure 6). Take a sample application in which authorizations are handed out at screen/menu level as the lowest level of detail: if the menu needed to execute a task is later removed from a role, a definition at role level will no longer be valid, whereas a definition at menu level (combined with a derivation of the authorization structure from role down to screen level) will be updated automatically.

Figure 6. Critical activity definitions as defined in SOFY. The “access levels” refer to the actual permissions required to perform the activity.

3 Setting up the analysis through intensive collaboration with both business and IT

When using raw data extracts as described previously, there is a risk that the key user will not recognize the technical names of the menus (or other levels of permissions) when translating critical activities into technical names in the application. In order to prevent that from happening, it is recommended to organize sessions with both IT and the business (primarily key users). During these sessions in the SoD project at the client, the business provided (practical) input by showing how tasks are performed within the application, whereas IT assisted in translating these into the underlying technical permissions and extracting that data. We have called this activity the “technical translation”, shown as step 2 of the “SoD Monitoring Model” discussed earlier (see Figure 2). Walkthroughs have proven to be effective and efficient ways of discovering all tasks that can be performed within an application. Alternatively, which users can execute which tasks can be determined by looking at historical transactional data (and, depending on the information stored, which permissions they used).

The outcome of this analysis should be a conclusion at the same level for each application (e.g. user X is able to execute task 1). Even though the underlying permission structure can be different for each application, it is important to conclude at the same level as input for the SoD analysis itself. In that analysis, combinations of critical tasks at user level are calculated and reported as a “hit” when the SoD policy defines those tasks as conflicting.
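A minimal sketch of this “hit” calculation, assuming the per-application conclusions have already been normalized to employee level and the policy is expressed as forbidden task pairs (all names are illustrative):

```python
from itertools import combinations

# SoD policy: pairs of critical tasks that may not be combined (illustrative).
sod_rules = {
    frozenset({"activate_contract", "approve_payment"}),
    frozenset({"maintain_vendor_master", "approve_payment"}),
}

# Normalized per-application conclusions, already mapped to employee IDs.
employee_tasks = {
    "EMP001": {"activate_contract"},                          # front-office system only
    "EMP002": {"maintain_vendor_master", "approve_payment"},  # ERP + e-banking: cross-system
}

def sod_hits(tasks_per_employee, rules):
    """Report every forbidden combination of critical tasks per employee."""
    hits = []
    for employee, tasks in tasks_per_employee.items():
        for pair in combinations(sorted(tasks), 2):
            if frozenset(pair) in rules:
                hits.append((employee, pair))
    return hits

print(sod_hits(employee_tasks, sod_rules))
# [('EMP002', ('approve_payment', 'maintain_vendor_master'))]
```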

4 Linking users within an application to an employee ID

In order to be able to analyze permissions of the same employee over multiple applications, it needs to be identified which user accounts within the applications belong to the same employee. Within the client’s company, some employees were linked to different usernames across applications due to application-specific constraints or naming conventions. As such, it is recommended to determine a unique employee ID (e.g. personnel number) and link each of the application users to that employee ID. Linking these accounts should preferably be automated in order to prevent manual, repetitive and error-prone work (see Figure 7). Prerequisites for automation would be a logical naming convention and sufficient user details (e.g. e-mail address or personnel number) stored within the application to create the links.

It is recommended to periodically review the list of users for which automated linking to the employee ID could not be performed, as it is a prerequisite for a cross-system SoD analysis. The SOFY solution provides functionality to maintain the user mappings.
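The sketch below illustrates such automated linking, assuming e-mail addresses are available in the user detail extracts (all data is invented); accounts that cannot be mapped are listed for periodic manual review.

```python
# HR master data: employee ID by e-mail address (illustrative).
employees_by_email = {"j.smith@corp.example": "EMP001", "a.khan@corp.example": "EMP002"}

# Raw user extracts from two applications with different naming conventions.
app_users = {
    "frontoffice": [{"user": "JSMITH",    "email": "j.smith@corp.example"},
                    {"user": "SVC_BATCH", "email": None}],
    "ebanking":    [{"user": "jsmit01",   "email": "j.smith@corp.example"}],
}

def map_users_to_employees(app_users, employees_by_email):
    mapping, unmapped = {}, []
    for app, users in app_users.items():
        for u in users:
            employee = employees_by_email.get(u["email"]) if u["email"] else None
            if employee:
                mapping[(app, u["user"])] = employee
            else:
                unmapped.append((app, u["user"]))  # review these accounts periodically
    return mapping, unmapped

mapping, unmapped = map_users_to_employees(app_users, employees_by_email)
print(mapping)    # both JSMITH and jsmit01 resolve to EMP001
print(unmapped)   # [('frontoffice', 'SVC_BATCH')] -- needs manual follow-up
```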

Figure 7. User maintenance function as implemented at the client, automatically mapping application user IDs to an employee.

5 Using tooling to validate the analysis in quick iterations

When discussing application permissions with key users and IT to define the critical tasks (as described in step 4), the discussion can become quite abstract. In order to make the effects of the agreed-upon definitions tangible, we recommend working with tooling. At the client, the SOFY platform was used to demonstrate the effect of including or omitting single permissions from the definition for an application by simulating the results of applying the current definition. SOFY Access Control is a tailored tool with dashboards, KPIs and functions to analyze and dive into SoD conflicts and the underlying user permissions (see Figure 8).

In the client’s SOFY dashboards, it was decided to focus on user- and role-level analysis. This means that the results include the numbers and details of the users and roles with SoD conflicts, so root causes of conflicts can be analyzed at either of these levels. This set-up of the analysis in the tool facilitates completeness in the analysis and follow-up: if no users and no user roles within the applications contain SoD conflicts (including cross-system conflicts), there are no longer any SoD risks.

Once the SoD analysis has been set up initially, and raw data extracts are used as described, the effort needed to automate the extraction process and conduct a cross-system SoD analysis is usually limited. As authorizations change over time, due to employees joining, leaving or moving within the organization, controls related to authorizations and SoDs are typically executed periodically. Doing so contributes to (the demonstration of) more control over the access management domain.

Figure 8. SoD Conflicts Overview dashboard, including timelines and functionalities to filter and “dive” into the results.

6 Following up on results in a phased manner

Once the analysis is done, it is time to start working with the results. The first step should always be to validate the outcome of the analysis. If the analysis turns out to be incorrect, despite the efforts of key users and IT, the analysis itself should be corrected first; following up on incorrect results would be wasted effort, so validating first is the most efficient option. When the analysis is validated and correct, follow-up should be done in a phased manner. We recommend starting the clean-up with the “low-hanging fruit” to get familiar with the way of thinking and to gain some momentum within your organization as a result of the improvement experienced. The following categories are considered “easy” categories:

  1. Inactive users having SoD conflicts
  2. Employees having multiple user accounts
  3. Super users having (all) SoD conflicts
  4. Roles with inherent SoD conflicts

These follow-up activities have taken place in a structured manner in the client’s company, based on the details provided by the analysis and automated results in the SOFY application. For each of the abovementioned follow-up categories, clear dashboards, KPIs and overviews were created.

7 Assigning responsibilities at a local, proper level

In order to address SoDs and resolve possible SoD conflicts, it is critical to have good operational governance. The client started with a project in which creating a sense of urgency and assigning local responsibility for each of the company locations involved were important goals. The right tone at the top and proper post-project day-to-day governance made sure the organization kept paying attention to SoDs. To stay in control, the responsibilities of the three lines of defense were described as follows:

  • 1st line (business and IT): Review and resolve SoD conflicts, provide functional sign-off on the translation of roles/permissions into critical tasks (yearly process), advise within key IT projects (e.g. the risk/authorization work stream), etc.;
  • 2nd line: Maintain the SoD Policy, authorize possible exceptions, validate business controls (e.g. mitigating controls) of locations/countries, etc.;
  • 3rd line: Conduct periodic reviews on implementation and effectiveness of SoD controls, perform reviews on mitigating controls, etc.

As already addressed in lesson 5, the use of tooling is recommended to make this governance structure and these processes feasible. Especially for the tasks outlined above, those involved should have access to proper tooling. For instance, the second line at the client has access to a KPI summary dashboard for monthly monitoring of targets and managing the results (see Figure 9).

Figure 9. Summary Dashboard of SOFY Access Control.

8 Only accept mitigation after exploring remediation options

Once responsibilities are assigned and the organization operates according to the designated lines of defense (see the previous lesson), there are multiple ways to respond to SoD conflicts: either remediation of the conflict by resolving the root cause, or mitigation by minimizing or removing the resulting risk of the SoD conflict. Before handing out targets on reducing the number of conflicts, it is beneficial to reflect on the desired solution to the SoD conflicts first. Whereas mitigation of SoD conflicts can often be arranged quickly by implementing an additional check or another control (e.g. periodically taking a sample of transactions and checking their validity), this might not be the preferred approach for an overall SoD solution. Especially when performing cross-system SoD analysis, there might be several different routes to an SoD conflict (when a task can be executed in multiple applications, this results in multiple routes to the same SoD conflict), each of which might require its own specific additional control.

Remediation is often slightly more difficult. Adjusting the roles within the application, removing permissions from users (which can be accompanied by challenging discussions on why they do need the access) or adding a control within the application all require more effort than coming up with an additional control. However, these solutions do tend to provide a more permanent resolution of the SoD conflict, saving the periodic effort of having to perform the additional control. The SOFY platform was leveraged in the project to bring down the effort needed for remediation as well, for example by simulating the effects of removing permissions from roles.

9 Utilizing the available data for additional insights

Gathering the authorization data of multiple applications in a single location, in combination with increased awareness of and momentum on access management, can also result in non-SoD-related improvements. For example, the same data needed to analyze the permissions of a user or a role within an application can also be used to perform the regular user/role reviews that are part of the access management controls. Even better, instead of reviewing all individual permissions contained in a role or assigned to a user, the review can also be performed at the critical activity level, making it more efficient.

Secondly, due to the links between application users and employees, it is very easy to detect whether any employees have multiple user accounts for a single application. This also helps to clean up the system as part of user management activities. Lastly, when combined with sources such as the Active Directory, interesting new insights can be generated. Employees that have left the organization (and are marked as such in the Active Directory) but are still linked to active users in applications can easily be listed. These exception reports help to keep application authorizations clean and to reduce SoD conflicts.
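
A minimal sketch of such an exception report, assuming a simple extract of Active Directory statuses and of application accounts already linked to an employee ID (all field names are hypothetical):

```python
# Hypothetical Active Directory extract: employee ID and whether the person has left.
active_directory = {
    "E001": {"status": "active"},
    "E002": {"status": "left"},
}

# Hypothetical list of active application accounts already linked to an employee ID.
application_users = [
    {"app": "ERP-A", "user": "JDOE", "employee_id": "E001"},
    {"app": "ERP-B", "user": "ASMITH", "employee_id": "E002"},
]

# Exception report: active application accounts belonging to employees who have left.
exceptions = [
    u for u in application_users
    if active_directory.get(u["employee_id"], {}).get("status") == "left"
]
for u in exceptions:
    print(f"Leaver with active account: {u['employee_id']} ({u['app']}/{u['user']})")
```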

These examples were implemented as KPIs in a remediation dashboard in the SOFY tool of a financial services company. Figure 10 shows an overview of what such a dashboard would look like. The left-hand blue-colored column highlights the KPI actuals, whereas the middle column indicates the target value, and the right-hand column defines the KPI and advises on the required remediation action.

Figure 10. Remediation Dashboard of SOFY Access Control.

10 Aiming for a sustainable solution

Finally, when the analysis has been set up at the right level of detail, has been successfully automated and follow-up has been operationalized, one final aspect has to be taken into consideration. The object of the analysis (i.e. the IT environment analyzed for SoD conflicts) is subject to constant change. New functionality might be added, entire application systems might be phased out or introduced; all of this should be reflected within the SoD analysis.

We recognize three different mechanisms to keep the cross-system analysis aligned with reality:

  1. The ongoing process of change management applicable to IT environments. Typically, changes are generated by well-structured processes or projects (depending on the size of the change) and should be evaluated for SoD relevance. If relevant, these changes should be communicated from the change process so they can be included in the analysis.
  2. The second mechanism functions as a backup for the first and consists of a periodic review of all definitions applied in the SoD analysis. By periodically (for example yearly) distributing the current definitions to be confirmed per application, it is ensured that any changes missed by the first mechanism are identified and that the definitions stay up to date.
  3. As definitions are not the only critical input to a successful SoD analysis, other elements, such as the linkage between application users and employees, need to be maintained as well. When a new employee joins the organization and obtains a new user account, it needs to be (automatically) linked to the correct person. By including these prerequisites in the reported SoD KPIs, it is enforced that these critical inputs are maintained throughout the year.

Conclusion

When an organization with a complex IT environment encounters challenges relating to access rights and SoDs, it is advised to use a platform to support the analysis. Moreover, the right platform will provide an organization with the tools to maintain the policies and (technical) rules on which the SoD analyses are based. The analysis should reveal conflicts regarding SoD and critical access, and include information on the related users, applications, roles and access rights. This information can be used to either remediate or mitigate conflicts. To monitor SoDs in a structured manner, it is key to automate the analyses and follow up on time. The 10 lessons learned described above provide practical tips that give organizations a head start in their approach to SoDs and help them effectively demonstrate compliance.

We would like to thank Dennis Hallemeesch and Nick Jenniskens for their contribution to this article.

Reference

[Gunt19] Günthardt, D., D. Hallemeesch & S. van der Giesen (2019). The lessons learned from did-do analytics on SAP. Compact 2019/1. Retrieved from: https://www.compact.nl/articles/the-lessons-learned-from-did-do-analytics-on-sap/

Exploring digital: empowering the Internal Control Function

The Internal Control Function, or second line of defense, is a vital part of the organization, tasked with devising and improving measures to prevent fraud, helping the company adhere to laws and regulations and improving the quality of internal financial reporting. The world is watching: companies, especially the larger ones, have to adhere to global and local laws and to the expectations of the general public, shareholders, auditors, employees, the supply chain and other stakeholders. This internal and external pressure is causing the Internal Control Function to feel the urge to improve its way of operating. This article provides insight into different digitalization options that help improve the way the Internal Control Function operates, with the purpose of inspiring you to digitize your Internal Control Function too.

Introduction

The Internal Control Function (ICF) uses a wide set of controls to make sure business and compliance risks are prevented or dealt with in the interest of the company’s well-being. Many of these controls are manual. Relying largely on manual controls costs a great deal of effort, time and money. Not only are these controls more time-consuming and costly, they also cannot absorb the increasing complexity of today’s business environments in time, possibly leaving the company exposed to risks.

Let’s see what is happening in the ICF market domain. The recently published Governance, Risk & Compliance (GRC) survey by KPMG ([KPMG19]) was initiated to get better insight into the maturity of GRC, the level of internal controls and the adoption of HANA among organizations running SAP within the EMA region. More than 40 large organizations running SAP were asked to participate in this survey. Relevant conclusions for the ICF:

  • Approximately 20% of these companies don’t have a centralized internal control repository
  • Approximately 50% of these companies have less than 10% of their controls automated
  • Approximately 70% of these companies identify control automation as a top priority
  • Approximately 50% of these companies want to reduce their control deficiencies

According to this survey, it seems that while the top priorities of companies include further automation and reducing control deficiencies, the actual number of companies relying heavily on automation, using digital solutions, is low. While the relevance of digitizing seems evident, it is difficult to start, given the large landscape of applications and the many different control options available to an organization. A logical question to ask is: how can we start to digitize our ICF?

This article shares client stories of ICFs that used digital options (simply put: tools) to improve the way they operate, across a variety of industries. We outline how digital options can be used to lower the cost of control and improve the level of assurance for four different control types, and which pitfalls should be avoided. We also share relevant lessons learned and next steps based on our own experiences.

Digitalization options for the Internal Control Function

Digitalization of the ICF can be achieved in different forms. Some organizations start by implementing a tool or system to centrally manage and govern their risk and control framework. Others choose an end-to-end transformation, where various tools and systems are integrated with each other, controls are automated and manual activities are supported by robotic process automation and low-code platforms. In the end, all companies try to achieve the same goal: increase their level of assurance while decreasing their cost of control. To help reach this goal, we analyze a number of digitalization options using the CARP model. This model helps to categorize the different control types that will effectively reduce risks within a process or process step. CARP stands for Configuration, Authorization, Reporting and Procedural (manual), which represent different types of controls (see Figure 1).

Figure 1. CARP model.

In Figure 1, the left side (C+A) of the model represents more technical controls, which can often be implemented directly in the ERP system, whereas the right side (R+P) of the model represents organizational controls which are embedded in daily business activities. Furthermore, configuration and authorization controls are preventive in nature while reporting and procedural controls are detective in nature. For each of the control types indicated in the model, there are digitalization opportunities. In the next section, some examples and use cases are provided for each category to provide a peek into the different options.

Configuration controls

Configuration controls relate to the settings of an ERP system that can help prevent undesirable actions or enforce desirable actions in the system. Such configurations exist for the business processes handled in the ERP system as well as for the IT processes related to the ERP system. A distinction can therefore be made between “business configuration controls”, also known as application controls, and “system configuration controls”.

Examples of business configuration controls:

  • Mandatory fields, such as a bank account number when creating a new vendor in the system; these settings make sure no critical information is missing.
  • Three-way match; this enforces that the purchase order, goods receipt and invoice document postings are matched (within the tolerance limits) with regard to quantity and amount.

Examples of system configuration controls:

  • Password settings, such as the SAP parameters “login/min_password_lng” or “login/min_password_letters”, determine system behavior such as the length of a password or the minimum number of letters used.
  • More general system settings, such as the number of rows that can be exported in a Microsoft Dynamics D365 environment, determine a part of the system stability and are governed to make sure system performance is not impacted by frequent large exports.

In short, configuration controls are automated and preventive in nature, which helps organizations stay in control without tying up FTEs to execute the controls. This type of control can therefore be used to reduce the cost of control and increase assurance levels. However, there is a catch: how can the organization prove that its automated configuration controls are set up correctly? And how does it prove that this has been the case over time? To show how digital solutions can help answer this question, we present a use case of a large multinational where SAP Process Control was implemented to monitor the system configuration controls of 20 SAP systems.

Use case: using tools to go from quarterly parameter checks to continuous monitoring

Context

A large multinational with over 10 billion euros in revenue operates over 20 centralized SAP systems. For each of these SAP systems, the (security) parameter settings, such as client logging (rec/client) or password settings (login/min_password_lng), needed to be monitored in order to adhere to the company’s SAP security baseline. This baseline covers over 100 (security) parameter settings, which resulted in a lot of pressure on the testing resources.

State before Process Control

Before SAP Process Control was used, the 100+ security settings for each centralized SAP system were reviewed once per quarter. The review was performed manually and documented by creating screenshots of each relevant system setting, resulting in documents of over 100 pages per system. The follow-up on findings of these reviews was limited and rarely documented. If changes had occurred during the quarter (e.g. a setting was changed to an incorrect value and changed back to the correct value just before the review), there was no way to detect them.

State after Process Control

By using continuous monitoring via SAP Process Control, the system (security) parameters are now monitored on a weekly or monthly basis (depending on the risk profile) and, on top of that, all changes made to parameters are reported. Furthermore, the monitoring is now exception based. This means that parameters which are set to the correct values are passed and reported as effective, whereas parameters that are set to an incorrect value are reported as deficient and escalated through a workflow. The workflow requires a follow-up action by the system owner, which is then captured in SAP Process Control.
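
The exception-based evaluation can be pictured with a small sketch: each extracted parameter value is compared against the security baseline, compliant parameters are reported as effective and deviations are flagged for follow-up. The parameter names and baseline values below are only illustrative and do not represent the client’s actual baseline.

```python
# Illustrative security baseline: expected values per SAP profile parameter.
baseline = {
    "login/min_password_lng": lambda v: int(v) >= 8,  # minimum password length
    "rec/client": lambda v: v in ("ALL", "300"),      # client logging enabled
}

# Illustrative extract of current parameter values for one SAP system.
current_values = {
    "login/min_password_lng": "6",
    "rec/client": "ALL",
}

for parameter, is_compliant in baseline.items():
    value = current_values.get(parameter)
    if value is not None and is_compliant(value):
        print(f"EFFECTIVE  {parameter} = {value}")
    else:
        # Deviations would be escalated to the system owner via a workflow.
        print(f"DEFICIENT  {parameter} = {value}")
```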

Key benefits

By shifting the monitoring to SAP Process Control, the cost of control decreased while the assurance over the controls increased. By automating the parameter monitoring, the focus shifted towards exceptions and the follow-up thereof. In the new situation, all results are also better auditable and more useful for the external auditor.

In this specific case, the client used SAP Process Control to perform continuous monitoring on their system configuration controls.

System authorization controls

Authorization controls are preventative measures taken to control the content of technical “roles” and users’ access to those roles, with the intent of making sure the right people can execute the right actions. In their efforts to manage access, companies generally make use of the authorization controls below:

  • Segregation of Duties controls. The ability to change vendor bank accounts is limited to a technical system role related to master data management. That role is only assigned to personnel in the master data management department, to people who are not directly processing transactions. Another role is limited to creating purchase orders. This limits the risk that one person can change a vendor bank account into a private account and create a purchase order against it for a fraudulent pay-out. The outcome is that certain activities or “duties” are segregated. Issues like the one in this example are called Segregation of Duties (SoD) conflicts.

    [Vree06] zooms in on the relevance of Segregation of Duties and its impact, along with multiple improvement suggestions, and [Zon13] dives into solutions for managing access controls.
  • Sensitive Access controls. Updating credit management settings often falls under the Sensitive Access controls. These controls are essentially lists of actions that can have major impact on the business, and access to it should therefore be limited and closely monitored. Unlike SoDs, this is a single specific action. In the example of credit management, access to the transaction code and object in SAP and the Permission in Microsoft D365 are normally monitored periodically, where any users or roles having this access are screened and adjusted where needed.

In short, authorization controls are very similar to configuration controls because they are part of the system and, once created and assigned, they automatically do their job. These controls are strong preventive controls if set up correctly. As in the case of configuration controls, there is a catch: how can the organization prove that the authorization controls are set up correctly, and how does it prove that this has been the case over time? To show how digital solutions can help answer this question, we present a use case where a company used Access Control tooling to assist in managing its access controls and making sure its roles are SoD-free or SoD-mitigated, in a way that increases assurance and lowers the cost of control.
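
To make the screening of such authorization controls more concrete, the sketch below shows a simple periodic sensitive access check that lists which users can reach a sensitive action through their roles; the role and permission names are hypothetical and the logic is greatly simplified compared to dedicated Access Control tooling.

```python
# Hypothetical role-to-permission assignments extracted from an application.
role_permissions = {
    "MasterDataMaintainer": {"ChangeVendorBankAccount", "DisplayVendor"},
    "Purchaser": {"CreatePurchaseOrder"},
    "CreditManager": {"UpdateCreditManagementSettings"},
}

# Hypothetical user-to-role assignments.
user_roles = {
    "JDOE": {"MasterDataMaintainer"},
    "ASMITH": {"Purchaser", "CreditManager"},
}

# Sensitive actions whose access should be limited and reviewed periodically.
sensitive_actions = {"UpdateCreditManagementSettings", "ChangeVendorBankAccount"}

for user, roles in user_roles.items():
    permissions = set().union(*(role_permissions[r] for r in roles))
    hits = permissions & sensitive_actions
    if hits:
        print(f"Review sensitive access for {user}: {sorted(hits)}")
```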

Use case

Background Sofy

The Sofy platform is a KPMG proprietary SaaS platform, hosting solutions in areas where KPMG built expertise over the course of years. Solutions on the Sofy platform aim to provide relevant insights into business-critical processes as well as triggering relevant follow-up actions by end users with the help of workflows, tasks and notifications.

Context

A large multinational operates in more than 150 countries and has annual global revenues of over 50 billion euros. The core application landscape of the customer consists of 9 SAP production systems. Access of all users to these systems is to be monitored to limit extensive conflicting access rights and to trigger quick resolution of access-rights-related issues by the appropriate end users.

State before KPMG Sofy Access Control

The organization struggled to get a reliable view of access risks within and between its business-critical applications. Its previous solution only looked at the SAP landscape and analyzed the production systems in an isolated way, without taking into account that users may have conflicting access rights across multiple systems. There was a strong desire, driven by Internal Control and findings from the external auditor, to get better insight into conflicting authorizations within the full SAP development stack as well as other business-critical applications. Issues often existed with users that had access to multiple business-critical applications and as such could perform conflicting activities in multiple systems. With the existing solution, the company was unable to detect these issues.

State after KPMG Sofy Access Control

By implementing the Sofy Access Control solution:

  • transparency has been created within the complete SAP landscape;
  • a preventive SoD check is now running continuously for every access request;
  • conflicting user authorizations resulting from role assignments are being reviewed and approved before they are actually assigned in the underlying system, to prevent unauthorized access for end users;
  • conflicting user authorizations are being reviewed continuously to ensure accurate follow-up takes place in terms of risk acceptance, mitigation or remediation.

Key benefits

The solution helped the client gain control over their authorization controls because:

  • it increased transparency in conflicting access across the full SAP stack;
  • continuous monitoring of each of these systems ensures quick resolution and remediation of access-related risks;
  • preventive SoD checks make sure unauthorized access to the system is avoided, as the impact of role changes or role assignments is clear upfront.

The implementation of this digital solution has shifted the mindset from taking remedial actions in a reactive way to pro-actively avoiding and mitigating access-related issues.
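
The preventive check mentioned above can be sketched as follows: before a role is assigned, the critical tasks granted by the requested role are combined with the tasks the user can already perform and evaluated against the SoD ruleset. The data structures are hypothetical and much simplified compared to the Sofy solution.

```python
# Hypothetical critical tasks already reachable by the user (cross-system view).
existing_tasks = {"Create vendor"}

# Hypothetical critical tasks granted by the requested role.
requested_role_tasks = {"Post payment", "Display vendor"}

# Hypothetical SoD ruleset: pairs of tasks that may not be combined.
sod_rules = [("Create vendor", "Post payment")]

def preventive_sod_check(existing, requested, rules):
    """Return the conflicts the assignment would introduce, before it is made."""
    combined = existing | requested
    return [pair for pair in rules if pair[0] in combined and pair[1] in combined]

conflicts = preventive_sod_check(existing_tasks, requested_role_tasks, sod_rules)
if conflicts:
    print("Request routed for review/mitigation:", conflicts)
else:
    print("Request can be approved automatically.")
```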

Reporting controls

Reporting controls are pieces of information combined into a format that allows a user to draw conclusions about, for example, the effectiveness of a process or the state of the financials. They are used to detect anomalies so that action can be initiated. Examples are:

  • Related to the SoD example mentioned under authorization controls, a manager in the internal control department wants to know how many SoD conflicts were reported last month, the actions that were taken to fix them, the status of those actions and the real risk the business is now facing. A dashboard, for example in BI tooling such as Microsoft Power BI, Tableau or Qlik Sense, or as part of a risk platform such as the earlier mentioned Sofy or SAP Process Control, can be a great way to visualize and report on the status of the authorization controls. Especially on this topic, many lessons can be learned, and we highly recommend reading the 10 most valuable tips for analyzing actual SoD violations ([Günt19]).
  • A manager in finance running a report that checks the systems for duplicate invoices, so that double payments to vendors can be prevented or payments of duplicated invoices can be recovered from vendors that were paid twice.

In short, reporting controls are generally detective in nature as they present information about something that already has occurred. While the Configuration and Authorization controls try to prevent risks, there is always a residual risk and this is where reporting controls come in, to detect any mistakes or fraudulent behavior that got past the preventative controls.

These reporting controls can be very strong when the executors of those controls are supported by strong dashboards and analytics. In the Compact special of January 2019, [Zuij19] presented a case on advanced duplicate invoice analysis. The article explains how smart digital tooling was implemented to create a duplicate invoice analysis at a major oil company. We advise reading this in-depth case, because it provides helpful guidance on how reporting controls can be digitized to surface conclusions that were invisible or inaccessible before. This directly increases the assurance level of this type of control, because insight is provided that wasn’t there before, and at the same time it decreases the cost of control, opening up the possibility to recover any invoices that were paid twice (a minimal sketch of such a duplicate invoice check follows the list below). Additional examples are:

  • Automating the running and sending of reports using SAP Process Control;
  • Creating an analysis to identify the use of discounts in the sales process using SAP HANA or SQL;
  • Unlocking faster decision-making by providing the organization with real-time overviews of the state of internal controls. This can be achieved with a live dashboard using Microsoft Power BI in combination with OutSystems and SAP Process Control, or with KPMG Sofy.
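
The duplicate invoice report mentioned above could be prototyped along the lines of the sketch below, which flags invoices from the same vendor with the same amount and a reference that differs only in formatting; a production analysis, such as the one described in [Zuij19], would use far richer matching logic.

```python
import re
from collections import defaultdict

# Hypothetical invoice extract: vendor, reference, amount, posting date.
invoices = [
    {"vendor": "V100", "reference": "INV-001", "amount": 1500.00, "date": "2020-01-10"},
    {"vendor": "V100", "reference": "INV001",  "amount": 1500.00, "date": "2020-01-12"},
    {"vendor": "V200", "reference": "A-77",    "amount": 980.50,  "date": "2020-02-01"},
]

def normalized_reference(reference):
    # Strip punctuation and leading zeros so "INV-001" and "INV001" compare equal.
    return re.sub(r"[^A-Z0-9]", "", reference.upper()).lstrip("0")

candidates = defaultdict(list)
for invoice in invoices:
    key = (invoice["vendor"], invoice["amount"], normalized_reference(invoice["reference"]))
    candidates[key].append(invoice)

for key, group in candidates.items():
    if len(group) > 1:
        print("Potential duplicate payment:", [i["reference"] for i in group])
```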

Procedural (manual) controls

Procedural controls are manual actions performed by a human, initiated to prevent or detect anomalies in processes. They help companies cover residual risks that are not easily covered by Configuration, Authorization or Reporting controls. Examples of manual controls are:

  • Signing off documents such as contracts, large orders, etc.;
  • Manual reconciliation of received payments against invoices, with or without the use of a system;
  • Manual data alignment between the sales system and invoice system.

Controls executed by humans, such as reporting and procedural controls, carry, compared to controls executed by a machine, the inherent risk of the human operator making mistakes, because making mistakes is human, especially when the complexity and repetitiveness of a control increase. As business complexity and data volumes increase, companies are now looking into solutions that can replace or enhance human-operated controls with automated, digitized ones.

To show how manual controls can be improved using digital solutions, we present a use case focused on reducing manual actions, or at least reducing the effort and increasing the quality of their output. In this case, Robotic Process Automation (RPA) tooling was used to automate manual journal entries, resulting in fewer manual control actions. This reduces the cost of control because fewer FTEs are required to operate the control. Secondly, the level of assurance increases, as a robot will not make mistakes even when the repetitive task is executed hundreds of thousands of times.
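
Before a robot posts a journal entry, the standardized template can be validated automatically. The sketch below illustrates the kind of checks that might be built in (mandatory header fields filled, debits equal to credits); the field names are illustrative and not the client’s actual template.

```python
# Hypothetical journal entry taken from a standardized input template.
journal_entry = {
    "header": {"company_code": "NL01", "posting_date": "2020-03-31", "reference": "ACCRUAL-42"},
    "lines": [
        {"account": "600000", "debit": 1000.00, "credit": 0.00},
        {"account": "200000", "debit": 0.00, "credit": 1000.00},
    ],
}

def validate_journal_entry(entry):
    """Return a list of validation errors; an empty list means the robot may post it."""
    errors = []
    for field in ("company_code", "posting_date", "reference"):
        if not entry["header"].get(field):
            errors.append(f"Missing header field: {field}")
    total_debit = sum(line["debit"] for line in entry["lines"])
    total_credit = sum(line["credit"] for line in entry["lines"])
    if round(total_debit - total_credit, 2) != 0:
        errors.append(f"Entry not balanced: debit {total_debit} vs credit {total_credit}")
    return errors

print(validate_journal_entry(journal_entry) or "Entry passes validation")
```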

Use case: automating manual journal entries at a large telecommunication organization

Context

During a large finance transformation at one of the biggest Dutch telecom companies, KPMG was asked to help identify opportunities for automation within the finance department. During an assessment at the client, the processing of manual journal entries was identified as a suitable process for automation with the use of Robotic Process Automation (RPA), because of the highly repetitive nature of the process and the high business case value, as the process is time-consuming and error-prone. To show the viability of RPA within the organization and the potential benefits for the client, a Proof of Concept was initiated.

State before using RPA tooling

  • A large finance team performing manual repetitive tasks daily;
  • A low first-time-right percentage for manual journal entries, which leads to rework;
  • An inefficient input template for the creation of manual journal entries;
  • Multiple human control steps embedded in the process to check journal entries before recording, which is time consuming;
  • No clear visibility for management of previously recorded manual journal entries.

State after the implementation

  • Standardized a manual journal entry template for the usage of RPA
  • Automated the booking of manual journal entries using RPA software
  • Eliminated unnecessary steps within the manual journal entry process

Key benefits

  • A higher first-time-right percentage due to fewer errors made in the process as a result of automating it using RPA;
  • One fully automated process, which resulted in an FTE reduction;
  • Less human intervention necessary due to higher data quality caused by robotic input, which is more stable and less prone to error;
  • Automatically generated reports that can be used for a better audit trail and management reporting.

Lessons learned: digitizing the right way

Like other projects, digitalization projects can be challenging and will have pitfalls. In this section, we will provide examples of the pitfalls we encountered, and explore how they can be prevented.

Determine the baseline

In several cases, the goal of a project is set without first analyzing the starting point. This can result in unachievable goals, which will cause the project to fail. For example, if the goal of a digitization project is “we would like to fully automate the testing for 50% of the controls in our central control framework using tool XYZ”, there are several prerequisites to achieve that goal:

  1. Tool XYZ should be capable of automating the testing for these controls.
  2. The feasibility of automating the testing of controls in the central control framework should be determined beforehand.
  3. The end users should be part of the digitization journey to make sure they understand the tool and how it can be embedded in their process.

In this example, setting the baseline would consist of analyzing the controls for feasibility of automation in general, then checking whether tool XYZ is capable of facilitating the intended automation, and then engaging the business users before the project starts.

Once the baseline is determined, an achievable goal can be set, and the project will have a higher chance of success.

A fool with a tool is still a fool

The market is full of tools and technology solutions, some with a very broad range of services, some with a specific focus. Each of these tools has its strong points and weaknesses, and they are often sold using accompanying success stories. However, even the best tool, if used in the wrong way, won’t be successful.

As an example, consider SAP Access Control, a tool which can be used to monitor potential Segregation of Duties conflicts in an ERP system. When the tool reports SoD conflicts, an end user should follow up on them. In SAP Access Control, a user has the ability to assign a control to an access risk to let the system know the risk has been mitigated. In reality, many users assign a control in the system merely to hide the results of SAP Access Control by showing them as “mitigated”, while the actual risk in the ERP system is still there because the control is not really executed. In such cases, the tool is only as good as the way the end user decides to use it. Good examples of how to use this tool properly can be found in the article by van der Zon, Spruit and Schutte ([Zon13]).

To ensure that a tool or solution is used in the right way, make sure that the end users are involved and properly trained in the tool or solution. If they see the benefits, adoption will be easier and the tool or solution can be used to its full extent, moving the organization towards a more digitized way of working.

Who controls the robot?

Governance is an important topic in relation to technology solutions. Even in a landscape where controls are tested automatically, reports are generated by a data analytics platform and manual tasks are performed by robots, there is still manual intervention by humans.

The automated testing of controls needs to be configured in the tool or solution. As part of the implementation project this is probably tested and reviewed, but what happens after that? How do organizations make sure that nobody changes the rule-based setup for the automated testing of the controls? This question is relevant for every tool or technology solution used for the digitization of processes. If there are separate robots to perform conflicting activities within a process, but both robots can be accessed by the same person, the conflict and the underlying risks still exist. To resolve this, proper governance of the technology solution or tool should be put in place.

Working together

In larger corporations, each department might have their own budget and their own wishes and requirements. However, if each department is working on digitizing individually, a lot of effort is wasted. Re-inventing the wheel is costly and will slow down overall progress.

When forces are combined, requirements are bundled and effort is centralized, digitizing the processes will make more sense and implementation can be faster and cheaper. It is about connecting individual digitizing efforts to reach the next level.

Conclusion

The digitalization of internal control entails more than selecting and implementing a new tool and learning how to use it; it is the use of digital technologies to change the way the business or a department works and to provide new value-producing opportunities for the company. Onboarding new tooling will therefore require enhancements to your operating model to be set up for success.

Different aspects of the operating model need to be considered. Think of the potential impact on the ICF when automation changes the relationship between the business and internal control. People and skills are impacted when internal control takes a role in the configuration or maintenance of automation rules, requiring certain technical capabilities and skillsets. In today’s digitalization, a more agile way of working is usually better, potentially impacting the required capabilities of the internal controllers. From a technology perspective, automation has a major impact because of the integration within, or connection with, the existing IT landscape. This is even more the case when RPA is used, impacting aspects such as governance, maintainability and security. Finally, automation within the internal control realm will have an impact on the current way of reporting, also considering auditability.

Taking the time and effort to define the impact on the operating model of your ICF and to devise a detailed plan on how to use digital control options is the key to success.

Acknowledgements

The authors would like to thank Sebastiaan Tiemens, Martin Boon, Robert Sweijen and Geert Dekker for their support in providing use cases, feedback and co-reading the article.

References

[Cool18] Coolen, J., Bos, V., de Koning, T. & Koot, W. (2018). Agile transformation of the (IT) Operating Model. Compact 2018/1. Retrieved from: https://www.compact.nl/articles/agile-transformation-of-the-it-operating-model/

[Günt19] Günthardt, D., Hallemeesch, D. & van der Giesen, S. (2019). The lessons learned from did-do analytics on SAP. Compact 2019/1. Retrieved from: https://www.compact.nl/articles/the-lessons-learned-from-did-do-analytics-on-sap/?highlight=The%20lessons

[KPMG19] KPMG (2019, May). Survey – Governance, Risk and Compliance. Retrieved from: https://assets.kpmg/content/dam/kpmg/ch/pdf/results-grc-survey-2019.pdf

[Vree06] Vreeke, A. & Hallemeesch, D. (2006). ‘Zoveel functiescheidingsconflicten in SAP – dat kan nooit’, en waarom is dat eigenlijk een risico? Compact 2006/2. Retrieved from: https://www.compact.nl/articles/zoveel-functiescheidingsconflicten-in-sap-dat-kan-nooit-en-waarom-is-dat-eigenlijk-een-risico/?highlight=hallemeesch

[Zon13] Van der Zon, A., Spruit, I. & Schutte, J. (2013). Access Control applicaties voor SAP. Compact 2013/3. Retrieved from: https://www.compact.nl/articles/access-control-applicaties-voor-sap/

[Zuij19] Zuijderwijk, S. & van der Giesen, S. (2019). Advanced duplicate invoice analysis case. Compact 2019/1. Retrieved from: https://www.compact.nl/articles/advanced-duplicate-invoice-analysis-case/?highlight=Advanced

Transaction monitoring model validation

The bar for transaction monitoring by financial institutions has been raised during the past decade. Recently, several banks have been confronted with high fines relating to insufficient and ineffective transaction monitoring. There is an increasing number of regulators that expect (mainly) banks to perform self-attestations with respect to their transaction monitoring models. This is, however, a complex exercise with many challenges and pitfalls. This article aims to provide some guidance regarding the approach and methods.

Introduction

Many people consider financial crime such as money laundering to be a crime without real victims. Perhaps a large company loses money, or the government collects less tax, but nobody really suffers true harm. Sadly, this is far from the truth. From human trafficking to drug wars and child labor, the human cost of financial crime is very real and substantial. Financial crime is therefore considered a major problem by governments around the world. As a consequence, increasingly strict regulations regarding transaction monitoring have been imposed on the financial industry since the beginning of the financial crisis, as financial institutions are the gatekeepers to the financial system. These regulations have predominantly, although not exclusively, an effect on banks. Financial institutions are increasingly confronted with complex compliance-related challenges and struggle to keep up with the development of regulatory requirements. This especially applies to financial institutions that operate on a global level and that are using legacy systems. The penalties for non-compliance are severe, as demonstrated by, among others, UBS with a fine of 5.1 billion US dollars and a case in the Netherlands where ING Bank settled for €775 million with the public prosecutor. As time progresses, the bar for financial institutions is being raised even higher.

In 2017, the New York State Department of Financial Services (NYDFS) Part 504 rule became effective. NYDFS Part 504 requires – starting in 2018 – that the board of directors or senior officers annually sign off on the effectiveness of the transaction monitoring and filtering processes, and on a remediation program for deficiencies regarding internal controls. The nature of the NYDFS Part 504 rule is similar to that of the SOx act. This seems to be a next step in transaction monitoring regulatory compliance requirements. The Monetary Authority of Singapore, for example, has increased its focus both on anti-money laundering compliance and on independent validation of models. In the Netherlands, De Nederlandsche Bank (DNB), as supervisory authority, issued a guideline in December 2019 ([DNB19]) regarding, for now, voluntary model validation with respect to transaction monitoring.

Given the increased attention for transaction monitoring and model validation (self-attestation), this article zooms in on the way model validations for transaction monitoring can be approached. The next section contains an overview of the compliance framework for transaction monitoring, after which the common pitfalls and challenges for model validations are discussed. The five-pillar approach of KPMG, which enables financial institutions to cope with these pitfalls and challenges, is then explained, together with an outlook on the near future regarding transaction monitoring and technologies for model validation. Finally, a conclusion is provided.

High-level transaction monitoring process

Figure 1. High-level overview of the transaction monitoring process ([DNB17]).

Before discussing model validation in more detail, it is helpful to provide a high-level overview of the transaction monitoring process as an example of a compliance model. Figure 1 contains a graphical high-level overview. The SIRA (Systematic Integrity Risk Analysis) and the transaction monitoring governance are at the basis of the process. When transactions are initiated, pre-transaction monitoring activities are triggered (e.g. with respect to physical contact with the client, trade finance or sanctions) based on business rules. This might result in alerts, which are followed up in accordance with the governance and escalation procedures.

Inbound and outbound transactions (“R.C. Mutations”) are processed, after which post-transaction monitoring activities are triggered based on business rules, again resulting in potential alerts, which are followed up and, if required, reported to the FIU (Financial Intelligence Unit, the authority to which unusual transactions related to money laundering and terrorism financing are reported).

Parallel to the daily activities, a data-driven learning and improvement cycle is in place in order to decrease false positive and false negative alerts and to increase efficiency.
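
As a stylized illustration of rule-based post-transaction monitoring, the sketch below applies a simple threshold rule and a high-risk-country rule to a handful of transactions and generates alerts for follow-up; real business rules, thresholds and customer segmentation are of course far more sophisticated.

```python
# Hypothetical transactions: customer, amount (EUR) and counterparty country.
transactions = [
    {"customer": "C1", "amount": 15000, "country": "NL"},
    {"customer": "C2", "amount": 9500, "country": "XX"},
    {"customer": "C2", "amount": 9400, "country": "XX"},
]

def rule_large_amount(tx):
    # Illustrative business rule: single transaction at or above a threshold.
    return tx["amount"] >= 10000

def rule_high_risk_country(tx):
    # Illustrative business rule: counterparty in a high-risk jurisdiction.
    return tx["country"] in {"XX"}

rules = {"LARGE_AMOUNT": rule_large_amount, "HIGH_RISK_COUNTRY": rule_high_risk_country}

alerts = []
for tx in transactions:
    for rule_name, rule in rules.items():
        if rule(tx):
            alerts.append({"rule": rule_name, "customer": tx["customer"], "amount": tx["amount"]})

# Alerts would be handled according to the governance and escalation procedures,
# and reported to the FIU where required.
for alert in alerts:
    print(alert)
```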

Compliance framework

Banks and other financial institutions use a multitude of models to perform quantitative analyses, for example for credit risk modeling. In response to the increased reliance on such models, different regulators as well as other (inter)national organizations have issued regulations and guidance in relation to sound model governance and model validation.

Within the compliance domain, we see an increasing reliance on compliance models, such as transaction monitoring systems, client risk scoring models or sanction screening solutions. These models are used to ensure compliance with laws and regulations related to, among others, money laundering, terrorism financing and sanctions. While these models are intended to mitigate specific integrity-related risks, such as the facilitation of payments related to terrorism, the usage of such models introduces model risk and, if not handled well, can result in unjustified reliance on the model. Therefore, the model-related guidance, whether specifically related to the compliance domain or more general, is equally relevant for compliance models. Examples include Bulletin OCC 11-12 from the Federal Reserve and the Office of the Comptroller of the Currency, and the Guidance for effective AML/CFT Transaction Monitoring Controls by the Monetary Authority of Singapore. The DNB has presented guidance on the post-event transaction monitoring process for banks on how to set up an adequate transaction monitoring model and related processes, including a solid Three Lines of Defense.

Internationally, different regulators have not only issued guidance in relation to model risk and sound model governance. They have additionally introduced, or are requesting, reviews, examinations and even mandatory periodic attestations by the board of directors or senior officers to ensure that compliance models are working as intended and that financial institutions are in control of these models. For example, the New York Department of Financial Services (NYDFS) requires senior officers to file an annual certification attesting to compliance with the NYDFS regulations that describe the minimum requirements for transaction monitoring and filtering programs. The DNB, for instance, has stated in the updated guidance on the Wwft and SW that both the quality and the effectiveness of e.g. a transaction monitoring system must be demonstrated, and that by carrying out a model validation or (internal) audit an institution can adequately demonstrate the quality and effectiveness of such a model.

In our experience, banks increasingly consider compliance models to be in scope of the regular internal model validation processes that are already being performed for more financially oriented models, requiring sign-off by internal model validation departments prior to implementation and/or as part of the ongoing validation of e.g. transaction monitoring systems. Additionally, compliance departments as well as internal audit departments are paying more attention to the internal mechanics of compliance models rather than looking merely at the output of the model (e.g. generated alerts and the subsequent handling). Especially due to recent regulatory enforcements within the EU, and specifically the Netherlands, we have seen the topic of compliance model validation become more and more part of the agenda of senior management and the board, and banks allocating more resources to compliance models.

Given the increased awareness by both external parties, such as regulators, and internal parties at financial institutions, these models introduce new risk management challenges. Simply put: how do we know and show that these compliance models are functioning as intended?

Issues, pitfalls and challenges

Effectively managing model risk comes with several issues, pitfalls and challenges. Some of these are part of the overall model risk management (MRM) process and others relate more specifically to compliance models. We have also seen recurring observations, findings or deficiencies in models that can impact both the efficiency and the effectiveness of the models. This section describes some of these challenges and deficiencies so that they can be considered upfront when designing, implementing and operating compliance models or when validating or reviewing such models.

KPMG has conducted a study to identify key issues facing model developers and model risk managers. This study, which is not specifically focused on compliance models, shows that key issues include the fact that the definition of a model is subjective or even obscure, and that the dividing line between a model and a simpler computational tool – like an extensive spreadsheet – keeps shifting towards including more and more tools as models. In addition, creating a consistent risk rating to apply to both models and model-related findings is considered difficult, making it hard, if not impossible, to quantify individual model risk as well as the organization’s aggregate model risk. Other key issues include not having a consistent IT platform, inefficient processes and the difficulty of fostering an MRM culture.

More specifically, compliance models may come with certain challenges that can be extra time-consuming or painful. For many financial institutions, the current validation of e.g. a transaction monitoring system is a first-time exercise. This means that the setup and overhead costs are high when organizations discover that certain crucial elements are not adequately documented or are dispersed across the organization, making the start of a full-scale validation difficult. Perhaps there isn’t even sufficient insight into all the relevant source systems and data flows that feed the models.

From an ownership perspective, more and more activities related to compliance models, which historically have been managed by compliance departments, are being transferred to the first line of defense. This means that certain historical choices may be unknown to the current system owners when such choices have not been documented.

For financial institutions that have activities in multiple countries, the lack of a uniform regulatory framework means that incorporating all relevant global and local requirements can be challenging. Even within the EU, although the minimum requirements are similar, certain requirements, such as what constitutes sufficient verification or which sanctions lists are mandatory, may differ per country. Outside the EU, even more distinct requirements might be relevant. What is sufficient in one jurisdiction might be insufficient or even unacceptable in another.

Because compliance models, due to regulatory pressure, are getting more resources to improve and upscale current activities, models are less static than before and become a moving target, with frequent changes to data feeds, scenario logic, system functionality or even complete migrations or overhauls of current models. In addition, increased staffing in relation to compliance models means that many new employees do not have the historical knowledge of the models, and we also see difficulties in the market when recruiting and retaining sufficient skilled people.

An inherent element of such compliance models, similar to e.g. fraud-related models, is the lack of an extrinsic performance metric to determine the success or sufficient working of the model. Transaction monitoring systems and sanction screening solutions currently have a high “false positive” rate of alerts, sometimes as high as 99%. When banks report unusual or suspicious transactions, they generally lack feedback from regulatory organizations to determine whether what they are reporting is indeed valuable (i.e. a true positive). Furthermore, for all transactions that are not reported, banks do not know whether these are indeed true negatives or whether they perhaps still relate to money laundering. This uncertainty makes it very difficult to objectively score model performance compared to more quantitative models that are used to e.g. estimate the risk of defaulting on a loan.
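
The asymmetry can be made concrete with a small calculation: alert dispositions allow an institution to approximate its false positive rate, but no equivalent feedback exists for transactions that were never alerted, so the false negative rate remains unknown. The numbers below are purely illustrative.

```python
# Illustrative alert dispositions over a period.
alerts_generated = 10000
alerts_reported_to_fiu = 120   # treated here as (presumed) true positives

false_positives = alerts_generated - alerts_reported_to_fiu
false_positive_rate = false_positives / alerts_generated
print(f"False positive rate: {false_positive_rate:.1%}")  # ~98.8% in this example

# For the non-alerted population there is no feedback loop, so the number of
# false negatives (missed unusual transactions) cannot be computed directly.
```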

All these elements make the validation of these compliance models a major challenge that financial institutions are confronted with.

When financial institutions actually conduct a model validation, or when internal or external reviews or examinations are conducted, this can result in findings such as model deficiencies. Based on public sources and supplemented with KPMG’s experience, recurring or common compliance model deficiencies resulting from validations or examinations include ([Al-Ra15], [OM18]):

  • Monitoring not being applied at the correct level of granularity. E.g. monitoring individual accounts instead of the aggregate behavior of customers, entities or ultimate beneficial owners or monitoring being done across various separated systems;
  • Application of different character encodings which are not completely compatible, or inadvertently applying case-sensitive matching of terms and names (see the sketch after this list);
  • Applying capacity-based tuning and system configurations instead of a setup commensurate with the risk appetite of the organization;
  • Programming errors or fundamental logic errors resulting in unintended results1;
  • A lack of detailed documentation, appropriate resources and expertise and/or unclear roles and responsibilities to effectively manage and support model risk management activities;
  • A conceptual design that is inconsistent with the unique integrity risks of an organization and minimal regulatory expectations;
  • Insufficient risk-based model controls to ensure consistent workings of the system;
  • Issues related to data quality like the incomplete, inaccurate or untimely transfer of transactional or client data between systems that feed into the compliance model.
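
The character encoding and case-sensitivity deficiency mentioned in this list can be illustrated with the normalization step that is often missing: applying Unicode normalization and case folding before matching names against a list. The sketch below is a generic example, not a complete screening algorithm.

```python
import unicodedata

def normalize(name):
    # Decompose accented characters, drop the combining marks and case-fold,
    # so that "Müller", "MULLER" and "muller" all compare equal.
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return stripped.casefold()

sanction_list = ["Müller Trading GmbH"]
payment_beneficiary = "MULLER TRADING GMBH"

normalized_list = {normalize(entry) for entry in sanction_list}
if normalize(payment_beneficiary) in normalized_list:
    print("Potential match, route to alert handling")
```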

Model risk management is a process wherein institutions should be able to demonstrate to, among others, regulators that their compliance models work as expected and that the model risk aligns with the risk appetite of the bank. Therefore, both the challenges and the common model deficiencies mentioned in this section are relevant to consider when commencing a model validation.

Five-pillar approach: an approach for transaction monitoring model validation

When discussing model validation, it is helpful to elaborate on the foundations first. For more statistical models, the task of model validation is to confirm whether the output of a model is within an acceptable range of real-world values for the intended purpose. For compliance models, model validation is intended to verify that models are performing as expected and in line with their intended purpose, design objectives and business uses. The validation is also used to identify potential limitations, test assumptions and assess their potential impact.

During the validation, it is substantiated that the model, within its domain of applicability, possesses a satisfactory range of accuracy consistent with the intended application of the model, and that the assumptions underlying it are valid.

To validate models, an approach is required. Whereas for certain statistical or predictive models there are many well-established techniques, this is less the case for compliance models; the validation approach is highly dependent on the model, the type of model and the system being used, and the validation of compliance models is a relatively new domain. KPMG has developed an approach consisting of five interrelated pillars. The approach has been successfully used globally for both international banks and smaller institutions, and has evolved based on global and local practical experience.

KPMG’s global model validation approach

The KPMG approach for transaction monitoring model validation consists of five pillars:

  1. Governance
  2. Conceptual Soundness
  3. Data, System & Process Validation
  4. Ongoing & Effective Challenge
  5. Outcomes Analysis & Reporting

Governance

For an effective model, not only the technical model itself but also its governance is a prerequisite for success. The governance framework related to the model needs to be reviewed. This review should include policies and procedures, roles and responsibilities, resources and training, compared against existing authoritative standards for compliance and controls programs as well as industry-leading practices and experiences with comparable institutions. This is predominantly done by conducting interviews with stakeholders based on structured questionnaires, and by documentation review.

Conceptual Soundness

The foundation of any compliance model is its conceptual design. Therefore, an assessment of the quality of the model design and development is required to ensure that the design criteria follow sound regulatory requirements and industry practice. In addition, key actions include a review of the risk evaluation, an assessment of the rules and settings, and an assessment of the developmental evidence and supporting analysis.

Data, System & Process Validation

The (conceptual) design of a model is generally implemented in an information system, which requires (input) data to function and is governed by processes covering, for example, change management. This pillar of the validation approach has three main types of activities, which differ depending on the exact model and system being used:

  • The first type of activity involves performing a number of tests to assess whether data feeds and information from ancillary systems are appropriately integrated into the models. Preferably this is done from an end-to-end perspective (from data creation to processing to retention).
  • The second activity involves testing the system to assess whether its core functionality works as intended. For example, for a transaction monitoring system, rules may be (independently) replicated based on documentation to determine if they are implemented and working as designed (a minimal sketch follows this list). Additional or alternative tests, depending on the model, can be considered, such as control structure testing or code review.
  • The third and final component involves reviewing the processes that govern the use of the system.
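
To illustrate rule replication, the sketch below independently re-implements a hypothetical structuring-style rule in Python and derives its own alert population. The rule, threshold, window and field names are illustrative assumptions, not settings of any particular system; the validator would compare the resulting alerts with those produced by the production system for the same period.

```python
from datetime import datetime, timedelta

# Hypothetical rule: alert when a customer's cash deposits exceed
# EUR 10,000 within any rolling 3-day window (illustrative parameters).
THRESHOLD = 10_000
WINDOW = timedelta(days=3)

def replicate_rule(transactions):
    """Independently re-implement the documented rule and return alerted customers.

    `transactions` is a list of dicts with keys: customer, timestamp, amount, type.
    """
    alerts = set()
    by_customer = {}
    for tx in transactions:
        if tx["type"] != "cash_deposit":
            continue
        by_customer.setdefault(tx["customer"], []).append(tx)

    for customer, txs in by_customer.items():
        txs.sort(key=lambda t: t["timestamp"])
        for i, anchor in enumerate(txs):
            window_end = anchor["timestamp"] + WINDOW
            total = sum(t["amount"] for t in txs[i:] if t["timestamp"] <= window_end)
            if total > THRESHOLD:
                alerts.add(customer)
                break
    return alerts

# Any difference between these independently generated alerts and the alerts
# produced by the production system points to an implementation or
# configuration deviation that needs to be explained.
sample = [
    {"customer": "C1", "timestamp": datetime(2020, 3, 2, 10), "amount": 6_000, "type": "cash_deposit"},
    {"customer": "C1", "timestamp": datetime(2020, 3, 4, 15), "amount": 5_000, "type": "cash_deposit"},
    {"customer": "C2", "timestamp": datetime(2020, 3, 2, 9), "amount": 4_000, "type": "cash_deposit"},
]
print(replicate_rule(sample))  # {'C1'}
```
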
Ongoing & Effective Challenge

A model’s effectiveness needs to be assessed on an ongoing basis to ensure that changes in products, customers, risks, data inputs/outputs and the regulatory environment do not necessitate adjustment, redevelopment or replacement of the model, and that model limitations and assumptions are still appropriate. Key activities of this pillar include ongoing verification, sensitivity testing (safeguarding the balance between the number of alerts generated and the risk of missing true positives, i.e. false negatives), performance tuning, and quantitative and qualitative benchmarking with peer organizations.
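
As an illustration of sensitivity testing, the sketch below varies a single threshold over an illustrative, randomly generated population and reports how the alert volume responds. In practice this would be run on historical production data and weighed against sampled below-threshold cases; the distribution and thresholds are assumptions for illustration only.

```python
import random

random.seed(42)

# Illustrative population of daily aggregated amounts per customer.
amounts = [random.lognormvariate(8, 1) for _ in range(10_000)]

# Sensitivity test: count how many alerts each candidate threshold would
# generate. A steep drop between two settings signals that the rule is
# highly sensitive to the parameter and warrants closer review.
for threshold in (5_000, 10_000, 15_000, 20_000):
    alert_count = sum(amount > threshold for amount in amounts)
    print(f"threshold {threshold:>6}: {alert_count} alerts")
```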

Outcomes Analysis & Reporting

Outcomes analysis compares and evaluates prospective scenario, rule, scoring or alerting process changes against historical outcomes. This way, opportunities for efficiency improvements or substantial parameter changes are identified. The key activities of this component are the outcomes analysis itself and the reporting of its results.
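
A minimal sketch of such an analysis is shown below: a prospective threshold change is replayed over historical alerts that already carry an investigation outcome, quantifying how many non-productive alerts would disappear and how many productive alerts would have been missed. The data and field names are illustrative assumptions.

```python
# Historical alerts with their investigation outcome (illustrative data).
historical_alerts = [
    {"amount": 12_000, "productive": True},
    {"amount": 10_500, "productive": False},
    {"amount": 25_000, "productive": True},
    {"amount": 11_000, "productive": False},
    {"amount": 13_500, "productive": False},
]

def replay(alerts, new_threshold):
    """Replay a prospective threshold against historical outcomes."""
    kept = [a for a in alerts if a["amount"] > new_threshold]
    dropped = [a for a in alerts if a["amount"] <= new_threshold]
    return {
        "alerts_remaining": len(kept),
        "productive_lost": sum(a["productive"] for a in dropped),
        "nonproductive_removed": sum(not a["productive"] for a in dropped),
    }

# Raising the threshold from 10,000 to 12,500 removes two non-productive
# alerts, but one historically productive alert would have been missed.
print(replay(historical_alerts, 12_500))
```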

As the validation of compliance models is a relatively new domain, validators often struggle to determine the level of depth needed for an adequate validation without going beyond what is required. The use of a global methodology allows for a consistent and structured approach and way of working, with the benefit of consistency across time, locations and institutions, as well as an approach that maps back to regulatory guidance. This methodology does need to be able to cope with differences in regulatory compliance requirements per jurisdiction.

Transaction monitoring outlook

Enhanced compliance frameworks, digitalization and globalization cause transaction monitoring to become more and more intricate. In addition, due to growing polarization between certain countries, sanctions regimes are also increasingly complex. How can organizations tackle these issues?

As a consequence of digitalization, the availability of unstructured data has also increased significantly in recent years. It is therefore no surprise that the application of artificial intelligence (AI) and machine learning (ML) in models is advancing rapidly. Financial institutions are also gaining their initial experience in using AI/ML, both to reduce false positives (increasing efficiency) and to detect previously unknown false negatives (increasing effectiveness). This is happening while they are also trying to reduce, or at least control, the costs of monitoring.

From a validation perspective, however, there are some points of attention when using AI and ML for compliance models. The first is the knowledge and experience of the model developers with AI and ML. Due to their complexity, these techniques are hard to master, which makes it harder to achieve conceptual soundness when they are used. In addition, there is the risk that the model becomes a black box that is only understood by a few staff members, making the model less transparent and introducing key person risk. Finally, the complexity of applying AI and ML to large volumes of data makes it hard to ensure the integrity and unbiasedness of the data: the rules used to select and validate data can introduce bias into the model as a consequence of decisions made by the developers.

In the author’s opinion, the challenges mentioned above should not keep financial institutions from selectively applying AI and ML. They do, however, require extra attention to regular model validation, to developing AI and ML capabilities within the organization and to enhancing the risk culture. For financial institutions that are still at the beginning of their AI and ML journey, it may be worthwhile to start by applying these techniques to the challenger model in the validation process of the current transaction monitoring model.
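
As a sketch of what such a challenger setup could look like, the example below runs an unsupervised anomaly detector (scikit-learn’s IsolationForest) next to a simplified rule on the same illustrative features and reports where the two disagree. The features, threshold and contamination rate are assumptions for illustration, not a prescribed configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative feature matrix: [daily amount, number of transactions].
features = np.column_stack([
    rng.lognormal(8, 1, size=1_000),
    rng.poisson(5, size=1_000),
])

# Champion: the existing rule-based logic (a simple threshold for illustration).
rule_alerts = features[:, 0] > 15_000

# Challenger: an unsupervised anomaly detector trained on the same data.
challenger = IsolationForest(contamination=0.02, random_state=0).fit(features)
ml_alerts = challenger.predict(features) == -1  # -1 marks anomalies

# Cases flagged by only one of the two models are the interesting ones:
# potential false negatives of the rules, or potential noise from the ML model.
only_rules = np.sum(rule_alerts & ~ml_alerts)
only_ml = np.sum(~rule_alerts & ml_alerts)
print(f"flagged by rules only: {only_rules}, by challenger only: {only_ml}")
```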

Another interesting development in the field of transaction monitoring is that on 19 September 2019, the Dutch Banking Association announced that five Dutch banks (ABN AMRO, ING, Rabobank, Triodos and Volksbank) will join forces and set up an entity for transaction monitoring: Transaction Monitoring Netherlands (TMNL). Other banks can join at a later stage. This does, however, require a change to existing (competition-related) legislation. It will be interesting to follow this development and to see whether new entrants may also join this initiative. In addition, the question remains whether the business case can be realized, since TMNL only monitors domestic traffic. It will also be interesting to see whether similar initiatives will be launched elsewhere in the EU.

Conclusion

Regulatory requirements regarding financial crime are making it increasingly complex for financial institutions to become and stay compliant with respect to transaction monitoring. Having a model for transaction monitoring is no longer sufficient. Regulators increasingly expect financial institutions to be able to demonstrate the effectiveness of transaction monitoring and, in the process of doing so, to validate their models. Certainly for financial institutions that operate internationally, this has proven to be quite a (costly) challenge. The best way to validate a model is to start with a broad perspective and include the processes and activities that surround the model as well. The five pillars cover the required areas for model validation; however, there is no single way of validating a model, and the focus within the five pillars depends on the nature of the model. AI and ML can be utilized both for the model itself and as a challenger model, although in practice their application also creates challenges and potential issues. Collaborating with FinTechs or joining forces with other financial institutions may be key to ensuring compliance while keeping the cost base at an acceptable level.

Notes

  1. An example of a logical error, or undocumented limitation, is a system configured to detect specific behavior or transactional activity within a calendar week instead of a rolling 7-day period. When a certain combination of transactions occurs from Monday to Wednesday, an alert is generated, whereas the exact same behavior from Saturday to Monday goes undetected due to the system setup rather than a deliberate design of the logic (see the sketch below).
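
The sketch below reproduces this difference on three illustrative transactions: aggregating per calendar (ISO) week misses the pattern, while a rolling 7-day window catches it. Dates, amounts and the threshold are illustrative.

```python
from datetime import date, timedelta

# Saturday, Sunday and Monday transactions that together exceed the threshold.
transactions = [(date(2020, 3, 7), 4_000),   # Saturday (ISO week 10)
                (date(2020, 3, 8), 4_000),   # Sunday   (ISO week 10)
                (date(2020, 3, 9), 4_000)]   # Monday   (ISO week 11)
THRESHOLD = 10_000

# Flawed setup: aggregate per calendar (ISO) week.
weekly_totals = {}
for day, amount in transactions:
    week = (day.isocalendar()[0], day.isocalendar()[1])
    weekly_totals[week] = weekly_totals.get(week, 0) + amount
calendar_week_alert = any(total > THRESHOLD for total in weekly_totals.values())

# Intended logic: any rolling 7-day window starting at a transaction date.
rolling_alert = any(
    sum(amount for day, amount in transactions
        if start <= day < start + timedelta(days=7)) > THRESHOLD
    for start, _ in transactions
)

print(calendar_week_alert, rolling_alert)  # False True
```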


PSD2 risks and IT controls to mitigate

With the introduction of the second Payment Services Directive (PSD2), new IT risks have emerged from the regulation, with a direct impact on payment service providers such as banks, payment gateways and acquirers, and on payment service users such as individuals, organizations and governmental bodies. When looking at the risks and controls related to the legislation, we saw organizations struggle to find the right information about the IT-related risks and the necessary steps to mitigate those risks. This article provides an overview of the IT controls that payment service providers and users should have in place in order to keep these risks at a tolerable level.

Introduction

PSD2 is the new European Directive on consumer and business payments. With the introduction of PSD2, new providers of payment and account information services will enter the market. They will act as an online third party between customers and their banks. These third parties – also known as Third Party Payment Providers – can be other banks, for example, or FinTech companies. PSD2 brings two major changes to the payments industry: it mandates stronger security requirements for online transactions through multi-factor authentication, and it forces banks and other financial institutions to give third-party payment service providers access to customer bank accounts if account holders give their consent.

The second Payment Services Directive (PSD2) introduces many opportunities and advantages, such as increased protection of payment service users through stronger security requirements and the opportunity for new services based on account information and payment initiation capabilities. Unfortunately, along with the new regulation, opportunities and advantages, new risks will also be introduced. These include both operational and third-party risks and must be managed effectively. Banks and Third Party Payment Providers (TPPs) will experience significant growth in the volume of their business-to-business (B2B) network connections and traffic, and a growth in the exposure of core banking functions, driving up enterprise risk. In addition, because banks are mandated to do business with TPPs, they will soon face the challenge of how to aggregate and understand risk from potentially dozens to hundreds of TPPs. The question is whether this practice is safe for your security and compliance program and, if it is not, which controls your product team could apply to mitigate the risks ([Blea18]).

In this article, we will present the related risks arising from PSD2 for four parties: banks, customers, TPPs and supervisors, with a focus on IT. We will also explain how to mitigate these risks, followed by some controls and best practices.

Background

PSD2 requires banks in the Netherlands and the rest of Europe to share data with licensed organizations and to execute payments initiated through payment initiation services. Transaction data can be shared – this includes how consumers spend their money, whether with cash or through credit, including information on customers’ loans. One of the key elements of PSD2 is the introduction of Access to Accounts (XS2A) via TPPs. Banks and other financial institutions must give certain licensed third parties access to account information and cannot treat payments that go through third-party service providers differently. Once a customer has given explicit consent to have their data shared, access is most commonly provided through a trusted API that requires strong customer authentication ([Mant18]). Open banking is the use of open APIs that share financial information with providers in a secure way. An open banking API means that the customer information stored at banks will no longer be “proprietary”: it belongs to the account owners, not to the banks keeping those accounts. A conceptual sketch of such an API call is shown below.
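
As a conceptual illustration of consent-based account access, the sketch below shows an AISP-style balance request against a hypothetical ASPSP endpoint. The URL, headers and token handling are assumptions for illustration; real XS2A interfaces (for example, Berlin Group-style APIs) differ per bank and additionally require a TPP license, eIDAS certificates and a prior consent flow with strong customer authentication.

```python
import uuid
import requests

# Hypothetical ASPSP endpoint and credentials; not a real bank API.
BASE_URL = "https://api.example-bank.example/psd2/v1"
TPP_ACCESS_TOKEN = "tpp-oauth-token"  # assumed to be obtained out of band

def fetch_accounts(consent_id):
    """Read account information on behalf of a customer, referencing an existing consent.

    The consent_id is assumed to have been created earlier, in a flow where the
    customer explicitly approved access and completed strong customer
    authentication at their bank.
    """
    response = requests.get(
        f"{BASE_URL}/accounts",
        headers={
            "Authorization": f"Bearer {TPP_ACCESS_TOKEN}",
            "Consent-ID": consent_id,            # proof of customer consent
            "X-Request-ID": str(uuid.uuid4()),   # request traceability
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```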

Figure 1. PSD2 stakeholder relationships.

PSD2 is the second Payment Services Directive and is applicable to the countries of the European Economic Area (EEA). The directive aims to establish legal clarity and create a level playing field in the payments sector in order to promote competition, efficiency and innovation. Furthermore, higher security standards are introduced to protect consumers and customers making online payments.

The scope of PSD2 covers any party that offers payment services, for example FinTechs and other innovative new providers (also referred to as TPPs) and tech giants such as the GAFAMs (Google, Apple, Facebook, Amazon, Microsoft). Payment services include account information services and payment initiation services. TPPs include payment initiation service providers (PISPs), which initiate payments on behalf of customers, and account information service providers (AISPs, or aggregators), which give an overview of customer accounts and balances ([EuCo17]).

In order to comply with PSD2, certain Regulatory Technical Standards (RTS) have to be met, specifically on Strong Customer Authentication (SCA) and Secure Communication. SCA makes payments more secure through the enhanced levels of authentication required when completing a transaction; there are, however, some exceptions to this rule, including but not limited to low-value and recurring transactions ([Adye18]). Due to the complexity of the requirements, the European Banking Authority (EBA) issued an opinion that the deadline for SCA for online card transactions should be postponed to 31 December 2020 ([EBA19a]), and National Competent Authorities adhered to this as service providers, mainly retailers and PSPs, experienced implementation challenges across Europe. Riskified, a global payment service provider facilitator, performed a survey among participants from the UK, Germany, France and Spain and noted that nine out of ten retailers (88%) believe consumers are “somewhat” or “very” aware of PSD2, whereas more than three-quarters (76%) of consumers report that they have not even heard of PSD2, showing an imbalance between retailers’ and online consumers’ awareness ([Sing19]). Over the last few years, EU Member States have integrated PSD2 into local legislation in order to, among other things, issue TPP licenses. The total number of TPPs in the EEA is 326, with the UK leading with 158. A minimal sketch of what SCA means in practice follows.
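
To make SCA more concrete, the sketch below combines a knowledge factor (a salted password check) with a possession factor (an RFC 6238 time-based one-time password), using only the Python standard library. It is a simplified illustration under assumed parameters; production implementations must follow the RTS on SCA, including dynamic linking for remote payments.

```python
import base64
import hashlib
import hmac
import secrets
import struct
import time

def verify_knowledge_factor(password, salt, stored_hash):
    """Knowledge factor: compare a salted password hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def totp(secret_b32, at=None, step=30):
    """Possession factor: RFC 6238 time-based one-time password (6 digits)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 1_000_000
    return f"{code:06d}"

# Illustrative enrolment data (in reality stored securely by the PSP).
salt = secrets.token_bytes(16)
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
secret = base64.b32encode(secrets.token_bytes(20)).decode()

# SCA requires at least two independent factors to succeed.
now = int(time.time())
user_code = totp(secret, at=now)  # stands in for the code from the user's device
authenticated = (
    verify_knowledge_factor("correct horse", salt, stored)
    and hmac.compare_digest(totp(secret, at=now), user_code)
)
print(authenticated)  # True
```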

Figure 2. TPPs in EEA in 2020 ([OBE20]).

In order to ensure that payment providers adhere to these regulations, supervisors have been tasked with monitoring compliance with PSD2, in certain cases in conjunction with one another. In the Netherlands, supervision is shared between four supervisors: De Nederlandsche Bank (DNB), which handles authorizations and licenses for payment service providers and acts as the prudential supervisor; the Dutch Data Protection Authority (AP), which focuses on the protection of personal data under PSD2; the Dutch Authority for the Financial Markets (AFM), which focuses on behavioral or conduct supervision of payment service providers; and the Dutch Authority for Consumers and Markets (ACM), which focuses on competition between payment service providers ([McIn20]).

The current payment landscape brings certain risks with it. Below, we present the privacy, security and transaction fraud risks arising from PSD2 from the perspective of four parties (banks, customers, payment service providers and supervisors), with a focus on IT.

Risks

Privacy

PSD2 will allow customers to share their banking information and “sensitive personal data” with parties other than their bank. This raises questions about the privacy of customer data, whether the movement of customer data can be traced, and whether it is clear who has access to what data. Banks and PSPs will accumulate and eventually process customer data and should therefore be aware of the risks relating to the retention and processing of data, as well as of the need to comply with legislation such as the General Data Protection Regulation (GDPR).

Companies with a PSP license can access payment data from account holders. However, once information from customers is obtained, these payment institutions need to protect it; otherwise they risk major fines of up to €20 million or 4% of global turnover ([Hoor17]). For a new PSP entering the market, the reputational damage suffered if customers are not comfortable with how their data is being used, or feel that their data has been “stolen” through unclear agreements, could be detrimental to its success.

Banks need to consider the impact of the interplay between PSD2 and the GDPR, as requirements may conflict. Banks will share banking information with relevant TPPs that have a PSD2 license; however, under the GDPR, which came into force on 25 May 2018, it is also the responsibility of banks to protect the customer data that they are obliged to share. If banks do not share data, competition authorities may intervene. Furthermore, only data for which explicit and current consent has been provided should be shared, to avoid unauthorized sharing and the reputational damage that would follow ([Benn18]).

Customers need to give explicit permission before the relevant financial institution or PSP is allowed to access their payment information. The risk for users is that they give permission too easily and too quickly, without considering the consequences ([Hoor17]). Customers cannot limit the amount of information that is shared with the PSP: all payment account information is shared, whether it is relevant to the payment service or not. This is a risk if the TPP uses customer data beyond its primary responsibility or if data is stolen, because the banking history of customers could reveal information about other parties through combined customer information and buying trends, such as spending rates at specific retail institutions. Customers may therefore consent to sharing both their banking information and “sensitive personal data”, such as a purchasing history revealing habits or perhaps sensitive purchases, that may not even be relevant to the PSP and risks ending up in the wrong hands ([PrFi19]).

Supervisors are expected to monitor players in the payment landscape to ensure a safe and fair payment environment. The compliance requirements set out by regulators need to be adhered to in order to ensure this (see also the KPMG Point of View paper [KPMG20], which focuses on potential regulatory compliance risks arising from payments innovation). The major risk faced by regulators and supervisors is a loss of visibility of the different players, and that players are no longer held accountable for their actions, their customers and their payment information ([McIn20]).

Security

PSD2 brings potential threats, such as security risks in sharing data with third-party payment providers, the risk of fraud in the case of dishonorable third-party payment providers or hacked customers, and requests made via TPPs that may be susceptible to third-party fraud powered by malware or social engineering techniques; fraudsters could use TPPs as an obfuscation layer to confuse the banks’ fraud defenses. It is therefore important that TPPs are able to cope with these security threats and mitigate such risks. By nature, FinTech firms – new PISPs and AISPs – have little reputation at stake. This means they may be inclined to take riskier business decisions, or even involve themselves in misleading business activities.

First, banks must prepare their IT systems to cope with potential cyberattacks. It is helpful to think of an API as a set of doorways, with a different set of data behind every door: account balances, transaction history, the ability to make deposits or withdrawals, and other customer information. In an ideal system, these APIs (or doorways) would only be accessible to trusted parties whose access you know about. However, banks have always been a target of criminal activity, and it is not hard to imagine that there are those out there waiting to abuse these new access points to bank data ([EBA19b]). To fend off attacks by hackers, robust authentication barriers need to be in place, and banks need concrete assurance that a PISP is who it claims to be. The way banks manage this will be crucial for investors as well as depositors. If banks are unable to develop a sound API infrastructure to become reliable Account Servicing Payment Service Providers (ASPSPs), their market share will be lost to FinTech firms.

Customers making use of the services offered by TPPs under PSD2 need to be aware of the security risks. Whereas customers placed their faith in decades-old institutions with a long history of security, they will now be transferring that same trust to lesser-known third-party providers that don’t have a long track record of combating fraud. Antifraud systems of banks will have less data input to train computer models and spot fraud in real time as their customers’ financial data will spread across multiple companies. While customers are now more aware of phishing techniques that cybercriminals used in the past, malicious actors will get new opportunities to trick banking customers. Cybercriminals could pretend to be the FinTech companies working with banks, and new phishing schemes are expected to emerge.

Additionally, the regulators are yet to establish effective methods of monitoring for the increasing number of smaller but significant players. This could reduce overall levels of compliance and make the market vulnerable to money launderers and fraudsters ([Hask]).

Transaction fraud risk

The market changes that we anticipate as a result of PSD2 will likely create new opportunities for fraud because banks will be required to open up their infrastructure and allow third-party providers access to customer account information. This will impact the visibility of banks when it comes to end-to-end transaction monitoring and will inevitably affect their ability to prevent and detect fraudulent transactions.

While the objective is to allow innovation and development of the payment services industry, the growing concern is that this provides criminals with possibilities to commit fraud and to launder money.

Three key fraud risks, as highlighted by the Anti Money Laundering Centre (AMLC) ([Lamm17]), are:

  • potentially unreliable and criminal TPPs,
  • reduced fraud detection, and
  • misuse and phishing of data.

The first risk to consider is that of potentially unreliable and criminal TPPs. The entry into force of PSD2 may lead to an increase in both local and foreign TPPs that are active on the Dutch payment market. If direct access is allowed, the ASPSP is also unable to verify whether the TPP actually executes the transaction in accordance with the wishes of the payment service user. Furthermore, malicious persons who aim to commit large-scale (identity) fraud can set up a TPP themselves to facilitate fraudulent payments. Customers may interact with these fraudulent TPPs, for example by entering their details on fake websites or mobile payment apps. The criminal can then use this information to access information about the customer and/or make payments in the name of this customer.

The second key risk to consider is that of reduced fraud detection. PSD2 opens the payment market to new entrants who may not have gained any experience with compliance and fraud detection yet. There is a growing trend to accelerate payment transactions via instant payments, which also makes an accelerated cash-out possible. As the risk that fraudulent transactions are conducted successfully increases, so does the importance of adequate fraud detection at ASPSPs. Traditional financial organizations have so far enjoyed a bilateral relationship with their customers, which will change as TPPs enter the market with new services. PSD2 is bringing higher transaction volumes for banks, and more demand from customers for mobile payments and quicker transactions. Those increases put more pressure on fraud detection systems, which in turn provides an obvious opportunity for businesses that sell fraud prevention technology. The window for investigations will be significantly reduced, and banks will need to rely on automation and advanced analytics to mitigate the increased fraud risk ([PYMN18]).

The third risk to consider is the misuse and phishing of data. As outlined above, TPPs may be used as a way to unethically obtain confidential information, which could then be used to facilitate fraudulent transactions. For example, with PSD2 and the dynamic linking of authentication codes to the payment transaction details for remote transactions, phishing of authentication codes may become redundant, while the phishing of activation codes for mobile payment/authentication apps could become the new target ([EPC19]).

While the introduction of PSD2 facilitates the innovation of the payments sector, it poses key privacy, security and transaction fraud risks. The next section explores the considerations concerning mitigating these risks.

Mitigation of risks

While PSD2 is a directive brought into effect to stimulate innovation and development within the Payments sector, a number of risks arising as a result have also been identified. The following should be considered in order to reduce the risk of transacting under PSD2 regulation to an acceptable level.

To protect customers, the identified risks need to be appropriately mitigated through sound operational risk management practices by all the players involved (i.e. banks and third parties), addressing the security, business continuity and robustness of operations, both in the internal systems of the different parties and in the transmission or communication between them. This is particularly challenging for third-party players rather than regulated financial institutions, as they often lack the risk management frameworks that are common practice in the banking sector, with detailed policies, procedures and internal and external controls. Financial institutions should develop and document an information security policy that defines the high-level principles and rules to protect the confidentiality, integrity and availability of their own and their customers’ data and information ([Carr18]).

This policy is identified for PSPs in the security policy document to be adopted in accordance with Article 5(1)(j) of Directive (EU) 2015/2366. Based on the information security policy, financial institutions should establish and implement security measures to mitigate the information and communication technology and security risks that they are exposed to. These measures should include policies and controls over change management, logical security, physical security, operations security, security monitoring, information security reviews, assessment and testing, information security training and awareness, and incident and problem management ([JTFP19]).

PSPs are expected to develop a security policy that thoroughly describes the measures and processes that they have in place to protect customers against fraud. PSPs are expected to implement SCA processes for customers accessing their accounts online, initiating electronic payments or carrying out transactions through remote channels. As these activities carry a high degree of risk, PSD2 mandates PSPs to implement appropriate security processes to reduce the incidence of risk. Adopting appropriate SCA processes will promote the confidentiality of users and assure the integrity of communication between participants in the transactions taking place on any particular platform ([Adey19]).

The implementation of PSD2 will contribute to building new relationships and data partnerships between financial institutions, which helps protect customers’ interests and improve transactional oversight. To capitalize on the vast amounts of data being channeled through PISPs and AISPs, banks must, however, invest in technology that finds the patterns that indicate crime. PSPs need to share transaction data and intelligence through a central hub that is underpinned by the necessary legal permissions and security to ensure compliance with the GDPR. The risk of attack can be mitigated by following a sound API architectural approach, one that integrates security requirements and tools into the API itself. By adding more layers of fraud protection and authentication to APIs, banks could potentially integrate features like access control and threat detection directly into data-sharing offerings, allowing them to be proactive, rather than reactive, when it comes to securing APIs. The sketch below illustrates this layering.
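
A minimal sketch of such layering is shown below, using a small Flask application in which an access-control check (token and scope) and a logging/threat-detection hook sit in front of a data-sharing endpoint. The token registry, scope names and endpoint are hypothetical stand-ins for a full OAuth 2.0 and certificate-based setup.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical token registry: in practice, tokens would come from an OAuth 2.0
# authorization server and a consent management system, not a hard-coded dict.
TOKENS = {"tpp-token-123": {"tpp": "ExampleAISP", "scopes": {"accounts:read"}}}

def require_scope(scope):
    """Access-control layer: reject requests without a token carrying the scope."""
    auth = request.headers.get("Authorization", "")
    token = auth[len("Bearer "):] if auth.startswith("Bearer ") else ""
    client = TOKENS.get(token)
    if client is None or scope not in client["scopes"]:
        abort(403)
    return client

@app.route("/accounts/<account_id>/balances")
def balances(account_id):
    client = require_scope("accounts:read")
    # Threat-detection hook: every third-party call is logged (and could be
    # scored for anomalies) before the data leaves the bank.
    app.logger.info("balance read by %s for account %s", client["tpp"], account_id)
    return jsonify({"account": account_id, "balance": {"amount": "1250.00", "currency": "EUR"}})

if __name__ == "__main__":
    app.run(port=8080)
```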

All parties involved, such as banks and TPPs, need to work on creating a risk-tolerant control framework, implementing the control objectives from the RTS guidelines and establishing specific payment-related control activities. Banks and TPPs should coordinate and work on this together with their market competitors to standardize the approach and methodology and to streamline the overall process for consumers.

Conclusion

PSD2 has been put in place to stimulate the payments industry, creating innovation and broadening the market for payment service providers. As the services of TPPs rely on the use of sensitive personal and financial data, open up the market to a greater number of competitors and depend heavily on the IT infrastructure of several parties, a number of risks have been identified. Alongside these risks, a number of mitigating measures have been identified to reduce the overall operational risks around privacy, security and transaction fraud to an acceptable level. Careful consideration should, however, be given by all parties involved in payment services. Furthermore, regulators will take an active role in ensuring a safe and secure payment landscape, as part of the mitigation of the risks identified in the market, by requiring certain controls to be in place at licensed organizations. Due to the dynamic nature of this industry and the rapid development of technology, we can expect the landscape of services, and therefore the associated risks, to develop at a rapid pace as well. With robust risk management strategies in place, there is an opportunity for the payment services community to revolutionize the industry and provide a wide range of innovative payment products and services.

References

[Adey19] Adeyemi, A. (2019, January 21). A New Phase of Payments in Europe: the Impact of PSD2 on the Payments Industry. Computer and Telecommunications Law Review, 25(2), pp. 47-53.

[Adye18] Adyen (2018, August 28). PSD2: Understanding Strong Customer Authentication. Retrieved April 30, 2020, from: https://www.adyen.com/blog/psd2-understanding-strong-customer-authentication

[Benn18] Bennett, B. et al. (2018, March 16). Overlap Between the GDPR and PSD2. Inside Privacy. Retrieved from: https://www.insideprivacy.com/financial-institutions/overlap-between-the-gdpr-and-psd2/

[Blea18] Bleau, H. (2018, October 3). Prepare for PSD2: Understanding the Opportunities and Digital Risks. RSA. Retrieved from: https://www.rsa.com/en-us/blog/2018-10/prepare-for-psd2-understanding-the-opportunities-and-digital-risks

[Carr18] Carr, B., Urbiola, P. & Delle, A. (2018). Liability and Consumer Protection in Open Banking. IIF.

[Craw17] Crawford, G. (2017). The Ethics and Financial Issues of PSD2: Demise of Banks and Other Risks. Moral Cents, 6(1), pp. 48-57.

[EBA18] European Banking Authority (EBA) (2018, July 18). Final Report on Fraud Reporting Guidelines under PSD2. Retrieved May 5, 2020, from: https://eba.europa.eu/sites/default/documents/files/document_library//Final%20Report%20on%20EBA%20Guidelines%20on%20fraud%20reporting%20-%20Consolidated%20version.pdf

[EBA19a] European Banking Authority (EBA) (2019, October 16). EBA publishes Opinion on the deadline and process for completing the migration to strong customer authentication (SCA) for e-commerce card-based payment transactions. Retrieved from: https://eba.europa.eu/eba-publishes-opinion-on-the-deadline-and-process-for-completing-the-migration-to-strong-customer-authentication-sca-for-e-commerce-card-based-payment

[EBA19b] European Banking Authority (EBA) (2019, November 29). Final Report on Guidelines on ICT and Security Risk Management.

[EPC19] European Payments Council (EPC) (2019, December 9). 2019 Payment Threats and Fraud Trends Report. Retrieved from: https://www.europeanpaymentscouncil.eu/sites/default/files/kb/file/2019-12/EPC302-19%20v1.0%202019%20Payments%20Threats%20and%20Fraud%20Trends%20Report.pdf

[EuCo17] European Commission (2017, November 27). Payment Services Directive (PSD2): Regulatory Technical Standards (RTS) enabling consumers to benefit from safer and more innovative electronic payments. Retrieved April 30, 2020, from: https://ec.europa.eu/commission/presscorner/detail/en/MEMO_17_4961

[Gruh19] Gruhn, D. (2019, September 30). 5 Things You Need to Know Right Now About Secure Communications for PSD2. Entrust Datacard. Retrieved April 30, 2020, from: https://www.entrustdatacard.com/blog/2019/september/five-things-to-know-about-secure-communications-for-psd2

[Hask] Haskins, S. (n.d.). PSD2: Let’s open up about anti-money laundering and open banking. Retrieved from: https://www.paconsulting.com/insights/psd2-lets-open-up-about-anti-money-laundering-and-open-banking/

[Hoor17] Hoorn, S. van der (2017, July 19). Betekent PSD2 een inbreuk op de privacy? Retrieved from: https://www.banken.nl/nieuws/20354/betekent-psd2-een-inbreuk-op-de-privacy

[JTFP19] JT FPS (2019, September 10). What are the new risks that PSD2 will bring and how to cope with them? JT International Blog. Retrieved from: https://blog.international.jtglobal.com/what-are-the-new-risks-that-psd2-will-bring-and-how-to-cope-with-them

[KeBe18] Kennisgroep Betalingsverkeer, NOREA (2018). PSD2.

[KPMG20] KPMG (2020). Sustainable compliance amidst payments modernization. Retrieved from: https://advisory.kpmg.us/articles/2020/sustainable-compliance.html

[Lamm17] Lammerts, I. et al. (2017). The Second European Payment Services Directive (PSD2) and the Risks of Fraud and Money Laundering. Retrieved May 5, 2020, from: https://www.amlc.eu/wp-content/uploads/2019/04/The-PSD2-and-the-Risks-of-Fraud-and-Money-Laundering.pdf

[Mant18] Manthorpe, R. (2018, April 17). What is Open Banking and PSD2? WIRED explains. Wired. Retrieved April 30, 2020, from: https://www.wired.co.uk/article/open-banking-cma-psd2-explained

[McIn20] McInnes, S. et al. (2020). Dutch Data Protection Authority investigates account information service providers. Retrieved May 1, 2020, from: https://www.twobirds.com/en/news/articles/2020/netherlands/dutch-data-protection-authority-investigates-account-information-service-providers

[Meni19] Menikgama, D. (2019, May 12). A Deep Dive of Transaction Risk Analysis for Open Banking and PSD2. Retrieved May 5, 2020, from: https://wso2.com/articles/2019/05/a-deep-dive-of-transaction-risk-analysis-for-open-banking-and-psd2/

[OBE20] Open Banking Europe (2020). Infographic on TPPs. Retrieved from: https://www.openbankingeurope.eu/resources/open-banking-resources/

[PrFi19] Privacy First (2019, January 7). European PSD2 legislation puts privacy under pressure. Privacy First demands PSD2 opt-out register. Retrieved from: https://www.privacyfirst.eu/focus-areas/financial-privacy/672-privacy-first-demands-psd2-opt-out-register.html

[PYMN18] PYMNTS (2018). As PSD2 Gets Off the Ground, Fraudsters Gear Up. Retrieved from: https://www.pymnts.com/fraud-prevention/2018/psd2-fraud-attacks-digital-payments-unbundled-banking/

[Sing19] Singer, A. (2019, December 24). Infographic: What Europe really thinks about PSD2. Retrieved from: https://www.riskified.com/blog/psd2-survey-infographic/

[Zepe19] Zepeda, R. (2019, October 27). PSD2: Regulation, Strategy, and Innovation. Finextra. Retrieved from: https://www.finextra.com/blogposting/18057/psd2-regulation-strategy-and-innovation

Outsourcing

Service providers of payment and account information services are required to obtain a license issued by De Nederlandsche Bank (hereafter DNB) or by another supervisory authority in the European Union. The license application process covers various topics. One topic that is increasingly receiving attention from the supervisory authority in the application process is outsourcing. With the introduction of the “EBA Guidelines on outsourcing arrangements” (2019), the requirements for financial institutions on how to enter into, monitor and control outsourcing relationships became more stringent. Ensuring compliance with these guidelines and associated laws and regulations is key for payment service providers to obtain their license in a timely manner.

Introduction

On 30 September 2019, the “Guidelines on outsourcing arrangements” (hereafter Guidelines) of the European Banking Authority (hereafter EBA) entered into force. The Guidelines ([EBA19]) describe the way in which financial institutions enter into, monitor and control outsourcing relationships. All outsourcing agreements entered into on or after this date must comply with the new Guidelines. Existing outsourcing agreements are subject to a transitional regime, whereby the agreements must be adapted in accordance with the Guidelines at the next contract renewal, and in any case before 31 December 2021. Refer to Figure 1 for a graphic overview of the timeline.

Figure 1. Timeline implementation of EBA Guidelines on outsourcing arrangements.

The General Data Protection Regulation (Regulation (EU) 2016/679) also includes provisions on the management of third parties that apply to financial institutions; these have been woven into the Guidelines without adding any specific new data obligations. Hence, it is imperative for financial institutions to ensure that personal data are adequately protected and kept confidential when outsourcing, for example, IT, finance, data or payment services.

Ensuring compliance with these Guidelines and associated laws and regulations is key for payment service providers to obtain their license in a timely manner. This applies specifically, but is not limited, to sound governance arrangements, third-party risk management, the due diligence process, the contractual phase, security of data and systems, outsourcing to cloud providers, and access to information and audit rights.

This article will first outline the key requirements from the Guidelines for each phase of the outsourcing lifecycle before providing direction concerning the impact on the financial sector, including regulators, financial institutions and service providers.

Comprehensive outsourcing guidelines at European level

Outsourcing is a popular way to gain access to (technological) innovations and economies of scale. However, outsourcing also creates new risks for financial institutions, third parties and regulators. The new Guidelines aim to identify, address and mitigate these risks.

The Committee of European Banking Supervisors (CEBS), the predecessor of the EBA, published outsourcing guidelines in 2006. These guidelines were repealed when the Guidelines entered into force on 30 September 2019. The new Guidelines also replace the EBA recommendations on outsourcing to cloud service providers published in 2018. With the new Guidelines on outsourcing arrangements, the EBA introduces harmonized guidelines, which set a new standard for financial institutions within the EU. This is in line with the call from supervisory authorities for more overarching regulations instead of a complex collection of separate and local directives. In addition, more stringent requirements are introduced. For instance, financial institutions now have to report all outsourcing of critical or important functions, whereas previously this was only the case for outsourcing such functions to cloud service providers. Table 1 shows an overview of new and repealed guidelines.

Table 1. Status guidelines and recommendations.

Guidelines for outsourcing: the financial institution must not become an empty shell

The Guidelines require that the outsourcing policy of financial institutions cover the full outsourcing lifecycle, with risks and responsibilities being addressed for each phase in the lifecycle. Figure 2 shows a graphic overview of the outsourcing lifecycle. In order to clearly indicate the requirements for each phase, the Guidelines consist of the following components:

  1. Proportionality and group application
  2. Assessment of outsourcing agreements
  3. Governance framework
  4. Outsourcing process

Figure 2. Outsourcing lifecycle.

In order to ensure full compliance with the Guidelines in each phase of the outsourcing lifecycle, a detailed analysis should be performed to draft an approach for the effective management of outsourcing risks. Each entity should assess which particular controls and measures are already in place and identify the gaps relative to the Guidelines. The KPMG control framework (see Figure 3) is an example of how the aspects of the Guidelines can be covered and can help you comply with the requirements of the new regulation.

Figure 3. KPMG control framework.

Below you will find a short explanation of the most important requirements of the Guidelines.

A. Proportionality and group application

The Guidelines apply to the entire corporate group and therefore also to its subsidiaries. This way, an adequate and consistent application of the Guidelines is imposed, even when subsidiaries are established outside the EU.

The Guidelines emphasize the principle of proportionality. Financial institutions that wish to outsource business activities are required to weigh up the nature, scale and complexity of these activities so that the outsourcing risks can be estimated, and appropriate measures can be implemented. However, this does not mean that the responsibility for the business activities can be transferred to the service provider. Both the Guidelines and regulator’s publications emphasize the importance of financial institutions retaining responsibility. The EBA specifies that certain management tasks may never be outsourced, including determining the financial institution’s risk profile and management decision making.

Even though ultimate responsibility will always remain with the governing body, financial institutions must ensure that a succession of outsourced activities is not created while they only retain final responsibility, a so-called “empty shell”. Sufficient in-house knowledge and experience must be present to guarantee the continuity of the financial institution and to maintain effective supervision of (the quality of) the services offered by the service provider.

B. (Re-)assessment of outsourcing agreements

In the first instance, it must be determined whether the activities qualify as outsourcing. The Guidelines stipulate that outsourcing exists when activities are performed by a third party on an ongoing and recurrent basis. One-off advice on a legal matter or the hiring of a third party for maintenance work on a building is therefore not considered outsourcing. The EBA has also included a number of examples in the Guidelines of activities that are not considered outsourcing, regardless of their recurrent nature:

  • Outsourcing services that would otherwise not be carried out by the financial institution. These include cleaning services, catering and administrative support, such as mail rooms, receptions and secretariats;
  • Outsourcing services that, due to the laws and regulations, are assigned to third parties (for example, an external accountant for auditing the annual accounts);
  • Market information service providers, such as Bloomberg and Standard & Poor’s;
  • Clearing and settlement activities for securities transactions.

The Guidelines hold the financial institution responsible for having a proper outsourcing policy that addresses all aspects in detail. They contain extra requirements for the outsourcing of critical or important functions, and a thorough analysis of the outsourcing risks must be carried out. Furthermore, with intra-group outsourcing the “arm’s length principle” must be followed, meaning that this should be carried out as if one were dealing with an independent third party.

The Guidelines particularly focus on outsourcing to service providers established in cost-competitive countries outside the EU. Aspects that must be considered include, among others, social and ethical responsibility, information security and privacy, but also, specifically, the powers of local supervisors and the assurances that must be provided to ensure effective supervision (such as access to data, documents, buildings and personnel).

C. Governance framework

The Guidelines have strict requirements when it comes to the governance framework of financial institutions. Below are a number of framework conditions:

  • Outsourcing may never lead to the delegation or outsourcing of responsibilities relating to the management of the financial institution;
  • The responsibilities for the documentation, management and monitoring of outsourcing agreements must be clearly established in the outsourcing policy. This policy must be reviewed and/or updated on a regular basis;
  • Business continuity and exit plans must be present for the outsourcing of critical or important functions. These plans must be tested regularly and revised where necessary. Sufficient in-house knowledge and experience must be present to guarantee the continuity of the company and prevent the institution from becoming an “empty shell”;
  • The internal audit function carries out an independent review of the outsourcing agreements and in doing so, follows a risk-based approach. It is important that conflicting interests are also assessed as part of the review. These must be identified, assessed and managed by management;
  • An outsourcing register must be maintained that includes all the information about outsourcing agreements at group and entity level. This register is necessary for providing an accurate and complete report on outsourcing to the supervisory authorities (a simplified sketch of a register entry follows this list).
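
A simplified sketch of what a structured register entry could look like is shown below, using Python dataclasses. The field names follow themes from the Guidelines (criticality, service provider, location, sub-outsourcing), but the exact set is an assumption, not the EBA's prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class OutsourcingArrangement:
    """One entry in a simplified outsourcing register (illustrative fields)."""
    reference: str
    function: str
    service_provider: str
    provider_country: str
    critical_or_important: bool
    start_date: date
    cloud_service: bool = False
    sub_outsourcing: List[str] = field(default_factory=list)
    next_risk_assessment: Optional[date] = None

register = [
    OutsourcingArrangement(
        reference="OUT-2020-001",
        function="Transaction monitoring alert handling",
        service_provider="Example Services B.V.",
        provider_country="NL",
        critical_or_important=True,
        start_date=date(2020, 1, 1),
        sub_outsourcing=["Example Cloud Hosting Ltd (IE)"],
    ),
]

# Critical or important functions are the ones to report to the supervisor.
to_report = [a for a in register if a.critical_or_important]
print([a.reference for a in to_report])  # ['OUT-2020-001']
```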

D. Outsourcing process

The Guidelines describe the requirements for the outsourcing process. A number of framework conditions are briefly summarized below, whereby the Guidelines follow the outsourcing lifecycle:

  • A pre-outsourcing analysis must be carried out before an outsourcing agreement is entered into;
  • Before the outsourcing commences, the potential impact of the outsourcing on the operational risk must be assessed so that appropriate measures can be taken;
  • Before entering into an outsourcing agreement, it should be assessed during the selection and assessment process whether the service provider is suitable. The financial institution must also analyze where the services are being provided (in or outside the EU, for example);
  • The rights and obligations of the financial institution and the service provider must be clearly assigned and established in a written agreement;
  • The service provider’s performance and the outsourcing risks must be continuously monitored for all outsourced services, with a focus on critical and important functions. Outsourcing of critical and important functions must be reported to the supervisory authority. Any necessary updates to outsourcing risks or performance should have appropriate change management controls in place;
  • There must be a clearly defined exit strategy for the outsourcing of critical and important functions that is in line with the outsourcing policy and business continuity plans.

Impact on the financial sector

The new Guidelines do not only affect financial institutions, but also regulators and service providers. However, the impact of the Guidelines will vary between those who are affected.

Regulators will monitor a new form of concentration risk

Technological innovation is one of the key themes of DNB’s “Focus on Supervision 2018-2022”. The analysis of the consequences and emerging risks of a more “open” banking industry for prudential and conduct supervision is strongly related to the publication of the new Guidelines on outsourcing arrangements.

In addition to the supervision of financial institutions, the new Guidelines make DNB responsible for monitoring concentration risk. This risk arises when certain business activities are outsourced by different financial institutions to the same service provider. This can jeopardize the continuity and operational resilience of financial institutions when the service provider experiences (financial) problems. As outsourcing agreements are not, or not fully, registered centrally, there is currently no complete overview of the concentration risk.

In 2017, DNB conducted a thematic review among banks, investment firms and payment institutions into the scope and control of outsourcing risks. In June 2018, this resulted in the “Good practices for managing outsourcing risks”, which explains, among other things, the requirement for financial institutions to report outsourcing of significant activities to the supervisory authority. Currently, DNB maintains a register of all ongoing outsourcing agreements with cloud service providers. The new Guidelines further expand this reporting obligation to all outsourcing of critical and important functions in order to obtain a complete overview of (sub-)outsourcing by financial institutions. This enables the regulator to monitor the concentration of outsourcing and manage the concentration risk more effectively. Furthermore, it enables DNB to monitor that no financial institutions are emerging where virtually all activities have been outsourced and the institution itself is no more than an “empty shell”.

The Guidelines stress that financial institutions should include a clause in the outsourcing policy and agreement that gives the DNB and other supervisory authorities the right to carry out inspections as and when deemed necessary. Although this clause was already made mandatory in previous EBA guidelines, in practice, it appears that the clause is often not included in outsourcing agreements.

Financial institutions are reminded of their duty of care

The new Guidelines will have a major impact on financial institutions, whereby the problems and challenges can be divided into four general categories:

  1. Retaining (ultimate) responsibility and preventing an “empty shell”
  2. Operational resilience of financial institutions
  3. Central recording of outsourcing and management information
  4. Increasing competition for banks

A. Retaining (ultimate) responsibility and preventing an “empty shell”

To determine the tasks and responsibilities of both the financial institution and the service provider, the outsourcing policy must be evaluated and revised where necessary in order to ensure alignment with the Guidelines. Furthermore, it is recommended to appoint one responsible party (unit, committee or CRO) to monitor risk and compliance with the regulations so as to manage the outsourcing risks effectively. It is also important that outsourcing agreements concluded with service providers are reviewed and adapted to ensure alignment with the requirements set out in the Guidelines.

B. Operational resilience of financial institutions

With the increasing interest in outsourcing business activities, a clear shift from operational risks to supplier risks can be seen. The concentration risk has already been briefly described above, but to an increasing extent there is also the step-in risk that the financial institution itself must provide support to help the service provider remain operational when it finds itself in (financial) difficulty. This step-in risk must be evaluated prior to entering into an agreement and must be managed throughout the duration of the outsourcing and included in the Internal Capital Adequacy Assessment Process (ICAAP).

C. Central recording of outsourcing and management information

Analyses, inspections and surveys of supervisory authorities, among others, have shown that many institutions do not have a central outsourcing register and that management information concerning outsourcing is often sparse. Management often has insufficient insight into the scope of the outsourcing and the relevant risks. In order to fulfil the notification obligation to DNB, financial institutions must create and maintain their own outsourcing register. In addition, there is also the risk that the outsourcing of activities is wrongfully not considered as outsourcing. As a result, the outsourcing is not included in the outsourcing register and is not reported to the regulator. Finally, the assessment of whether functions are critical or important can be somewhat subjective and may lead to an incorrect categorization, with the danger being that the risks are not evaluated and managed according to the outsourcing policy.

D. Increasing competition for banks

In addition to the expansion and tightening of laws and regulations, the banking sector is also facing a rise in new entrants such as FinTech and BigTech companies. With the arrival of non-banking institutions that offer payment services and more, banks are facing increasing competition. A strategic choice can be made to outsource instead of innovating in-house, thereby obtaining faster and more efficient access to (technological) innovations.

Service providers are not excluded: new requirements set by the Guidelines

The new Guidelines will have a major impact not only on financial institutions, but also on service providers. Although they do not directly fall within the scope of the Guidelines, financial institutions are expected to impose the requirements on service providers in order to comply with the new Guidelines. As a result, FinTech companies and other entrants will face the challenge of remaining innovative and competitive in a rapidly changing market, while at the same time confronting the administrative challenges of (indirectly) complying with the Guidelines. In particular, implementing robust management processes and meeting (internal) documentation requirements can significantly increase the burden on emerging service providers.

In short, the new Guidelines have a far-reaching impact

The Guidelines have a far-reaching impact on the financial sector and on banks and their service providers, in particular. The governance framework of the institutions should be reviewed and possibly revised regarding several aspects to ensure compliance with the new regulations. In addition, with the increase in outsourcing of activities, it is becoming increasingly important for financial institutions to have good internal controls in place.

Built-in controls play an important role in this, such as the “three lines of defense”1 model in which segregation of duties and monitoring by independent functions are maintained. Adapting the governance framework, outsourcing policy, processes, outsourcing agreements, etc. is time-consuming and needs to be done thoroughly, but above all, in a timely manner in order to avoid sanctions by supervisory authorities.

Conclusion

The Guidelines came into effect on 30 September 2019. It is therefore important that financial institutions and service providers carry out a detailed review of, among other things, the outsourcing policies and agreements and revise them where necessary in order to comply with the new Guidelines. Specifically for service providers of payment and account information services who find themselves in the license applications process, ensuring compliance with the Guidelines and associated laws and regulations is key to obtaining the license in a timely manner.

In practice, we see that organizations often underestimate this detailed review and that the adjustments needed to comply with the Guidelines prove to be more complex than initially thought. Reviewing and adjusting the outsourcing policy is often not possible without an update of the governance policy, which creates the risk that parts are overlooked and inconsistencies arise between the various documents. It is therefore important that institutions carry out a timely and thorough review in order to avoid challenges due to time pressure and complexity.

In addition, we would like to stress that institutions must be careful not to become an “empty shell” due to a lack of substance. As described above, the institution must retain ultimate responsibility. With the new Guidelines, there will be a renewed regulatory focus on this area, with potentially far-reaching consequences if the conditions of the licenses are no longer met.

Notes

  1. In the “three lines of defense” model, the risk management, compliance and actuarial function form the second line and the internal audit function forms the third line, while the operational business is conducted in the first line. In such an arrangement, the four key functions operate independently from the first line and from each other. The operationally independent functioning of key functions does not exclude effective cooperation with other (key) functions ([DNB18]).

References

[DNB18] De Nederlandsche Bank (2018). Operationeel onafhankelijke en proportionele inrichting van sleutelfuncties. Retrieved from: https://www.toezicht.dnb.nl/3/50-237420.jsp

[EBA19] European Banking Authority (2019, 25 February). Guidelines on outsourcing arrangements. Retrieved from: https://eba.europa.eu/regulation-and-policy/internal-governance/guidelines-on-outsourcing-arrangements

Emerging from the shadows

Shadow IT might sound threatening to some people, as if it originates from a thrilling detective novel. In an organizational context, the term simply means IT applications and services that employees use to perform their daily activities and that are not approved or supported by the IT department. With recent developments forcing many people to work from home, employees are turning to Shadow IT even more. Although these applications can be genuinely valuable and help employees with innovation, collaboration and productivity, they can also open the door to unwanted security and compliance risks. In this article, we take a look at the challenges presented by Shadow IT, and the methods to manage them, so that the risks do not outweigh the benefits.

The shifting challenges of Shadow IT

As bandwidth and processing power have grown, software companies have invested heavily in cloud-based software and applications. Recent research ([Syma19]) suggests that companies largely underestimate the number of third-party applications being used in their organization – the actual number of apps in use is almost 4 times higher on average. Some of these applications have been immensely valuable, bringing about digital transformation by speeding up processes, saving costs, and helping people to innovate. They can also point to unmet software needs: for example, if employees are signing up for a cloud-based resource management tool, it may show that the company’s existing offerings are not up to the job. However, these applications may bring certain risks and challenges if not managed properly, as outlined below.

  1. Data leaks and data integrity issues

    Data is the primary concern when unsanctioned or unknown applications are used to store or process enterprise data. When less secure applications are used, there is a high risk of confidential information falling into the wrong hands. In addition, spreading data across many Shadow IT services fragments the organizational IT portfolio and reduces the value and integrity of that data.
  2. Compliance and regulatory requirements

    Legislation such as the GDPR and local regulations for data export have raised the level of scrutiny and massively increased the penalties for data breaches, especially around personal data. Business or privacy-sensitive data may be transferred to or stored in locations with different laws and regulations, possibly resulting in regulatory non-compliance incidents. There is also a risk of non-compliance with software licensing or contracts if employees agree to the terms and conditions of certain software without understanding the implications or involving the appropriate legal function.
  3. Assurance and audit

    In an ideal scenario, IT or risk departments could simply run regular audits to identify and either accredit or prohibit specific applications. In practice, this is an impossible task. It is not unusual for large organizations to run thousands of Shadow IT applications, yet the IT and risk departments that are trying to reduce this number, and to understand the usage and associated risks, can only handle a few hundred applications per year at best.
  4. Ongoing and unknown costs

    Shadow IT can be expensive, too. When businesses don’t know which applications are already in use, they often end up using the wrong services, or overpaying for licenses and subscriptions. For instance, multiple departments could be using unsanctioned applications to perform their day-to-day activities. As the usage of these applications occurs under the radar, the organization cannot take advantage of competitive rates, assess security requirements, or request maintenance and support services directly from the application provider.
  5. Increased administrative burdens

    Why can’t corporate IT departments simply solve the problem by banning the use of these applications? They can, but doing so eliminates any productivity gains that the business may be getting, and probably damages employee engagement in the process. Worse still, employees may look for alternative tools that are not on the prohibited list, but may in fact be even riskier.

Solution: Converting Shadow IT to Business Managed IT

We propose the following way forward: give business users ownership of Shadow IT risk and involve them in the risk management process, instead of leaving it entirely up to IT or risk departments. Applications and services that are known to an organization and have successfully passed the risk management process are called Business Managed IT. According to [Gart16], Business Managed IT addresses the needs of both IT and the business in “selecting, acquiring, developing, integrating, deploying and managing information technology assets”.

Research ([Harv19]) states that almost two-thirds of organizations (64%) allow Business Managed IT investment, and one in ten actively encourage it. The same research found that organizations that actively encourage Business Managed IT are much more likely to be significantly better than their competitors in a number of areas, including customer experience, time to market for new products (52% more likely), and employee experience (38% more likely). [Forr18] noted that the majority of digital risk management stakeholders work in information security (50%), threat intelligence (26%) or IT (15%), and encouraged organizations to engage the teams that actually use the applications in setting the Business Managed IT strategy.

We see many organizations within the Netherlands and the EU taking small steps towards Business Managed IT as a strategy. Companies are increasingly aware of Shadow IT, and some of them are already busy discovering, filtering, registering, and risk assessing Shadow IT apps. According to [Kuli16], most of these activities are performed manually with some help from automation – typically for blacklisting or whitelisting apps or running Shadow IT discovery with Cloud Access Security Brokers (CASBs). The actual Shadow IT registration and risk management processes are usually done manually by IT or risk departments using lengthy risk questionnaires. The result is low throughput, with businesses often waiting months or even years before the applications and services they want get the right internal approval.

We believe this model becomes more sustainable and future-proof when the business becomes the actual owner of Shadow IT apps, including the process of their registration, risk management, and risk mitigation. The risk questionnaires themselves should be simplified to focus on what is really important in identifying actual risk and the required mitigating measures. This way, business users can take on a new risk role without needing to be tech-savvy, and IT and risk departments can focus on cases where their involvement is really required – for example high-risk apps, where an application is better run centrally by IT rather than owned by the business. For the lower-risk scenarios, business ownership means that apps and services are available without long delays.
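
To make this concrete, below is a minimal Python sketch of what such a simplified, business-facing risk questionnaire could look like once automated. The criteria, thresholds and tier labels are illustrative assumptions only, not a prescribed KPMG or industry model.

# Minimal sketch of a simplified Shadow IT risk questionnaire, assuming three
# illustrative criteria (data classification, number of users, vendor
# certification). Real questionnaires and thresholds will differ per organization.

from dataclasses import dataclass

@dataclass
class AppAssessment:
    name: str
    data_classification: str   # "public", "internal" or "confidential"
    user_count: int
    vendor_certified: bool     # e.g. an ISO 27001 or SOC 2 attestation

def risk_tier(app: AppAssessment) -> str:
    """Map a short business-completed questionnaire to a risk tier."""
    score = 0
    score += {"public": 0, "internal": 1, "confidential": 3}[app.data_classification]
    score += 1 if app.user_count > 50 else 0
    score += 0 if app.vendor_certified else 2

    if score >= 4:
        return "high - involve IT/risk department"
    if score >= 2:
        return "medium - deploy mitigating controls"
    return "low - business-managed"

# Example: a confidential-data app from an uncertified vendor is escalated.
print(risk_tier(AppAssessment("ACME Planner", "confidential", 120, False)))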

Business Managed IT is a strategy and “mind-set”, and the results can be achieved in multiple ways. We encourage organizations to follow what businesses are already doing in their daily work – digitization, automation, analysis – which in the case of Shadow IT risk management means automating the risk management processes with the help of dedicated software. As shown in the maturity graph in Figure 1, not all companies are at this stage – some are still heavily dependent on manual work to run the required processes.

C-2020-1-Kulikova-01-klein

Figure 1. Maturity of Shadow IT risk management.

Setting the groundwork for Business Managed IT

Business Managed IT is an attractive approach but getting the business involved in IT is a new paradigm and should be introduced with care. Implementation requires cultural change and proper communication. The following five steps can help organizations get started:

  1. Define Shadow IT risk ownership by the business and discuss it at a senior level to ensure their support and buy-in.
  2. Set a policy and target operating model for business ownership of Shadow IT, clearly specifying what such ownership means. How will the business work with IT? When will IT and/or the risk department get involved? What are the escalation chains in case of delays or uncertainties in the risk management process?
  3. Secure involvement of change and communications departments. Focus on increasing business awareness with regard to the upcoming changes. Involve people who are skilled at organizational change management rather than relying on IT or risk experts.
  4. Tackle the Shadow IT monster one step at a time. First, initiate a pilot. Then, deploy the new model with one – ideally more mature – department or operating unit to learn lessons that can be applied during further rollout.
  5. Monitor and adjust. Work closely with the business during the roll-out period. Questions and feedback from the business are good, as they help improve the approach – silence is a bad indicator.

An organization’s journey

The organization: A global group of energy and petrochemicals companies with 86,000 employees in 70 countries.

The challenge: The organization required a significant improvement in its risk management practices around Shadow IT, driven by the vast number of known Shadow IT applications, the even larger number of unknown services, and audit findings around the security and privacy of data stored in such services. At the start of the engagement, the organization didn’t have policies or procedures that outlined how employees should use such applications and services, or how the IT and risk management teams could gain insight into and control over this usage.

The approach:

  1. Shadow IT policies and procedures were created and approved by senior IT and risk stakeholders.
  2. Business ownership of Shadow IT apps was defined.
  3. The responsibilities of the IT and risk management departments changed to monitoring only, with their involvement required only for high-risk cases.
  4. Change & communication teams were established to enable the change across the organization. Multiple trainings, videos, train-the-trainer sessions and other learning materials were created to educate business users about the new ways of working.
  5. Pilots and a hyper care period with handholding sessions were used to support any questions during the initial rollout.
  6. The organization used KPMG’s SaaS software built on top of Microsoft Azure Cloud to run the newly established process for Shadow IT. The software, connected to the organization’s application database, enabled the business to perform risk assessments of identified Shadow IT services, discover relevant risks, and automate the deployment and monitoring of controls. It also provided integrated risk insights to the IT and risk departments.

The value delivered:

Business users conducted over 4,000 risk assessments of Shadow IT applications in one year by completing a simple questionnaire. These assessments resulted in 1,000 applications being decommissioned (due to unacceptable risk exposure for the company, or because the applications were no longer relevant) and in specific controls being deployed based on the risks identified through the assessments. Business users appreciated the central database of apps and associated risk ratings that was created as part of this process, which allowed them to look up available apps before purchasing anything extra. Businesses also reached out more frequently to the IT and risk management departments with thoughtful questions, indicating their increased awareness and ownership of Shadow IT risks.

Valuable benefits beyond risk management

Effective risk management is even more challenging for large international enterprises in today’s context of digital transformation and evolving regulation. Organizations should assess and apply their risk appetite and, accordingly, allow the business to continue using applications if they are deemed low risk or if sufficient mitigating controls are in place. When an application poses a high risk, the decision whether to discontinue its usage or to invest in remediation should be made with the involvement of IT or risk management teams.

Business risk ownership and accountability adds an important layer of protection against data breaches and immediately strengthens and facilitates compliance. More importantly, IT becomes an enabler, rather than a department that is viewed as blocking progress.

To support business ownership of IT and applications, more mature organizations can use automated technologies such as CASBs and the KPMG DRP to automate most of the critical Business Managed IT (BMIT) workflows, such as Shadow IT application discovery, application portfolio management, organization-specific risk assessments, control implementation, and monitoring and reporting.

For organizations that are still at the beginning of their journey to mitigate Shadow IT risk, immediate automation of Business Managed IT workflows might be a step too far. In such cases, it is important to start adopting the mind-set of business ownership of IT risk through improved and simplified risk policies as well as business enablement programs, as this is the very first step towards long-term business enablement and the security and privacy of critical organizational data.

References

[Forr18] Forrester (2018). The Forrester New Wave: Digital Risk Protection, Q3 2018, p. 2.

[Gart16] Gartner (2016). Gartner’s Top 10 Security Predictions. Retrieved from: https://www.gartner.com/smarterwithgartner/top-10-security-predictions-2016/

[Harv19] Harvey Nash / KPMG CIO Survey (2019). A Changing Perspective. Retrieved from: https://home.kpmg/xx/en/home/insights/2019/06/harvey-nash-kpmg-cio-survey-2019.html

[Kuli16] Kulikova, O. (2016). Cloud access security monitoring: to broker or not to broker? Understanding CASBs. Compact 2016/3. Retrieved from: https://www.compact.nl/articles/cloud-access-security-monitoring-to-broker-or-not-to-broker/

[Syma19] Symantec (2019). Cloud Security Threat Report. Retrieved from: https://www.symantec.com/security-center/cloud-security-threat-report

How will blockchain impact an information risk management approach?

Blockchain is considered an emerging technology that has the potential to significantly transform the way we transact. New asset classes and transactional models are emerging that substitute conventional payment and settlement platforms. The major advantages that blockchain offers are transparency and the elimination of the need for a custodian. However, organizations implementing blockchain in their IT environment are also faced with a new set of risks arising from this distributed ledger technology. Before organizations can even consider implementing blockchain, they should understand its implications for their information risk management strategy and how this translates to their business. In this article we will take a closer look at blockchain and how it differs from the more ‘conventional’ information systems. Based on the uniqueness of blockchain technology, this article will introduce some of the key risks arising from the implementation of this technology in existing IT environments. In addition, the article will describe how these risks affect information risk management. Facebook’s Libra platform will be used to apply our insights to a real-life scenario. Lastly, the author will conclude with a brief approach for auditing blockchain systems and what IT auditors might take into consideration when faced with this technology.

Introduction

Blockchain is considered a breakthrough in the field of distributed computing and has the potential to completely disrupt existing transactional models and business processes. As shown in a global survey conducted by [Delo19] in 2019 (which polled over 1,000 senior executives), the technology is increasingly being researched by both public and private organizations. One of the key results of the survey shows that “fifty-three percent of respondents say that blockchain technology has become a critical priority for their organisations in 2019” ([Delo19], p. 3). These developments are substantiated by Laszlo Peter, Head of KPMG Blockchain Services in the Asia Pacific: “Blockchain is certainly here to stay. While funding may have slowed in 2019, it simply shows the growing maturity of the market. It is a sign that investors are moving away from the ‘fear of missing out’ mentality (…) and are making more mature investment decisions and focusing on more meaningful initiatives” ([KPMG19], p. 16).

Given its newness, blockchain can still be considered an innovative type of technology. But there is something peculiar about innovative technologies and their application by organizations: innovation can be considered a journey into the unknown. Innovation means exploring how new technologies can be applied to business and IT processes, and this brings uncertainty: after all, if you venture into the unknown, you are not particularly certain about what lies ahead; there are risks (downside and upside) as well as opportunities.

Given the profound impact that blockchain might have on organizations and the way they transact with(in) each other, a thorough information risk management strategy should be designed. The risk management approach should be able to identify and address the risks arising from blockchain and how blockchain-powered processes might impact the control environments surrounding these processes. Designing a risk management approach for blockchain will not only enable organizations to remain in control; it will also help organizations design and implement blockchain securely and appropriately in their business and ensure the effective operation of governance structures for blockchains on which multiple organizations transact. However, before information risk management professionals can start to think of designing a blockchain risk management approach, it is essential that risk professionals profoundly understand blockchain and how it differs from ‘conventional’ information systems.

Based on the relative uniqueness of blockchain technology, this article will introduce some of the key risks arising from the implementation of this technology in existing IT environments and offer an impression of how these risks affect information risk management. This article will reflect on Facebook’s Libra platform to apply our insights to a real-life scenario. Lastly, you will find a high-level approach for auditing blockchain systems and what IT auditors might take into consideration when faced with this technology.

Understanding blockchain

Blockchain is considered a subset of distributed systems. In general, a distributed system can be defined as a group of independent computing elements working together to achieve a common objective ([Stee16]). Now, distributed systems are all around us: from airplanes to mobile phones, anything can be considered a distributed system to a certain degree. Most of these distributed systems are ‘closed’, where only authorized computing elements (i.e. agents) are able to access and operate within these systems. These agents trust each other, and communication is considered safe. This makes sense, as we wouldn’t want unknown agents to be able to access airplanes or our mobile phones and perform harmful activities.

Another example is the internet. In contrast to the two examples mentioned, the internet is a distributed system where unknown agents that do not trust each other can operate and perform activities that might be harmful to other agents (such as yourself) or even to the overall system. If we want to perform certain activities on the internet – such as sending money to a party that we do not necessarily trust – we rely on intermediaries such as financial institutions (banks) to ensure that the amount is actually debited from the sending party and credited to the bank account of the intended party. The banks function as a trusted third party that ensures that the parties involved in the transaction are not able to defraud each other.

How does this relate to blockchain and why exactly is this technology considered a breakthrough in the field of distributed computing ([Kasi18])? On a general level, blockchain is simply one of the ways for multiple parties to reach an agreement (i.e. consensus) on the state of the system (e.g. a ledger or a digital transaction being recorded on that ledger) at a given time without having to rely on a trusted third party or central authority (such as the bank in the example above). Systems that allow for this multi-party consensus are considered to be blockchains ([Weer19]). Where ‘traditional’ distributed systems needed a trusted third party if transacting participants wanted to exchange information, value or goods without trusting each other, blockchains delegate this trust to the participants themselves (i.e. the endpoints); a trusted third party is no longer required.

This article is not intended to go into the details of how blockchain delegates trust to the participants (i.e. endpoints). However, to provide some understanding, a more technical definition introduced by [Rauc18] is provided below.

“A blockchain system is a system of electronic records that:

  1. enables a network of independent participants to establish a consensus around
  2. the authoritative ordering of cryptographically-validated (signed) transactions.
  3. These records are made persistent by replicating the data across multiple nodes and
  4. is tamper-evident by linking them together by cryptographic hashes.
  5. The shared result of the reconciliation/consensus process – the ledger – serves as the authoritative version for these records” ([Rauc18], p. 24).

It is important to understand that there are countless ways of designing a blockchain system. However, in the end, all blockchain systems are considered to have one primary objective: to facilitate multi-party consensus whilst operating in an adversarial environment ([Rauc18]). That is, an environment in which participants might not trust each other or might behave in a manner that is not in line with the best interest of the overall system.
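
To illustrate the tamper-evident property from the definition above, the following minimal Python sketch links records together with cryptographic hashes. Consensus, digital signatures and networking between independent participants are deliberately omitted; the data model is an illustrative assumption.

# Minimal sketch of a tamper-evident, hash-linked ledger: each block stores the
# hash of its predecessor, so altering an earlier record breaks every later link.

import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": previous, "transactions": transactions})

def is_tamper_free(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, [{"from": "A", "to": "B", "amount": 10}])
append_block(ledger, [{"from": "B", "to": "C", "amount": 4}])

print(is_tamper_free(ledger))                   # True
ledger[0]["transactions"][0]["amount"] = 999    # tamper with an old record
print(is_tamper_free(ledger))                   # False: the hash links no longer match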

Permissioned versus permissionless

Broadly speaking, blockchains can be categorized “based on their permission model, which determines who can maintain them” ([Yaga18], p. 5). The Bitcoin network can be defined as a permissionless (public) blockchain, as anyone is able to produce a block (consisting of transactions), read data that is stored on the blockchain and issue transactions on this blockchain network. Since the network is open for anyone to participate in, malicious users might be able to compromise the network. In order to prevent this, “permissionless networks often utilize a multi-party agreement or consensus system that requires users to expend or maintain resources when attempting to produce blocks. This prevents malicious users from easily compromising the system” ([Yaga18], p. 5). In the case of the Bitcoin blockchain, the Proof of Work consensus mechanism is used, where block producers are required to expend computational resources in order to produce a block ([Naka08]). Other consensus mechanism examples include Proof of Stake (Ethereum), Proof of Authority (VeChain) and Proof of Elapsed Time (Hyperledger Sawtooth). Although designed differently, all consensus mechanisms aim to discourage malicious behaviour on the blockchain network ([Weer19]).
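
As an illustration of the Proof of Work idea mentioned above, the simplified Python sketch below searches for a nonce whose hash meets a difficulty target. Real networks such as Bitcoin hash full block headers and adjust the difficulty dynamically, so this is a conceptual example only.

# Minimal sketch of Proof of Work: producing a block requires computational
# effort (finding a suitable nonce), while verifying the result is cheap.

import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce so that the hash starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification requires only a single hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block #1: A pays B 10")
print(nonce, verify("block #1: A pays B 10", nonce))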

Permissioned blockchains are restricted-access networks: the parties responsible for maintaining the network are able to determine who can access it, and only a restricted number of parties are authorized to produce blocks ([Cast18]). Whereas permissionless blockchains are open for anyone, accessing permissioned networks requires approval from the authorized users of that network: “since only authorized users are maintaining the network, it is possible to restrict read access and to restrict who can issue transactions” ([Yaga18], p. 5).

The likelihood of arbitrary or even malicious behaviour on permissioned networks is smaller than on permissionless networks, as only authorized (thus, identified and trusted) users are able to access them. If a user behaves maliciously or not in the best interest of the entire network, access can be revoked by the parties maintaining the network. Although malicious behaviour is discouraged as a result of the network’s restricted access and because a user’s identity needs to be determined, consensus mechanisms may still be used to ensure “the same distributed, resilient, and redundant data storage system as a permissionless network (…), but often do not require the expense or maintenance of resources as with permissionless networks” ([Yaga18], p. 5).

Risks arising from blockchain

Now that we have a basic understanding of blockchain and how it differs from the more ‘conventional’ IT systems, we can take a look at how blockchain technology might affect existing information risk management approaches when it is implemented in existing organizational IT environments. To keep this article brief, the author has selected the following set of key risks arising from blockchain that are worth addressing (see Figure 1).

Scalability & Continuity

Reaching consensus requires coordination and communication between nodes that are often spatially separated from each other and located within the participants’ internal IT environments. This might eventually result in a lack of scalability or even threaten the continuity of the blockchain system and the (business) process activities of organizations relying on the blockchain system.

Centralization & Collusion

A blockchain is composed of independent nodes. Although these nodes operate independently from each other, they might be owned by a single organization or by a group of collaborating organizations. Competitors might be blocked from transacting on this system or risk being restricted from using certain functionalities.

Interoperability

With the advent of blockchain adoption, interoperability between technological generations may be a challenge. A blockchain cannot simply be installed in the existing IT environment of an organization, as it must be connected to legacy IT systems, which usually have their own compatibility limitations, or perhaps even to other blockchains.

Data Management & Privacy

Any transaction proposal that is accepted to the ledger is considered final. Incorrect, incomplete or even unauthorized transactions might result in unintended consequences such as degraded data integrity or violated privacy requirements, because personal data remains accessible and committed transactions cannot be reverted (as required by the right to be erased/forgotten). Sensitive personal data should therefore not be stored directly on the blockchain, but rather ‘off-chain’ or on a ‘sidechain’ (a parallel blockchain), whereby the blockchain does not contain personal data but points to the protected location where that data is stored and can be removed if needed.
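
The following minimal Python sketch illustrates this off-chain pattern: only a salted hash (pointer) of the personal data is written to the ledger, while the data itself lives in a protected off-chain store that can be erased. The storage back-end and salting scheme shown are illustrative assumptions.

# Minimal sketch of off-chain storage of personal data with an on-chain pointer.

import hashlib
import os

off_chain_store: dict = {}   # stand-in for a protected off-chain database
on_chain_records: list = []  # stand-in for the immutable ledger

def register_customer(customer_id: str, personal_data: str) -> None:
    salt = os.urandom(16)
    pointer = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    off_chain_store[customer_id] = {"salt": salt, "data": personal_data}
    on_chain_records.append({"customer": customer_id, "data_hash": pointer})

def erase_customer(customer_id: str) -> None:
    # Deleting the off-chain data (and its salt) leaves only an unlinkable
    # hash on the immutable ledger.
    off_chain_store.pop(customer_id, None)

register_customer("c-001", "Jane Doe, Main Street 1")
erase_customer("c-001")
print(on_chain_records)      # the hash remains, but the personal data is gone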

Smart Contracts

Smart contracts are agreements between blockchain participants that are codified on the authoritative ledger. The contract is executed automatically when certain requirements (typically established by the parties involved) are met. If smart contracts are designed incorrectly, this might result in unintended and unforeseen consequences.
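
Production smart contracts are written in platform-specific languages (e.g. Solidity) and executed by the network itself; the conceptual Python sketch below only illustrates the idea of code that executes automatically once agreed conditions are met, and how an incorrectly coded condition would silently change the outcome.

# Conceptual illustration only: a simple escrow-style agreement whose release
# condition is evaluated automatically by code rather than by a trusted party.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self) -> None:
        self.delivered = True

    def try_release(self) -> str:
        # If this condition were coded incorrectly (e.g. always True),
        # funds would be released regardless of delivery.
        if self.delivered and not self.released:
            self.released = True
            return f"release {self.amount} to {self.seller}"
        return "conditions not met"

contract = EscrowContract("buyer-A", "seller-B", 100)
print(contract.try_release())   # conditions not met
contract.confirm_delivery()
print(contract.try_release())   # release 100 to seller-B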

Consensus & Network

Achieving consensus in a blockchain generally involves a complex set of mathematical functions and coordination between the network nodes. In addition, in order to ensure that (the majority of) the nodes exhibit honest behaviour, economic game theory needs to be considered in the consensus process as well. If the consensus process is flawed, organizations transacting on this blockchain might be exposed to significant risks – both operational and financial.

Compliance

The immaturity of blockchain technology is visible in the regulatory space as well, where laws and governmental policies for applying and operating blockchain technology are still in an embryonic stage. In addition, by its very nature, blockchain allows for transacting between parties that do not need to know or trust each other. This exposes an organization to the risk of participating in money laundering or terrorist financing.

Functional requirements

Careful consideration should be given to the decision to implement a blockchain; not only regarding the necessity of implementing a blockchain in an existing IT environment, but also regarding which type to select. Selecting or developing a blockchain that does not align with the organization’s business or operating model needs might have significant consequences for the organization’s business activities that rely on the blockchain.

Cryptographic Key Management

Blockchains employ cryptographic functions such as hashing algorithms and public key cryptography to ensure the integrity of the overall system and guarantee its safety. Improper management of cryptographic key pairs might result in unauthorized access to the system.
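
As an illustration of why key management matters, the sketch below uses the third-party Python `cryptography` package (assumed to be installed) to sign and verify a transaction with an elliptic-curve key pair: whoever holds the private key controls the corresponding assets, so losing or leaking it means losing or exposing that control.

# Minimal sketch of public-key signing of a transaction.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256K1())  # must be kept secret
public_key = private_key.public_key()                  # shared with the network

transaction = b"A pays B 10"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# Tampering with the transaction invalidates the signature.
try:
    public_key.verify(signature, b"A pays B 1000", ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered transaction rejected")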

Third Party & Governance

Where the effective operation of traditional IT systems (i.e. where every organization is the owner of its own IT) primarily relies on the control environment of the organization itself, blockchains rely on both the overall control environment of the network and the control environments of the individual participating organizations. One can argue whether ‘third parties’ in a blockchain context are actually ‘second parties’. (See further the box on blockchain governance.)

C-2019-4-Weerd-01-klein

Figure 1. Domains where risks might arise from using blockchain.

Impact of blockchain on information risk management

The field of information risk management is broad in nature and extensively covered in both academia and business. On a general level, information risk management (IRM hereafter) can be defined as “the application of the principles of risk management to an IT organisation in order to manage the risks associated with the field” ([Tech14]). To support the design of an effective IRM strategy, several standards and approaches have been published that aim to help organizations manage IT risks and design an IT control environment. Examples of these standards are the Handreiking Algemene Beheersing van IT-Diensten from NOREA, the ISO 27001 framework from ISO, the COBIT standard and the COSO management model.

When we consider the abovementioned risks arising from blockchain, it appears that these risks primarily relate to the absence of a trusted third party or a central authority: where the current IT environments of organizations can typically be thought of as centralized silos (operated and managed by a single party) that are logically separated from each other, blockchain-powered IT environments dissolve these boundaries as organizations transact on the same system.

Extending this development to information risk management: with centralized IT environments, the Information Risk Management organization is primarily concerned with the internal control environment surrounding its own centralized IT environment. Generally, this control environment is sufficient to address the risks arising from IT and facilitate the appropriate operation of the IT environment.

However, when organizations implement blockchain systems, they effectively open up their IT environment to third parties (perhaps even unknown parties or competitors) that are not necessarily trusted by the organization (i.e. the organization will operate in an adversarial environment).

Taking a closer look at Libra

If we look at this from a more practical perspective, let us take a closer look at Facebook’s Libra initiative: a consortium of major organizations – including Facebook, Spotify, Uber and Vodafone – that is developing its own blockchain with the objective of operating as a global currency transactional model ([Libr19]). The following stakeholders are involved in the management of the platform:

  • The Libra Association governs the network.
  • Libra Networks LLC develops the software and infrastructure.
  • The actual blockchain network consists of nodes run by the individual Association members.
  • Users (consumers and other organisations) can operate on this network.

C-2019-4-Weerd-02-klein

Figure 2. Visualizing Libra’s actors and their relationships.

When we look at the relationships between the actors involved with Libra, one can argue that the key risks relating to the inherent properties of the Libra blockchain and its multi-party transactional model are as follows:

  1. Competitors are collaborating on the platform, but there is no guarantee of fair play and a level playing field.
  2. Node validators (organizations involved in the consensus process and the validation of transactions taking place on the platform) have no insight into each other’s environments, and it is therefore difficult for these organizations to verify whether they are all adhering to the standards and requirements set by the governing body (the Libra Association) or whether their control environments operate effectively.
  3. Furthermore, it is difficult for the governing members to verify that the developing party exercises its responsibilities in an objective manner and does not provide certain participants (e.g. Facebook) with a competitive advantage over other governing members or over organizations that are not part of the network’s governance body.

In order to ensure that all stakeholders involved are comfortable with transacting on the Libra platform, the risks mentioned above (among others) should be addressed first. It appears that addressing these risks – i.e. designing an effective information risk management strategy – requires multi-party collaboration and governance (see also the box on a blockchain governance case).

Governance considerations

The governance considerations of a platform can make or break the success of not only your organization’s implementation but the continuity of the entire platform. An illustrative case is the IBM and Maersk supply chain platform TradeLens. In 2018, the companies announced a joint venture to unify the shipping industry on a common blockchain platform. The platform was developed within a governance model that put major decision-making power in the hands of the founders, allowing them to retain the intellectual property of the shared platform and forcing other logistics companies to invest significantly in blockchain platform software. This resulted in a reluctant reception and very limited onboarding of other participants, limiting the transaction volume via this platform. As a consequence, the tipping point for success couldn’t be reached. After restructuring the governance model, other companies, such as CSX, PIL and CEVA, decided to join.

The correct governance model for your platform is not a one-size-fits-all and depends on several factors. These factors include, but are not limited to:

  • strategy and mission-criticality
  • policy/decision-making and risk sharing
  • participant roles, responsibilities and representation
  • node management
  • type and variety of international regulatory jurisdictions
  • desired permission level of features
  • cost of ownership, incl. financing and cost charging
  • supervisory bodies and assurance

Auditing blockchain

To mitigate the risks arising from blockchain, organizations are able to design control environments surrounding their blockchain systems and the business processes transacting on those systems. To give an example of controls that might be designed, the author has included a small selection of controls intended to mitigate risks related to the Centralization & Collusion domain and the Data Management & Privacy domain introduced earlier.

C-2019-4-Weerd-t01-klein

Table 1. An extract of a blockchain risk and control framework.

When we extend this to the field of IT audit, we might consider the approach of the IT auditor to become less singular and more driven from an ecosystem perspective. The IT audit does not stop at the boundaries of the IT (control) environment of the organization; it extends to the control environment of the bigger network or consortium and of the individual participants with which the organization transacts. Therefore, IT auditors need to equip themselves with the capabilities to audit a governing network (i.e. a consortium) and develop the skillsets to properly assess multi-party risks.

In the author’s opinion, IT auditors will extend their focus to third-party (smart) contracts, resolution models and how consensus is configured – both from a technical and an economic game theory perspective. IT audit will need to stop treating IT environments as singular and start treating them as a risk ecosystem comprised of multiple actors.

For further details on assessing and auditing blockchain implementations, please refer to [KPMG18] and [ISAC19].

Conclusion

The topic of blockchain and its impact on information risk management could fill an entire book by itself. If organizations want to remain in control of their blockchain-enabled IT environment, considering only the internal IT control environment is no longer sufficient: organizations need to start taking into account the control environment of the entire blockchain network, as well as the internal control environments of each participating organization acting as a node validator. The IT control environment of an organization implementing a blockchain therefore becomes an ‘ecosystem’, where its own control environment and information risk management strategy depend on the control environments of the broader ecosystem and its individual participants. In essence, the shift towards distributed ledger technology results in a shift towards distributed control environments as well.

Blockchain technology has the potential to digitize supply chains, business processes, assets and transactions. How will the Information Risk Management organization and the IT auditor conduct their risk assessments? How can an effective control environment be designed when organizations become part of digital ecosystems? These are valid questions that ought to be resolved before organizations can think of harnessing the full potential of blockchain technology. The author is convinced that the Information Risk Management professional and the IT auditor have an exciting future ahead of them and can make a great contribution to helping organizations transform in an appropriate and controlled manner.

The author would like to thank Raoul Schippers for his contribution on blockchain governance.

References

[Cast18] Castellon, N., Cozijnsen, P. & Goor, T. van (2018). Blockchain Security: A framework for trust and adoption. Retrieved from: https://dutchblockchaincoalition.org/uploads/DBC-Cyber-Security-Framework-final.pdf.

[Delo19] Deloitte (2019). 2019 Global Blockchain Survey. Retrieved from: https://www2.deloitte.com/content/dam/Deloitte/se/Documents/risk/DI_2019-global-blockchain-survey.pdf.

[ISAC19] ISACA (2019). Blockchain Preparation Audit Program. Retrieved from: https://next.isaca.org/bookstore/audit-control-and-security-essentials/wapbap

[Kasi18] Kasiderry, P. (2018). How Does Distributed Consensus Work? Retrieved from: https://medium.com/s/story/lets-take-a-crack-at-understanding-distributed-consensus-dad23d0dc95.

[KPMG18] KPMG (2018). Blockchain Technology Risk Assessment. Retrieved from: https://home.kpmg/xx/en/home/insights/2018/09/realizing-blockchain-potential-fs.html.

[KPMG19] KPMG (2019). The Pulse of Fintech 2019. Retrieved from: https://home.kpmg/xx/en/home/campaigns/2019/07/pulse-of-fintech-h1-19-europe.html.

[Libr19] Libra Association Members (2019). An Introduction to Libra.

[Naka08] Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. Retrieved from: https://bitcoin.org/bitcoin.pdf.

[Rauc18] Rauchs, M. et al. (2018). Distributed Ledger Technology Systems: A Conceptual Framework. Retrieved from: https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/centres/alternative-finance/downloads/2018-10-26-conceptualising-dlt-systems.pdf.

[Stee16] Steen, M. van & Tanenbaum, A.S. (2016). A brief introduction to distributed systems, Computing, 98, 967-1009.

[Tech14] Techopedia (2014). IT Risk Management. Retrieved from: https://www.techopedia.com/definition/25836/it-risk-management.

[Weer19] Weerd, S. van der (2019). An exploratory study on the impact of multi-party consensus systems for information risk management.

[Yaga18] Yaga, D. et al. (2018). Blockchain Technology Overview, NISTIR8202. Retrieved from: https://csrc.nist.gov/publications/detail/nistir/8202/final.