
Securing the quality of digital applications: challenges for the IT auditor

Ever since digital solutions emerged, questions about their reliability and security have been a central concern. An increasing number of individuals and companies are asking for assurance, and standards are needed to report on the quality of digital solutions in use. Primary responsibility rests with an organization’s management; however, engaging an independent IT auditor can provide additional value.


Digital developments are happening at lightning speed. We are all aware of the many digital applications and possibilities in both our business and personal lives. Often, however, we know and use only 10 to 20 percent of the capabilities of current solutions, and yet we are constantly looking for something new. Or is this simply the result of an ever-accelerating “technology push”? The COVID-19 pandemic that started in 2020 showed us once again that digital tools are indispensable: they kept us connected and operational and allowed us to keep communicating with one another.

How do we know whether digital applications and solutions are sufficiently secure? Do the answers generated by algorithms, for example, reflect integrity and fairness? Are we sufficiently resilient to cyber-attacks, and are we spending our money on the right digital solutions? These questions are highly relevant for directors and supervisors of organizations, as they must be able to account for their choices. Externally, the board report provides the basis for policy accountability; it is primarily retrospective in nature and follows an annual cycle. The board report could explicitly discuss the digital agenda. The professional association of IT auditors (NOREA) is investigating whether an (external) IT audit statement ([NORE21]) could also be added (see also this article on the new IT audit statement). Accountability for the quality of digital applications, and for whether everything is done securely, with integrity and effectively, takes on new dimensions now that developments are happening at lightning speed and everyone is connected to everyone else. Administrators and regulators, as well as end users and consumers, are looking for assurance that digital applications and the resulting data are correct. Validation through assurance by an IT auditor is an effective tool for this purpose. A confirmation of quality on the digital highway must and can be found.

These issues are at play not only within organizations, but also in broader society. Privacy protection is firmly under pressure: numerous digital solutions continuously build personal profiles. There are also painful examples of the use of algorithms in the public domain ([AR21]) that have seriously harmed a number of citizens. Responsible development toward more complex automated applications requires better oversight and quality control, according to the Netherlands Court of Audit in its 2021 report on algorithms ([AR21]). Issues of digital integrity, fairness, reasonableness and security have taken on social significance.

With the introduction of the Computer Crime Act (WCC I) in the 1980s, an explicit link to accountability for computerized data processing emerged for the first time. Meanwhile, the Computer Crime Act III (WCC III) ([Rijk19]) has been in force since 2019, taking into account many developments in the field of the Internet and privacy. As the final piece in the chain of control and accountability from the WCC I onwards, the auditor must explicitly express an opinion on the reliability and continuity of automated data processing insofar as relevant for financial reporting, pursuant to Book 2, Article 393(4) of the Dutch Civil Code. Over four decades have passed, and we now face an expanding body of legislation governing the control of digital solutions. These solutions extend beyond administrative processes to all core business functions, bringing with them a shift in the perspective on the associated risks.

In short, it’s time to consider how quality on the digital highway (such as security, integrity, honesty, efficiency, effectiveness) can be assured. How can accountabilities be formed, what role do managers and supervisors play in this, and how can IT auditing add value? As indicated, these questions play a role not only at the individual organizational level, but also at the societal level. For example, how can the government restore or regain the trust of citizens by explicitly accounting for the deployment of its digital solutions?

IT auditing concerns the independent assessment of the quality of information technology (processes, governance, infrastructure). Quality has many facets: not only integrity, availability and security, but also fairness and honesty. The degree of effectiveness and efficiency can also be assessed. To date, IT auditing is still mostly focused on individual digital applications and remains too limited when it comes to the overall coherence of digital applications within an organization’s IT governance. Used more integrally, IT auditing can be an important tool for confirming quality or identifying risks in the development and application of digital solutions. This establishes a harmonious interplay between the organization’s responsibility for its IT governance and the validation of its quality by an IT auditor.

Technology developments

The COVID crisis has undeniably brought remote work to the forefront and has heightened the significance of adaptable IT. Several emerging trends underscore the landscape of digital solutions and advancements.

What is noteworthy is that a considerable number of organizations run an intricate blend of technology solutions, combining legacy systems with contemporary online (front-office) solutions. Ensuring data integrity, keeping all solutions running continuously, making the right investments, paying for maintenance of legacy solutions, and planning all of that is certainly not an easy task.

Let’s briefly highlight a few trends commonly cited by multiple authors ([KPMG20]; [Wilr20]):

  • Flexible work is becoming the norm. Last year, the cloud workplace – more than predicted – grew in popularity. Employees had to work from home, which required a flexible and secure IT workplace.
  • Distributed cloud offers new opportunities for automation. The cloud will also continue to evolve, continuously creating new opportunities that support business growth. According to Gartner analysts ([Gart20]), one of these is the distributed cloud. It can speed up data transfer and reduce its costs. Storing data within specific geographic boundaries – often required by law or for compliance reasons – is also an important reason for choosing the distributed cloud. The provider of the cloud services remains responsible for monitoring and managing it.
  • The business use of artificial intelligence (AI) is increasing. Consider, for example, the use of chatbots and navigation apps. This technology will be increasingly prominent in business in the near future. The reason? Computer power and software are becoming cheaper and more widely available. AI will increasingly be used to analyze patterns from all kinds of data.
  • Internet of Behaviors. Data is now the linchpin of most business processes. Data provides insight and therefore plays an increasingly important role in strategic decision-making. This data-driven approach is also applied to changing human behavior, which we call the Internet of Behaviors. Based on these analyses, suggestions or autonomous actions can be developed that contribute to issues such as human safety and health. An example is the smartwatch that tracks blood pressure and oxygen levels and provides health tips based on those data.
  • Maturity of 5G in practice. In 2020, providers in the Netherlands rolled out their first 5G networks. With 5G, you can seamlessly stay connected on the move or in any location without relying on Wi-Fi. Apart from higher data upload and download speeds, the big changes are mainly in new applications, especially in the field of the Internet of Things. Examples include self-driving cars and a surgeon operating on a patient a thousand kilometers away via an operating robot. Such applications are promising.

Management responsibilities

Driving and overseeing digital solutions is not a given. The adage “unknown, unloved” still applies here. The complexity of technology is daunting, the mix of legacy systems and new digital solutions does not make things very transparent, many parties manage part of the technology chain, and the quality requirements are not always explicit.

Still, some form of “good governance” is needed. Antwerp professor Steven De Haes ([DeHa20]) has gained many insights in his studies on IT governance. In his view, governance needs to address two questions concerning digital solutions. The first is whether digital risks are managed, which requires a standard to test against. In line with the COSO framework (COSO: Committee of Sponsoring Organizations) often used in governance issues, (parts of) the international COBIT framework (COBIT: Control Objectives for Information and Related Technologies) ([ISAC19]) can be chosen. Management explicitly identifies the applicable management standards for digital solutions, ensuring that both their design and their operation are clearly established.

The second question is strategic in nature: are the digital developments correct? Is the strategy concerning the deployment of digital solutions correct and are the investments required correct? Answering this requires a good analysis of the organizational objectives and the digital solutions needed to achieve them. As indicated earlier, the main issues are effectiveness and efficiency.

Establishing a robust organizational foundation begins with a well-structured organizational setup. This often involves using a “layer model” to arrange the various responsibilities. The primary responsibility for ensuring the proper use of digital solutions rests squarely on the shoulders of first-line management. This can be assisted by a “risk & control” function that can act as a “second line” to help set up the right controls and perform risk assessments. The second line can also set up forms of monitoring on the correct implementation and use of the digital solutions. Then, as a third line, an internal audit function can assess whether the controls in and around the digital solutions are set up and working properly; if desired, the external audit function can confirm this as well. In short, a layered model emerges to collectively ensure the quality of digital solutions.

Given the tremendous speed of digital change, new knowledge of technology is continuously needed. Effectively coordinating this effort, while maintaining a focus on the quality of solutions and acknowledging their inherent limitations, is the key to successful governance. Governance is not static: changes in the chain must be continuously evaluated and adjusted where necessary. Conceivably, the IT function (the CIO or IT management) could organize a structural technology dialogue that starts with knowledge sessions addressing the quality of digital applications. End users and management share the responsibility of clearly defining quality requirements, overseeing them through change processes, and ensuring ongoing monitoring (or delegating it) to guarantee the quality of digital applications and data.

The suppliers of digital solutions also play an important role. They have to be good stewards and provide better and safer solutions. This does not happen automatically; as is regularly the case, the focus is more on functional innovation than on sound management and security. Buyers of the solutions also still question providers too little about a “secure by design” offering. Proper controls can, and in fact should, be built in during solution design.

Are the new digital solutions becoming so complex that no one can determine the correctness of the content? From a management perspective, we cannot take such a “black box” approach. We cannot accept, for example, deploying a digital application without knowing whether it works safely. Management should pause and prioritize organizing knowledge or acquiring information about the quality before justifying further deployment.

Challenges for the IT auditor

These quality issues can be answered by IT auditors. In the Netherlands, this field has been organized for more than thirty years, partly through the professional organization NOREA (Dutch Association of EDP Auditors)1 and university IT audit programs.

The IT auditor has a toolbox to assess digital solutions on various quality aspects. An increasing number of auditing and reporting standards have been developed to provide clients with assurance or a correct risk picture.

On the positive side, current IT auditing standards can already answer many questions from clients about digital solutions. The key is for IT auditors to adequately disclose what they can do and to work with regulators to enrich the tools. The IT auditor has to use simpler language to clarify what is really going on. Clients can and should sharpen their questioning and take responsibility themselves, such as establishing the right level of control.

IT auditors are currently still mainly looking for technically correct answers and methodologies, while a dialogue is needed about the relevant management questions concerning IT governance. What dilemmas do managers and regulators experience when determining the quality level of digital applications and what uncertainties exist? This is what the IT auditor should focus on. Starting from a clear management question, the IT auditor’s already available tools listed below can be used in a much more focused way.

From an auditing perspective, the ISAE 3402 standard (ISAE: International Standards on Assurance Engagements)2 was developed for outsourcing situations, to inform both the client organization and its auditor about the quality of the controls at the service organization. The emphasis lies on ensuring the reliability and continuity of financial data processing. The resulting report is called a SOC 1 report (SOC: Service Organization Control).

An ISAE 3402 audit requires proper coordination on the scope of work and the controls to be tested (both in design and in operating effectiveness). The performing IT auditor consults with both the service organization and the receiving customer organization to arrange everything properly. This also involves specific attention to the “Complementary User Entity Controls” (CUECs), the additional internal control measures that the customer organization must implement, and the “Complementary Subservice Organization Controls” (CSOCs), the control measures that any IT service providers it deploys must implement. Frequent consultations take place with the client organization’s auditor, who incorporates the ISAE 3402 report as an integral part of the audit process.

The scope of an ISAE 3402 audit can be significant and already provide a solid basis for quality assurance of digital applications. An example from IT audit practice involves a sold division of a company that is now part of another international group. The sold division has plants in over 30 countries, all of which still use the original group’s IT services. A test plan has been set up to test the relevant general computer controls (such as logical access security, change control and operations management, also known as “general IT controls”), and all relevant programmed financial controls in the selected financial systems. In this example, this yields a testing of over eighty general computer controls and over two hundred programmed controls by a central group audit team and audit teams in the various countries.
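A test plan of this size quickly calls for structured tracking. The sketch below is purely illustrative: the control IDs, categories, countries and results are hypothetical and not taken from the engagement described. It shows how a central group audit team might tally results reported by country audit teams.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ControlTest:
    control_id: str
    category: str   # e.g. "general IT control" or "programmed control"
    country: str
    result: str     # "effective" or "exception"

def summarize(tests):
    """Tally test results per control category, as a central group
    audit team might aggregate results from country audit teams."""
    summary = {}
    for t in tests:
        summary.setdefault(t.category, Counter())[t.result] += 1
    return summary

# Hypothetical sample: two general IT controls and one programmed control
tests = [
    ControlTest("GITC-01", "general IT control", "NL", "effective"),
    ControlTest("GITC-02", "general IT control", "DE", "exception"),
    ControlTest("APP-117", "programmed control", "FR", "effective"),
]
print(summarize(tests))
```

In practice such a tally would feed the deficiency evaluation: any category with exceptions triggers follow-up testing or remediation discussions with the service organization.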

Another assurance report is the ISAE 3000 report, which is prepared to demonstrate that the internal management processes an organization has in place are actually being carried out as described. This standard was originally developed for assurance on non-financial information. It may take the form of an ISAE 3000 attestation (3000A), in which the organization internally defines and reviews standards and controls, with the IT auditor subsequently confirming their effectiveness. Alternatively, it can take the form of a 3000D (“direct reporting”), in which the organization and the IT auditor jointly define the review standards and controls.

The ISAE 3000 report (also referred to as SOC 23) can focus on many issues and also has multiple quality aspects as angles, such as confidentiality and privacy. Standard frameworks have since been established for conducting privacy audits, for example ([NORE23])4 based on ISAE 3000. The North American accounting organizations, including AICPA, CPA Canada, and CIMA5, have collaboratively developed comprehensive standard frameworks, such as SOC 2 modules on Security, Availability, Processing Integrity, and Confidentiality6. These are readily applicable to IT and SaaS services and are increasingly being embraced by IT service providers in Europe. For specific IT audit objects, such as specifically delivered online services/functionalities, these can be further focused or expanded with IT (application) controls relevant to the customer organization.

As a final variant, agreed-upon specific procedures can be chosen, reported under the 4400 standard (agreed-upon procedures, formally ISRS 4400). Users of the report then have to form their own opinion about the activities and (factual) findings presented by the IT auditor in the report.

In recent years, there has been plenty of innovation within the field of IT auditing to also assess algorithms, for example, and make a statement about them. Consider the issue of fairness and non-biased data. An interplay between multiple disciplines unfolds to comprehend the risk landscape of intricate digital solutions and offer assurances. IT auditors are partnering with data specialists and legal experts to ensure the reliability of algorithms.

Over the past 18 months, there has been a growing discourse about including an IT audit statement in, or as an addition to, a company’s annual report. Specifically, the company would articulate its stance on digital solutions, their management and, for instance, the associated change agenda, and an IT auditor could then issue a statement on it. The professional association of IT auditors has developed a plan of action to actively develop this IT report and the communication about it in the coming year. There is ongoing consideration of the level of assurance such an opinion can achieve; the current assurance framework recognizes reasonable and limited assurance. Clients naturally seek maximum or, perhaps better, optimal assurance. In other words, the assurance they seek is not always found in an IT audit statement. Even better would be communication that also provides assurance into the future, an area still untrodden by IT auditors.


As indicated earlier, tools already exist for the IT auditor to confirm the quality of digital applications. Clients must take responsibility to better understand digital applications and set up the corresponding IT governance. IT auditors can improve their communication, can empathize even more with management’s (their clients’) questions, and also provide understandable reports.

Addressing pertinent social concerns related to the implementation of digital solutions involves conducting a comprehensive risk inventory and evaluating the effectiveness of the existing controls. In addition to the traditional concerns focused on reliability and security, issues of effectiveness, efficiency, privacy and fairness come into play. The resilience of digital solutions is also an urgent issue. In the EU, the Network and Information Security Directive (NIS2 Directive)7 and the Digital Operational Resilience Act (DORA)8 for financial institutions have been established to strengthen digital resilience. The regulator of publicly traded companies in the United States (SEC) has also issued guidelines for annual reporting on cybersecurity (risk management, governance) and interim reporting of serious incidents ([SEC23]).

The concept of secure by design is anticipated to become increasingly prevalent, as technology vendors recognize the necessity of implementing robust controls during solution deployment. Some suppliers also provide mechanisms for continuous monitoring, in which the controls put in place are assessed for continuous correct operation and exceptions are reported. Management also plays an important role here and should embrace the principles described above. Remember that it is more effective and efficient to design controls during changes to digital solutions than to fix them afterwards.
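In its simplest form, such continuous monitoring can be pictured as a recurring cycle that evaluates each built-in control and reports only the exceptions. The sketch below is a minimal illustration; the check functions and thresholds (a 12-character password minimum, a 30-day patch window) are assumptions for the example, not prescribed norms.

```python
def check_password_policy(config):
    # Exception when the minimum password length falls below the assumed policy of 12
    return config["min_password_length"] >= 12

def check_patch_level(config):
    # Exception when systems are more than the assumed 30 days behind on patches
    return config["days_since_last_patch"] <= 30

CONTROL_CHECKS = {
    "password policy": check_password_policy,
    "patch management": check_patch_level,
}

def run_monitoring_cycle(config):
    """Evaluate all controls once; return the list of exceptions to report."""
    return [name for name, check in CONTROL_CHECKS.items() if not check(config)]

# Hypothetical snapshot of the environment
snapshot = {"min_password_length": 8, "days_since_last_patch": 10}
print(run_monitoring_cycle(snapshot))  # → ['password policy']
```

Run on a schedule, the exception list becomes the feed the IT auditor can rely on when moving from periodic testing toward continuous auditing.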

If more and more continuous monitoring is provided, the IT auditor can move toward a form of continuous auditing, providing assurance about the deployment of the digital solution at any time. The “anytime, anyplace, anywhere” principle then becomes a reality in IT auditing. A reassuring prospect amid all this digital speed.


  1. See
  2. See, ‘Standards and resources’.
  3. SOC 2 deals primarily with security (mandatory), availability, integrity, confidentiality and/or privacy, as outlined in the SOC 2 guidelines issued by the Assurance Services Executive Committee (ASEC) of the AICPA.
  4. There is a Dutch and an English version of the Privacy Control Framework.
  5. AICPA: American Institute of Certified Public Accountants; CIMA: Chartered Institute of Management Accountants.
  6. See [Zwin21] for an article on SOC 2 and [AICP23] for AICPA and CIMA standards.
  7. See [NCSC23].
  8. See [Alam22] for an article on DORA.


[AICP23] AICPA & CIMA (2023). SOC 2® – SOC for Service Organizations: Trust Services Criteria. Consulted at:

[Alam22] Alam, A., Kroese, A., Fakirou, M., & Chandra, I. (2022). DORA: an impact assessment. Compact 2022/3. Consulted at:

[AR21] Algemene Rekenkamer (2021, 26 January). Aandacht voor algoritmes. Consulted at:

[DeHa20] De Haes, S., Van Grembergen, W., Joshi, A., & Huygh, T. (2020). Enterprise Governance of Information Technology (3rd ed.). Springer.

[Gart20] Gartner (2020, 12 August). The CIO’s Guide to Distributed Cloud. Consulted at:

[ISAC19] ISACA (2019). COBIT 2019 or COBIT 5. Consulted at:

[KPMG20] KPMG (2020). Harvey Nash / KPMG CIO Survey 2020: Everything changed. Or did it? Consulted at:

[NCSC23] Nationaal Cyber Security Centrum (2023). Summary of the NIS2 guideline. Consulted at:

[NORE21] NOREA (2021). Nieuwe IT check: NOREA ontwikkelt IT-verslag en -verklaring als basis voor verantwoording. Consulted at:

[NORE23] NOREA (2023). Kennisgroep Privacy. Consulted at:

[Rijk19] Rijksoverheid (2019, 28 February). Nieuwe wet versterkt bestrijding computercriminaliteit. Consulted at:

[SEC23] SEC (2023, 26 July). SEC Adopts Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure by Public Companies [Press Release]. Consulted at:

[Wilr20] WilroffReitsma (2020). ICT Trends 2021: dit zijn de 10 belangrijkste.

[Zwin21] Zwinkels, S. & Koorn, R. (2021). SOC 2 assurance becomes critical for cloud & IT service providers. Compact 2021/1. Consulted at:

How does new ESG regulation impact your control framework?

Clear and transparent disclosure of companies’ ESG commitments is becoming ever more important. Asset managers are raising awareness of ESG, and there is an opportunity to show how practices and policies are implemented that lead to a better environment and society. Furthermore, stakeholders (e.g., pension funds) are looking for accurate information in order to make meaningful decisions and to comply with relevant laws and regulations themselves. Reporting on ESG is no longer voluntary, as new and upcoming laws and regulations demand that asset managers report more extensively and in more depth on ESG. Given the growing interest in and importance of ESG, we were surprised to find in our yearly KPMG benchmark of Service Organization Control (hereinafter: “SOC”) reports of asset managers that only 7 out of 12 Dutch asset managers report on ESG, and still on a limited scope and scale.


Before we get into the benchmark, we will give you some background on the upcoming ESG reporting requirements for the asset management sector. These reporting requirements mainly relate to the financial statements. However, we are convinced that clear policies and procedures, as well as a functioning ESG control framework, are desirable to achieve compliance with these new regulations. We therefore benchmark to what extent asset managers are (already) reporting on ESG as part of their annual SOC reports (i.e., ISAE 3402 or Standard 3402). We end with a conclusion and a future outlook.

Reporting on ESG

In this section we provide an overview of the most important and relevant ESG regulations for the asset management sector. Most ESG regulation is initiated by the European Parliament and Commission. We therefore start with the basis, the EU Taxonomy, which we cover at a high level, followed by more detailed regulations such as the Sustainable Finance Disclosure Regulation (hereinafter: “SFDR”) and the Corporate Sustainability Reporting Directive (hereinafter: “CSRD”).

EU Taxonomy

In order to meet the EU’s overall climate and energy targets and the objectives of the European Green Deal in 2030, there is an increasing need for a common language among EU countries and a clear definition of “sustainable” ([EC23]). The European Commission has recognized this need and taken a significant step by introducing the EU Taxonomy. This classification system, operational since 12 July 2022, is designed to address six environmental objectives and plays a crucial role in advancing the EU’s sustainability agenda:

  1. Climate change mitigation
  2. Climate change adaptation
  3. The sustainable use and protection of water and marine resources
  4. The transition to a circular economy
  5. Pollution prevention and control
  6. The protection and restoration of biodiversity and ecosystems

The EU Taxonomy is a tool that helps companies disclose their sustainable economic activities and helps (potential) investors understand whether a company’s economic activities are environmentally sustainable or not.

According to EU regulations, companies with over 500 employees during the financial year and operating within the EU are required to file an annual report on their compliance with all six environmental objectives on 1 January of each year, starting from 1 January 2023. The EU Taxonomy report serves as a tool for companies to demonstrate their commitment to sustainable practices and to provide transparency on their environmental and social impacts. The annual filing deadline is intended to ensure that companies regularly assess and update their sustainable practices in order to meet the criteria outlined in the EU Taxonomy. Failure to file the report in a timely manner may result in penalties and non-compliance with EU regulations. It is important for companies to stay informed and up to date on the EU Taxonomy requirements to ensure compliance and maintain a commitment to sustainability.


The SFDR was introduced by the European Commission alongside the EU Taxonomy and requires asset managers to disclose how sustainability risks are assessed as part of the investment process. The EU’s SFDR regulatory technical standards (RTS) came into effect on 1 January 2023. These standards aim to promote transparency and accountability in sustainable finance by requiring companies to disclose information on the sustainability risks and opportunities associated with their products and services. The SFDR RTS also establish criteria for determining which products and services can be considered as sustainable investments.

There are several key dates that companies operating within the EU need to be aware of in relation to the SFDR RTS. Firstly, the RTS officially applies as of 1 January 2023. Secondly, companies are required to disclose information on their products and services in accordance with the RTS as of 30 June 2023. Lastly, companies will be required to disclose information on their products and services in accordance with the RTS in their annual financial reports as of 30 June 2024.

It is important for companies to take note of these dates, as compliance with the SFDR RTS and adherence to the specified deadlines is crucial. Failure to do so may again result in penalties and non-compliance with EU regulations. Companies should also keep up with the SFDR RTS requirements to ensure that they provide accurate and relevant information to investors and other stakeholders on the sustainability of their products and services, as these stakeholders are required to disclose part of this information as well.


The CSRD has been in force since 5 January 2023. This new directive strengthens the rules and guidelines on the social and environmental information that companies have to disclose. In time, these rules will ensure that stakeholders and (potential) investors have access to validated (complete and accurate) ESG information throughout the entire chain (see Figure 1). In addition, the new rules will positively influence companies’ environmental activities and drive competitive advantage.


Figure 1. Data flow aggregation.

Most of the EU’s largest (listed) companies have to apply the new CSRD rules in FY2024, for reports published in 2025. The CSRD will make it mandatory for companies to have their non-financial (sustainability) information audited. The European Commission has proposed to start with limited assurance on the CSRD requirements in 2024. This is a significant advantage for companies, as limited assurance is less time-consuming and costly and will give good insight into current maturity levels. In addition, the Type I assurance report (i.e., on the design and implementation of controls) can be used as a guideline to improve and extend current measures and ultimately comply with the CSRD rules. We expect the European Commission to demand a reasonable assurance report as of 2026. The European Commission is currently assessing which audit standard will be used as the reporting guideline.

Specific requirement for the asset management sector

In 2023, the European Sustainability Reporting Standards (ESRS) will be published in draft by the European Financial Reporting Advisory Group (hereinafter: “EFRAG”) Project Task Force for the sectors Coal and Mining, Oil and Gas, Listed Small and Medium Enterprises, Agriculture, Farming and Fishing, and Road Transport ([KPMG23]). The classification of the different sectors is based on the European Classification of Economic Activities. The sector-specific standards for financial institutions, which will apply to asset managers, are expected to be released in 2024, although the European Central Bank and the European Banking Authority both argue that specific standards for financial institutions are a top priority, given the sector’s driving role in the transition of other sectors to a sustainable economy ([ICAE23]). We therefore propose that financial institutions start analyzing the mandatory and voluntary CSRD reporting requirements, determine through a gap analysis which information they already have and what is missing, and start working on that.

Reporting on internal controls

European ESG regulation focuses on ESG information in external reporting. However, no formal requirements have (yet) been set for the underlying ESG information and data processes themselves. High-quality external reporting requires control over internal processes. Furthermore, asset managers are also responsible for the processes performed by third parties, e.g., the data input received from third parties. It is therefore important for an asset manager to gain insight into the maturity level of the controls over these processes as well.

Controls should cover the main risks of an asset manager, which can be categorized as follows:

  • Inaccurate data
  • Incomplete data
  • Fraud (greenwashing)
  • Subjective/inaccurate information
  • Different/unaligned definitions for KPIs
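Several of these risks lend themselves to automated first-line checks on incoming (third-party) ESG data. As an illustrative sketch – the field names, expected units and checks below are hypothetical assumptions, not prescribed controls – such checks could look like:

```python
# Illustrative data-quality checks on third-party ESG records.
# Field names and unit definitions are hypothetical assumptions.

ALLOWED_KPI_UNITS = {"co2_emissions": "tCO2e", "energy_use": "MWh"}

def check_record(record: dict) -> list[str]:
    """Return a list of data-quality findings for one ESG record."""
    findings = []
    # Completeness: every expected field must be present and non-empty
    for field in ("entity", "kpi", "value", "unit", "source"):
        if not record.get(field):
            findings.append(f"incomplete: missing '{field}'")
    # Accuracy: values must be numeric and non-negative
    value = record.get("value")
    if value is not None and (not isinstance(value, (int, float)) or value < 0):
        findings.append("inaccurate: value not a non-negative number")
    # KPI alignment: unit must match the agreed definition for the KPI
    expected_unit = ALLOWED_KPI_UNITS.get(record.get("kpi"))
    if expected_unit and record.get("unit") != expected_unit:
        findings.append(f"unaligned KPI: expected unit '{expected_unit}'")
    return findings

record = {"entity": "FundA", "kpi": "co2_emissions", "value": -5, "unit": "kg"}
print(check_record(record))
```

Checks like these address the first, second and fifth risk; fraud and subjectivity require judgment-based controls (e.g., independent review) rather than automated validation.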

In order to comply with the regulations outlined in Figure 1, we recommend including the full scope of ESG processes in asset managers’ current SOC reports. The SOC report was originally designed to provide assurance on processes related to financial reporting over historical data. Today, however, more and more attention is paid to non-financial processes, and users of SOC reports increasingly request and require assurance over non-financial reporting processes. We observe that some asset managers are including processes such as Compliance (more relevant for ISAE 3000A), Complaints and ESG in their SOC reports. KPMG performed a benchmark of the processes currently included in asset managers’ SOC reports; we discuss the results in the next section.


By comparing 12 asset management SOC reports for 2022, KPMG observed that 6 out of 12 asset managers include ESG in their system descriptions (description of the organization), and 7 out of 12 asset managers have implemented some ESG controls in the following processes:

  • Trade restrictions (7 out of 12 asset managers)
  • Voting policy (4 out of 12 asset managers)
  • Explicit control on external managers (4 out of 12 asset managers)
  • Emission goals / ESG scores (1 out of 12 asset managers)
  • Outsourcing (0 out of 12 asset managers)

We observe that reporting is currently mostly related to governance components. There is little to no reporting on environmental and social components. In addition, we observe that none of the twelve asset managers report on or mention third party ESG data in their SOC reports.

We conclude that ESG information is not (yet) structurally included in assurance reports. This does not mean that ESG processes are not controlled; companies can have internal controls in place that are not part of a SOC report. In our discussions with users of assurance reports (e.g., pension funds), we received feedback that external reporting on ESG-related controls is perceived as valuable, given the importance of sustainable investing and upcoming (EU) regulations. Based on our combined insights from both the ESG assurance and advisory perspectives, we share our vision on how to report on ESG in the next section.

Conclusion and future outlook

In this article we conclude that only 7 out of 12 asset managers currently report on ESG-related controls in their SOC reports, and then only on a limited scope and scale. This is not in line with the risks and opportunities associated with ESG data, nor with active and upcoming laws and regulations. We therefore recommend that asset managers enhance control over ESG by:

  • implementing ESG controls as part of the internal control framework (internal reporting);
  • implementing ESG controls as part of their SOC framework (external reporting);
  • engaging with external (data) service providers and relevant third parties to assess and address missing ESG controls.

The design of a proper ESG control framework starts with a risk assessment and the identification of opportunities. Secondly, policies, procedures and controls should be put in place to cover the identified material risks. These risks need to be mitigated across the entire chain, which requires transparency within the chain and frequent contact among the stakeholders. The COSO model (commonly used within the financial sector) can serve as a starting point for a first risk assessment, in which we identify inaccurate data, incomplete data, fraud, inaccurate information and unaligned definitions of KPIs as key risks. Lastly, the risks and controls should be incorporated into the organization’s annual risk cycle to ensure quality, relevance, and completeness. Please refer to Figure 2 for an example.


Figure 2. Example: top risks x COSO x stakeholder data chain
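The kind of mapping shown in Figure 2 can be kept in a simple risk register that links each key risk to a stakeholder in the data chain and a mitigating control. The owners and control descriptions below are hypothetical illustrations, not prescribed measures:

```python
# Hypothetical risk register: key ESG risks mapped to an example owner
# in the data chain and an example mitigating control.

risk_register = [
    {"risk": "inaccurate data",           "owner": "data provider",  "control": "source reconciliation"},
    {"risk": "incomplete data",           "owner": "asset manager",  "control": "completeness check on load"},
    {"risk": "fraud (greenwashing)",      "owner": "asset manager",  "control": "independent review of claims"},
    {"risk": "subjective information",    "owner": "asset manager",  "control": "documented methodology"},
    {"risk": "unaligned KPI definitions", "owner": "chain-wide",     "control": "shared KPI dictionary"},
]

def controls_for(owner: str) -> list[str]:
    """List the controls a given stakeholder is responsible for."""
    return [r["control"] for r in risk_register if r["owner"] == owner]

print(controls_for("asset manager"))
```

Slicing the register per stakeholder makes explicit which controls each party in the chain must evidence, which is exactly the transparency the annual risk cycle depends on.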


[EC23] European Commission (2023, January 23). EU taxonomy for sustainable activities. Retrieved from:

[ICAE23] ICAEW Insights (2023, May 3). ECB urges priority introduction of ESRS for financial sector. Retrieved from:

[KPMG23] KPMG (2023, April). Get ready for the Corporate Sustainability Reporting Directive. Retrieved from:

Automation and IT Audit

The introduction of IT in organizations was rather turbulent. Under the motto “First there was nothing, then came the bicycle”, everyone had to learn to deal with the effects of IT on people and processes. This article describes the development of IT and the role of auditors in examining the quality of internal control for the purpose of the financial audit. IT-related laws and regulations are discussed, as well as KPMG’s involvement with professional organizations and university study programs.

For the Dutch version of this article, see: Automatisering en IT-audit

The beginning of IT

From 1965 onwards, computers were introduced in organizations, mostly for simple administrative applications. In this period, the accountancy profession received a wake-up call when, during audit work at clients, (assistant) auditors would report in despair: “They have bought a computer.” At that time, there was as yet no IT organization whatsoever.

The start of automation with batch processing

Simple processes were automated and processed by computers (introduced in the meantime by IBM, BULL, and UNIVAC) that could only perform one process at a time. Responsibility for automation almost always lay with the administrative function of the organization.

The required programs were written in programming languages such as assembler or COBOL. The required functionality was elaborated on pre-printed forms, after which the programmers themselves had to record the instructions on punch cards. These large quantities of punch cards – mostly containing administrative data – were read in at the computer center and recorded on magnetic tapes, after which processing took place. The output was printed on paper. The computer was controlled through the so-called Job Control Language (JCL), with which computer operators could initiate operations.

In time, complexity grew and an expert in the area of computer control programs – the systems programmer – entered the scene. Both the quality and the effectiveness of internal control measures within organizations came under pressure, as this new systems programming function could manipulate processing results out of sight of the internal organization.

The accountancy profession quickly acknowledged that automation could influence the quality of the system of internal control measures within organizations. As early as 1970, the Netherlands Institute of Register Accountants (NIVRA1) issued Publication number 1, entitled Influence of administrative automation on internal control. That same year, the Canadian Institute of Chartered Accountants issued the book Computer Control Guidelines, followed in 1974 by Computer Audit Guidelines. In 1975, NIVRA Publication 13 followed: The influence of automated data processing on the audit.

Use of the computer in the audit

It was a logical step for auditors to use the client’s computer or an in-house computer to obtain the information required for the audit. Standard packages such as AudiTape were marketed. Within KPMG, a department entitled Automation & Control Group was created in 1971, with programmers who ensured that the audit practice was fully equipped. In addition to the widely used statistical “currency ranking” sampling method, a better method was developed: the sieve method.
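In outline, sieve sampling selects each item with a probability proportional to its book value: an item passes the sieve when its value exceeds a random fraction of the sampling interval. A simplified sketch under that assumption (illustrative only, not the original KPMG implementation):

```python
import random

# Simplified sieve-method sketch: each item is selected with probability
# proportional to its book value (capped at 1). Illustrative only.

def sieve_sample(book_values: list[float], interval: float, seed: int = 42) -> list[int]:
    """Return indices of selected items; `interval` is the sampling interval J."""
    rng = random.Random(seed)
    selected = []
    for i, value in enumerate(book_values):
        # Item i passes the sieve when value > r * J for a uniform r in [0, 1),
        # i.e. with probability min(value / J, 1).
        if value > rng.random() * interval:
            selected.append(i)
    return selected

items = [120.0, 5_000.0, 80.0, 9_500.0, 300.0]
picks = sieve_sample(items, interval=4_000.0)
print(picks)
```

Note that every item whose book value reaches the interval is selected with certainty, which is exactly the property that makes monetary-value selection attractive for substantive testing.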

Needless to say, it was stressed that the audit client needed to be present during the processing of the software developed by the auditors or of the standard audit software used.

The development of the COMBI tool (Cobol Oriented Missing Branch Indicator) within KPMG made it possible to identify, using test cases, the “untouched branches” in a program – something that can be applied efficiently during the development phase of programs.
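The principle behind COMBI – executing test cases and reporting the branches they never reach – is what branch-coverage tools still do today. A minimal, hypothetical illustration in Python (not COBOL, and not the actual COMBI tool):

```python
# Minimal branch-coverage sketch: record which branches test cases touch
# and report the "untouched branches". Illustrative only, not COMBI itself.

touched: set = set()

def classify(amount: float) -> str:
    """Toy program under test, with explicitly labelled branches."""
    if amount < 0:
        touched.add("negative")
        return "reject"
    elif amount > 10_000:
        touched.add("large")
        return "review"
    else:
        touched.add("normal")
        return "accept"

ALL_BRANCHES = {"negative", "large", "normal"}

# Run the available test cases ...
for case in (50.0, 25_000.0):
    classify(case)

# ... and report the branches no test case reached.
print("untouched branches:", ALL_BRANCHES - touched)  # → {'negative'}
```

An untouched branch signals either a missing test case or dead code, both of which are findings worth raising during development.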

Foundation of KPMG EDP Auditors

After a short start-up phase of a few years, in which specialized accountants used audit software on computers at audit clients, the Automation & Control (AC) group was established in the period 1971-1973. This group consisted of financial auditors with IT affinity (who were trained and rotated every three years) and programmers for the development of queries and audit software, such as the abovementioned COMBI.

In 1974, it was decided to establish a separate organizational unit entitled KPMG EDP Auditors (KEA, hereinafter KPMG). The auditor’s attention shifted to engaging (IT audit) experts, who had to establish whether the system of internal control measures embedded in the organization was also anchored in the configuration of the information system. The same applied to acquiring certainty that application programs developed by, or under the responsibility of, the user organization would be processed unchanged and in continuity.

Specialized knowledge within the auditor’s organization was required due to the complexity arising from the introduction of large computer systems with online/real-time facilities, database management systems and standard access control software (COTS, Commercial Off-The-Shelf). After all, the organization had to be able to identify the impact of this new technology on internal controls and to recognize the implications for the auditor’s work.

It is in that context that it was decided in 1974 to issue a professional journal entitled Compact (COMPuter and ACcounTant). Initially, it was primarily intended to inform the (financial) audit practice, but it became increasingly appreciated by other organizations as well, mainly audit clients.

Introduction of complex computer systems

From 1974 onwards, the application and use of computers accelerated, as the new computers could perform multiple tasks simultaneously. In addition, an infrastructure was created that allowed the user organization to collect data directly. The IT organization thus became an independent organizational unit, usually positioned within the Finance hierarchy.

IBM introduced the operating system MVS (Multiple Virtual Systems) and shortly after (1975) software under the collective name of Data Base Management Systems (DBMS) was marketed. The emphasis of IT applications was placed on on-line/real-time functionality. Other computer suppliers introduced similar systems.

The efforts of auditors to assess the quality aspects of automation initially focused mainly on the physical security measures of the computer center and the availability of a tested contingency plan.

As the development from batch environments to online/real-time environments continued, the importance of logical security came to the fore, as did the quality of procedures, directives and measures in the automation organization. Think of the arrangement of access control; back-up and recovery procedures; test, acceptance and transfer procedures for application software; library management, etc.

The introduction of complex computer systems not only meant a migration of classically organized data to a new IT environment, but also a migration of control measures to higher software layers (access control systems, subschemas within DBMSs). The entire data conversion project from the classical IT environment to online/real-time necessitated sound planning of the conversion: defining phases, setting up procedures for data cleansing, determining the completeness and correctness of data collection, and taking security measures during the entire project.

Many Compact issues have discussed the complexity of these profound developments and their impact on internal and financial audits.

Minicomputer systems

More or less simultaneously with the introduction of large, complex mainframe computer systems, smaller computer systems were introduced. As these became bigger, they were called mid-range computers.

For the KPMG organization this meant further specialization, as the introduction of minicomputer systems in SME organizations usually had different consequences for the design of the system of internal controls and for the security measures to be taken in these organizations.

KPMG authors successively published articles in Compact on subjects such as the reliability of administrative data processing with minicomputers; the decentralized use of small computers: a new problem; and the influence of the implementation of small-scale automation on the audit.

Newer versions of mid-range computer systems had a more complex architecture, enabling better possibilities for realizing a securely operating IT organization. The security options for the popular IBM AS/400 system, in particular, were covered extensively.

In addition, the security of PC systems and end-user computing was addressed. A Compact Special was entirely devoted to the manageability and security due to the increasing use of PC networks (including risks, methods, audit programs and tooling).

Auditing of information systems

Typical of automation is that the development of hardware and supporting software for data processing occurs (almost) simultaneously. Since the beginning of the seventies, auditors – and more specifically the experts specialized in IT auditing – therefore also focused on auditing information systems to verify whether internal control measures were correctly configured not only in application programs, but also in the underlying operating system software.

A research methodology entitled “System Assessment approach & financial audit” was developed early on in the Netherlands and, given its frequent use, periodically updated. Later – in 1998 – this methodology was followed up by the internationally adopted Business Process Analysis (BPA) method.

The rapid increase in electronic payments, and the mapping of its consequences for manageability, should be mentioned as well, as should the discussions about the use of encryption and the possible consequences of legislation.

The quality of the system development organization was also investigated. This development ultimately led to Enterprise Resource Planning (ERP) systems as applications became further integrated; subsequently, Quality Assurance measures were introduced to improve the control of ERP projects. The literature discussed both the underexposed management aspects of ERP implementations and the complexity of defining and implementing authorizations.

E-business advanced rapidly too. Electronic payments on the Internet became more or less self-evident, and e-mail conventions were developed. Assessing critical infrastructural security mechanisms, such as Public Key Infrastructure (PKI), for which a national framework of audit standards and procedures needed to be developed, became important to IT auditors. The KPMG PKI standards framework was later adopted internationally in the WebTrust standard. Above all, KPMG focused on the assessment of risk management in e-business environments.

Information Security Policy

Information security has been in the spotlight ever since the start of the automation of data processing. Since the early eighties, the subjects of organizational security, logical security, and physical security (see Figure 1), as well as back-up, restart and recovery, and fallback options, have been considered in conjunction.


Figure 1. Class about Information Security (Dries Neisingh at the University of Groningen).

In the nineties, the information security policy was highlighted as a sound foundation for information protection. Since then, many KPMG authors have shared their knowledge and experience of information security in almost all Compact volumes.

Artificial Intelligence and Knowledge Based Systems

At the beginning of the eighties, an investigation was started within KPMG into the possibilities of using Artificial Intelligence (AI) in the audit. In 1985, “Knowledge-Based Systems (KBS): a step forward into the controllability of administrative processes” was introduced as a result of, among other things, the developments in AI, higher processing speeds and larger memories. KBS software does not itself contain human-readable knowledge, but merely algorithms that execute processes based on knowledge stored externally in the rule base.
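The separation described here – a generic inference algorithm operating on externally stored knowledge – can be illustrated with a tiny forward-chaining sketch; the rules are invented examples, not actual KBS content:

```python
# Tiny forward-chaining inference sketch: the engine is generic; the
# "knowledge" lives entirely in the externally supplied rule base.

# Rule base: (conditions, conclusion) pairs -- hypothetical audit knowledge.
RULES = [
    ({"cash-intensive", "no segregation of duties"}, "high inherent risk"),
    ({"high inherent risk", "weak IT controls"}, "extend substantive testing"),
]

def infer(facts: set, rules=RULES) -> set:
    """Repeatedly apply rules until no new conclusions can be drawn."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived - facts  # only the newly derived conclusions

facts = {"cash-intensive", "no segregation of duties", "weak IT controls"}
print(sorted(infer(facts)))  # → ['extend substantive testing', 'high inherent risk']
```

Swapping in a different rule base changes the system’s behavior without touching the engine, which is precisely what made the approach attractive for capturing audit knowledge.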

In the following years, there were new developments, as evidenced by publications on Structured Knowledge Engineering (SKE), developed by Bolesian Systems. Further to the above, KPMG published about “Control software and numerical analysis” and about “Testing as a control technique”.

Microcomputer in the audit

After the successful growth of the use of computers in the (financial) audit, attention partly shifted to the use of the microcomputer in the audit. In 1983, an automated planning system became operational. Subsequently, a self-developed audit package was demonstrated with which file analyses could be executed.

The use of the micro in organizations to support administrative information processing was covered extensively in publications, as was its use by the auditor as an audit tool. The micro was used both stand-alone and as part of a network.

Within KPMG, two projects were started: the development of software for connecting the audit client’s computer with the auditor’s micro, and the development of control programs for processing on the auditor’s micro. KPMG’s Software Engineering department researched software engineering, operating systems (e.g. UNIX), computer viruses, electronic payment, and the debit card.

IT outsourcing

Organizational scale and/or financial capacity sometimes meant that automated data processing was outsourced to computer/IT service organizations, which usually made use of standard packages available on the market. IT outsourcing in particular grew rapidly in the nineties and the early years of this century.

Jointly founded IT organizations – as shared service centers – arose as well. An example is the founding of six computer centers, spread across the country, on behalf of the administrations of healthcare insurers. Each healthcare insurer used the same functionality and was connected online/real-time to one of the regional computer centers. From the start of this special cooperation, KPMG was involved as IT auditor for overall quality assurance. Several opinions were issued on the quality aspects of the newly established IT organization, on its effective operation in continuity, and on the automated business rules and controls of the software. After all, the health insurance funds themselves carried out the user controls.

NBA publication 26, entitled Communications by the auditor related to the reliability and continuity of automated data processing, paid attention to these problems. Later, Publication 53 was published regarding quality opinions on information services. In practice, these were named Third Party Statements (TPMs).

IT-related laws and regulations

Inspection and certification of IT

Since the beginning of the eighties, the subject of IT inspections and certifications regularly popped up on the agenda. The foundation “Institute to Promote Inspection and Certification in the Information Technology”2 was established. The Netherlands Standardization Institute (NNI3) was already working on a standard for quality systems for software. Within KPMG, much attention was paid to the possibility of issuing opinions on software quality systems, but also to the certification of software and development systems. Compact, for instance, published widely on the issues at hand.

Finally, the foundation KPMG Certification was established. In January 1998, it officially received the charter “Accreditation of BS 7799 Certification”4, after ICS (International Card Services, now part of ABN AMRO Bank) had received the first Dutch certificate for this international information security standard by the end of 1997.

In November 2002, the above accreditation of KPMG Certification was followed by the first accreditation and certification of PinkRoccade Megaplex (now part of telecommunications company KPN) under the certification scheme “Framework for certification of Certification Authorities against ETSI TS 101456”. This concerns the provision of digital certificates for the use of digital IDs and signatures, comparable to today’s eIDAS certification.

Memorandum DNB

The Memorandum “Reliability and continuity of automated data processing in banking”, published by the Dutch Central Bank (DNB) in 1988, was in itself no revelation. Since the start of KPMG’s IT audit department, specialized IT auditors had been deployed in the audit practice of financial institutions to assess internal controls in and around critical application software and the measures taken in the IT organization.

Various Compact issues show that the IT audit involvement was profound and varied. My 1991 oration at the University of Groningen, entitled “There are banks and banks”, critically contemplated this Memorandum.

It is worth noting that in April 2001, the DNB presented the Regulation Organization and Control (ROB), in which previous memoranda such as the one regarding outsourcing of automated data processing were incorporated.

Computer crime

From the mid-seventies, publications under the heading “Computer Abuse” appeared with increasing frequency. Several “abuse types” were subsequently described in Compact. The subject remains current.

In November 1985, the Minister of Justice installed the Commission “Information Technology and Criminal Law” under the presidency of Prof. H. Franken. KPMG was assigned by this commission to set up and perform a national survey among business and government to acquire (anonymized) insight into the quality and adequacy of internal controls and of security controls in IT organizations.

In the report that appeared in 1987, the picture painted of the quality and effectiveness of the measures taken was far from reassuring, both in small and in large IT environments. Taking the findings into account, the committee therefore concluded, all things considered, that computer-related crime should be criminalized where there were “breaches in secured work”.

Privacy

The creation of laws and regulations regarding privacy (as the protection of personal data) has a long history. At the end of 1979, the public domain was informed in a symposium called “Information and automation”, which focused on the imminent national and international legislation regarding data protection, privacy protection and international information transport.

Subsequently, Compact was used as an effective medium to inform employees and, especially, clients and relations about developments. In cooperation with the then Data Protection Authority5, a “New brochure on privacy protection” was issued by KPMG further to the Data Protection Act (Wpr) enacted in 1988. Especially from 1991 onwards, there were many publications on privacy authored by KPMG employees. KPMG also conducted the first formal privacy audit in the Netherlands, together with this privacy regulator.

In 2001, the new Dutch Data Protection Act (Wbp) replaced the Wpr – due to the EU Data Protection Directive 95/46/EC. At that time, an updated Privacy Audit Framework was also introduced by a partnership of the privacy regulator with representatives from some public and private IT auditing practices, including KPMG.

Compact 2002/4 published an interview with the chairman of the Board entitled “Auditor as logical performer of Privacy audits. Personal Data Protection Board drives Certification”.

Transborder data flow

As early as 1984, an investigation was performed into the nature and scope of cross-border data traffic and especially into the problems and legislation in several countries.

In 1987, KPMG and the Free University of Brussels published the book entitled Transborder flow of personal data; a survey of some legal restrictions on the free flow of data across national borders. The document consisted of an extensive description per country of the relevant national legislation, from Australia to Switzerland and the OECD Guideline. It discussed the legal protection of personal data, the need for privacy principles and the impact of national and international data protection laws on private organizations.

Encryption

The use of encryption increased rapidly, partly because of the introduction of (international) payment systems. Other applications of encryption also emerged, such as the external storage of data media. In 1984, the Ministry of Justice considered initiating a licensing system for the use of encryption in data communication. Such licenses would also be required for the use of PCs, whether or not included in a network.

Partly as a result of the outcome of KPMG’s investigation “Business Impact Assessment cryptography” and pressure from the business community, any form of encryption regulation was ultimately abandoned.

Legal services

Expanding the KPMG product portfolio with legal IT services was a logical consequence of the above developments; from 1990 onwards, lawyers specializing in IT and information law were recruited. The regulatory developments concerned not only the above-mentioned legal subjects, but also the assessment of, and advice on, contracts for the purchase of hardware, software packages and software development, as well as escrow (source code deposit), dispute resolution, the probative value of computer materials, copyright, etc.

The Compact Special 1990/4 was already entirely devoted to the legal aspects of automation. In 1993, prompted by the developments in IT and law, KPMG published a book entitled 20 on Information Technology and Law, with contributions from KPMG authors and leading external authors. In 1998, one of the KPMG IT lawyers obtained her doctorate with the PhD thesis Rightfully a TTP! An investigation into legal models for a Trusted Third Party. The legal issues had, and still have, many faces and remain an important part of servicing clients.

IT Audit and Financial Audit

The relationship between the IT audit and the financial audit practice has strengthened over the years. As organizations started using IT more intensively in (all) business processes, anchoring internal control and security measures in the IT environment became unavoidable, and determining that the system of internal controls was anchored in continuity in the IT environment required IT audit expertise. Initially, the IT audit supported the financial audit; however, the influence of IT in organizations became so immense that, seemingly, only employees with both an RE and an RA qualification would be capable of performing such an audit.

The publication of the Compact 2001/3 article entitled “Undivided responsibility RA up for discussion: IT auditor (finally) recognized” dropped a bombshell within the accountancy profession. Many KPMG authors published leading articles on the problem. The subject had already been considered in 1977 with the publication of the article “Management, Electronic information processing and EIV auditor”. “Auditor – Continuity – Automation and risk analysis” was extensively covered in 1981. From 1983 onwards, articles on audit and ICT (Information and Communication Technology) were published quite regularly.

In recent years, Compact has explored this decades-long relationship between IT auditing and financial auditing in several articles, such as in Compact Special 2019/4 “Digital Auditing & Beyond”. In this Compact issue, the article by Peter van Toledo and Herman van Gils addresses this decades-long relationship.

The broadened field of IT Audit(or)

Over the years, it became clear that the quality of the general internal control, security, and continuity controls significantly affected the ability to control the IT organization and, with it, the entire organization and its financial audit. Subsequently, the effectiveness and efficiency of the general IT controls system attracted in-depth attention.

From the eighties onwards, KPMG’s Board decided to broaden the service offering by also employing (system) programmers next to auditors with IT expertise, as well as administrative information experts, computer engineers and the like – and finally, even (IT) lawyers. Consequently, a wide range of services arose. The KPMG organization’s pioneering role within the industry also served as a model for the creation of the professional body NOREA.

As the integration of ICT continued to take shape, knowledge and services expanded further in that direction. Some employees obtained their PhD (i.e., promotion to dr.) or an additional legal degree (LL.M), and some even became (associate) professors.

The Chartered Accountants associated with KPMG were all members of NIVRA, now the NBA. The activities undertaken in this organization on behalf of the audit practice were mentioned before. It took quite some time before, in addition to NIVRA, a professional association of EDP auditors was established (1992). The admission requirement for membership of the Netherlands Order of Chartered EDP Auditors (NOREA) was completion of a two- or three-year EDP audit study at one of the three universities that offered this new degree. Of course, there were transitional arrangements for those with proven knowledge, expertise, and experience. Like NIVRA, NOREA installed a Board of Discipline.

Within NIVRA there was much interest in the development of IT and its consequences for the financial audit. However, IT expertise was initially mostly concentrated in the Netherlands Society for Computer Science of IT professionals (NGI6), in which KPMG played an important role in various working groups, such as “Policy and risk analysis”, “Physical security and fall-back”, “Security supported by application software”, “Architecture”, and “Privacy protection and EDP Audit”.

University studies

A prominent practice like KPMG has a mission to also provide a stimulus to research and education in the field. Therefore, KPMG has made an important contribution over the years to university education in the area of both EDP Auditing and on the influence of the use of ICT on the control of organizations and on the financial audit.

On the one hand, it meant the development of an EDP audit study program; on the other, the setting up of new university chairs/professorships in the area of IT audit and administrative organization.

  • Already in 1977, Dick Steeman was appointed at the Erasmus University Rotterdam. Steeman took office as extraordinary professor, delivering the public lecture “Management, Electronic information processing and EIV auditor”.
  • In 1990, Dries Neisingh was appointed professor at the University of Groningen, department of Accountancy, with the chair “reliability aspects of automated information systems”. His inaugural address was entitled “the Memorandum regarding reliability and continuity of automated data processing in banking (Memorandum DNB): there are banks and banks”.
  • At the beginning of 1991, the appointments followed of Cor Kocks as professor of EDP Auditing at the Erasmus University and Hans Moonen at Tilburg University.
  • In 1994, professor Ronald Paans joined KPMG. He was already professor of EDP Auditing at the VU Amsterdam (Free University).
  • In 2002, dr. Edo Roos Lindgreen was appointed professor “IT in Auditing” at the University of Amsterdam. In 2017 he was appointed professor “Data Science in Auditing”.
  • In 2004, dr. Rob Fijneman became professor IT Auditing at Tilburg University.

Figure 2 shows the management of KPMG’s IT Audit practice in 1987 with some of the above-mentioned people.


Figure 2. Management of KPMG’s IT Audit practice upon retirement of Han Urbanus (in 1986). From left to right Dick Steeman, Dries Neisingh, Hans Moonen, Tony Leach, Han Urbanus and his wife, Jaap Hekkelman (chairman of NGI Security), Cor Kocks and Herman Roos. Han Urbanus and Dick Steeman jointly founded KPMG EDP Auditors and started Compact magazine. [Click on the image for a larger image]


The introduction of Compact in April 1974 was an important initiative of KPMG’s IT Audit Board. The intention was to publish an internal periodical on IT subjects on a regular basis. The standard layout consisted primarily of one or a few technical articles, IT developments, ABC News (Automation, Security, and Control), new books and articles added to the library, and finally “readers’ comments”. In the first years, ABC News articles were frequently drawn from EDPACS7 magazine and other international publications.

The first issue opened with the article “the organization of testing” and a contemplative article on “software for the benefit of the financial audit: an evaluation”. In the second issue, the first article was continued with subjects such as test monitoring, acceptance tests and system implementation.

Over the years, Compact became increasingly widespread: both clients and potential clients appeared highly satisfied with the quality of the articles and the variety of subjects. Compact developed into a professional technical magazine! The authors were KPMG employees, with occasional contributions from external authors.

Since 1983, articles regularly addressed the relationship between Audit and IT. In the Compact Specials, the editorial staff explained the purpose of such a special issue: “as usual every year a Special appears on audit and IT Audit. In the meantime, it has become habitual to confront CPAs and Register EDP Auditors (RAs, REs and RE RAs) with the often-necessary deployment of EDP Auditors in the financial audit practice after the completion of the audit of the financial statements and after the cleaning of files and prior to the layout of files for the new year”.

On the occasion of 12.5 years of Compact on Automation & Control, a book about EDP Auditing was published in 1986. The book contained a bundle of updated articles from Compact, written by 24 authors. The preface started with a quote by Benjamin Disraeli: “the best way to become familiar with a subject is to write a book about it”.

Increasingly, Compact Special issues were published. In 1989, a Special appeared on “Security” and in 1990 on “The meaning of EDP Auditing for the Financial auditor”. Five external authors from the business community contributed to this Special, as did a public prosecutor and Prof. mr. H. Franken.

In the run-up to the 21st century, it rapidly became clear to many organizations, and especially to the IT sector and EDP Auditors, that the millennium change would certainly cause problems in the processing of data by application software. Compact became an important medium for drawing attention to this, both internally and externally. Compact 2000/1 looked back with the article “Across the threshold of the year 2000, beyond the millennium problem?”.

The anniversary issue marking 25 years of Compact appeared in 1999/2000. Of the 57 authors, 50 were employed by KPMG in various functions; seven were external authors (among them a few former employees). It was a remarkable feat: a publication of 336 pages with 44 articles. The introductory article was called “From automation and control to IT Audit”. The article “essential assurance over IT” walked through the clusters of articles.

Barely recovered from the millennium problem, the introduction of the euro presented itself. In Compact 2000/1, attention was paid to the introduction of the euro in an article entitled “and now (again) it is time for the euro”. The Compact issues 2000/5 and 2000/6 were entirely devoted to all aspects of the conversion to the euro. Articles were published under headers such as “Euro conversion: a sound preparation is not stopping at half measures”, “Implement the euro step by step: drawing up a roadmap for the transition”, “Validating the euro conversion”, “Euro emergency scenarios” and “Review of euro projects”.


In the thirty years briefly reviewed in this article, a lot has happened in the development and application of IT in business and government. For (financial) auditors, it was not easy to operate in this rapidly changing environment. Training courses were not available, and knowledge was only sparsely present within or outside the organization.

KPMG took the lead in making these problems accessible to accountants by creating KPMG EDP Auditors and simultaneously starting to publish Compact magazine. In addition to auditors, various types of IT professionals were also recruited. Many, the initiators and the successive generations, deserve thanks for the fact that, with the start of KPMG EDP Auditors and the broadening of knowledge areas, the emerging market demand could be served adequately. KPMG ensured in good time that sufficient time and investment were available for education and product development; this is why KPMG EDP Auditors could lead the way in the market.

The thirty years (1971-2002) have flown by. A period in which many have contributed and can look back with satisfaction. This is especially true for the author of this article who has summarized an earlier article of almost sixty pages.


  1. Currently, the Royal Netherlands Institute of Chartered Accountants (NBA).
  2. Original name: “Stichting Instituut ter bevordering van de keuring en Certificatie in de Informatie Technologie (ICIT)”.
  3. Currently, the Netherlands Standardization Institute is named NEN, derived from NEderlandse Norm (Netherlands Standard).
  4. Currently known as ISO 27001 accreditation.
  5. The Data Protection Authority has had different names, aligned with the prevailing Privacy Act. Currently it is named the Authority for Personal Data (in Dutch: “Autoriteit Persoonsgegevens”); before that, the Personal Data Protection Board (in Dutch: “College Bescherming Persoonsgegevens”); and initially the Registration Office (in Dutch: “Registratiekamer”).
  6. Currently the KNVI, the Royal Netherlands Society of Information Professionals
  7. EDPACS was issued by the EDPAA (EDP Auditors Association); currently, the ISACA Journal is published by ISACA, the Information Systems Audit and Control Association.

Spanning fifty years of IT & IT Audit with only four Editors-in-Chief

To commemorate the fifty-year milestone of Compact, the acting Editor-in-Chief interviewed his three predecessors. They cover the early years and history of fifty years of Compact, as well as their expectations for the future of Compact as a disseminator of knowledge and good practices.

Editors-in-Chief of Compact magazine


Dick Steeman, retired, Editor-in-Chief 1974 – 1994

Dries Neisingh, retired, Editor-in-Chief 1994 – 2002

Hans Donkers, ex-partner KPMG, founder WeDoTrust, Editor-in-Chief 2002 – 2015

Ronald Koorn, partner KPMG, Editor-in-Chief 2015 – current

What were remarkable developments in your Compact era?

We started Compact when punch cards were still around, while financial institutions and multinationals began to use new IBM systems with keypunch and programming capabilities (S/360, S/3) that were far more efficient in automating their massive administrative processes. Initially, the accountants used their own computer for “auditing around the computer”. In the early days, the audit focus was on data centers and the segregation of duties within IT organizations.

Knowledge of programming was lacking at accounting firms in the seventies, so we first wrote articles on programming, testing and data analytics for our Financial Audit colleagues. Clients such as Heineken, KLM and ABN AMRO were keen on obtaining Compact as well. That’s how the magazine expanded. Due to the influence of Herman Roos and KPMG’s Software Engineering unit, Compact articles also addressed more technical subjects. So, the target group broadened beyond financial/IT auditors to IT specialists, IT directors and CFOs/COOs.

A nice anecdote is that when we issued Compact editions internally within KPMG the first few years, we were even proactively approached by the publishing company Samsom (now Wolters Kluwer) to offer their services for publication and distribution. We contractually had to issue four editions annually, which was in some years challenging to accomplish – especially besides all regular work. In other years, we completed four regular editions as well as a Compact Special for upcoming IT-related developments, such as Euro migration, Y2K (Millennium), ERP systems or new legislation (e.g., Privacy and Computer Criminality).

In 2001, we issued our first international Compact edition (coordinated by the interviewer), as we wanted to address international variations and best practices. It was distributed to 25 major KPMG countries for their clients, although several non-native English authors overestimated their English writing proficiency.

Compact has always been focused on exchanging good practices and organizations are quite keen on learning from leading companies and their peers. Therefore, we changed the model from a – partly paid – subscription model, where authors were paid as well, via a controlled circulation model to a publicly available magazine. Writing articles was also an excellent way for less experienced staff to dive into a subject and structure it well for (novice) readers to understand. Of course, we’ve also been in situations where we had to hunt for (original) copy and actively entice colleagues to showcase their knowledge and experience in order to adhere to our quarterly publishing schedule. Several authors never completed their epistle, but luckily we always managed to publish a full edition.

We’re all pleasantly surprised by the current depth and quality and that Compact survived this long!

The name Compact was derived from COMPuter & ACcounTant. What do you see as the future target audience?

Besides the traditional target groups of IT auditors, IT consultants and IT-savvy financial auditors, it is also very useful for students. They can supplement their theoretical knowledge with practical examples of how technology can be applied in a well-controlled manner in a business context. There still are very few magazines highlighting the subjects that Compact addresses, such as IT Governance and IT Quality.

Accountants (CPAs), at the very least, need to know about IT due to its criticality for their financial audits; they cannot entirely outsource that to IT auditors. They should also address in their Management Letter whether “IT is in Control”. Of course, Compact is and should remain a good medium for communicating good practices to CFOs, CIOs and CEOs. Sometimes this knowledge sharing can be achieved indirectly via an IT-savvy accountant.

A brief history of IT & IT Auditing

As the past fifty years have been addressed in multiple articles in this edition, we have tried to consolidate the main trends in a summary table. We have aligned this summary with the model in the article “Those who know their past, understand their future: Fifty years of information technology: from the basement to the board” elsewhere in this Compact edition.

Several developments spanned multiple decades; we have indicated only the phase in which their main genesis took place.


How can the Editorial Board further improve Compact?

Compact has survived where other magazines were terminated or simply faded out. For commercial IT magazines it is challenging to sustain a viable revenue model. So it is recommended to keep Compact free of charge and objective, and to emphasize the thoroughness of IT Audit and IT Advisory work based on a foundation of factual findings. That is a real asset in this ever-changing IT domain, where several suppliers promise you a “cloud cuckoo land” and where ISO certifications are skin-deep. Furthermore, it is recommended to include articles written together with clients, as well as photographs, to make the magazine more personal.

More authors could showcase their deep expertise with articles, which also guarantees the inflow of articles and the continuity of Compact. Furthermore, the network of all internal and external authors and their constituents can be leveraged to market the expertise of authors. For instance, besides informing C-level executives, accountants, IT consultants and IT auditors of relevant IT themes, you could also inform a broader group in society. In the past, Compact authors were interviewed for newspapers, TV, industry associations, etc.

About the Editors-in-Chief

Dick Steeman is a retired KPMG IT Audit partner in the Netherlands. Together with Han Urbanus, he established KPMG EDP Auditors and launched Compact. He was the Editor-in-Chief of Compact from 1974 until 1994.

Dries Neisingh is a retired KPMG IT Audit partner in the Netherlands. During his working life he was a Chartered Accountant, a chartered EDP Auditor and professor of reliability and security aspects of IT at the University of Groningen. He was involved with Compact right from the first issue in 1974 and was the Editor-in-Chief from 1994 until 2002.

Hans Donkers used to be a partner at KPMG and is one of the founders of WeDoTrust. He was the Editor-in-Chief of Compact from 2002 until 2015.

Ronald Koorn is an active partner at KPMG in the Netherlands and has been the Editor-in-Chief of Compact since 2015.

Compact editors

Besides the Editors-in-Chief, we also wish to specifically thank the following editors, each with an Editorial Board tenure of at least ten years:

  • Aad Koedijk
  • Piet Veltman
  • Rob Fijneman
  • Brigitte Beugelaar
  • Deborah Hofland
  • Pieter de Meijer
  • Peter Paul Brouwers
  • Maurice op het Veld
  • Jaap van Beek

And the Compact support staff over the decades: Henk Schaaf (editor), Sylvia Kruk, Gemma van Diemen, Marloes Jansen, Peter ten Hoor (publisher at Uitgeverij kleine Uil and owner of LINE UP boek en media), Annelies Gallagher (editor/translator), Minke Sikkema (editor), Mirjam Kroondijk and Riëtte van Zwol (desktop publishers).

Reporting about culture and behavior increases confidence in outsourcing

Many organizations have outsourced business critical processes to service organizations. Through ISAE 3402 reports, many service organizations report on the quality of their internal control. Internal control includes the combination of so-called hard controls (e.g. authorization, segregation of duties or automated controls) and culture and behavior within the organization (soft controls). Soft controls are an important foundation and precondition for the successful execution of hard controls. The cause of non-effective hard controls usually originates in shortcomings in soft controls. However, reporting on soft controls by service organizations is still very limited. In our experience, this causes tension in the relationship between the outsourcing organization and the service organization. By reporting on soft controls, mutual trust can be strengthened and both organizations are therefore “in control” to a greater extent. In this article, we provide concrete examples of the way in which this can be executed.

Outsourcing does not discharge responsibility

Organizations are expected to be demonstrably in control of their operations. This also includes activities that they have outsourced to other parties (so-called service organizations). However, how often do you read about data breaches, fraud, or other incidents in the chain of operations? Despite internal (hard) controls that are effective in design, errors and incidents still occur. Evaluations often reveal that this is caused by the poor quality of soft controls in the outsourcing organization or the service organization. This argues for parties paying more attention to soft controls.

Accountability for outsourcing especially touches the hard controls

It is customary for service organizations to report on the quality of their internal control through International Standard on Assurance Engagements 3402 reports (hereinafter “ISAE 3402 reports”).1 In these reports, the internal controls of the organization are audited by an independent auditor.

However, in most ISAE 3402 reports of service organizations, the focus is on hard controls. Little to no attention is paid to human behavior, which is a prerequisite for effective hard controls. This is remarkable and is the reason why we publish our vision on reporting on soft controls in ISAE 3402 reports.

In the meantime, the importance of culture and behavior is underlined in the guidance for Dutch auditors, Practice Note 1148 ([NBA22]) of the Royal Netherlands Institute of Chartered Accountants (NBA). This guideline calls for more attention to culture and behavior in audits of financial statements and other engagements where the effectiveness of the internal control environment is relevant. As service organizations report on their internal controls to their users in ISAE reports, we advocate more explicit attention to culture and behavior in these reports to increase their value for users. This is in line with the requirement in ISAE 3402 that attention be paid to (other) aspects of the control environment of the service organization ([IFAC11]2). The question is: in what way can culture and behavior be included in an ISAE 3402 report?

To specify and measure culture and behavior, the generally accepted soft controls model developed by Muel Kaptein ([Kapt14]; [Kapt03]) can be used. This model is also used by the NBA as the basis for its guideline. Before we address the question of how culture and behavior can be included in ISAE reports, we explain the soft controls model in the next section.

“Sound insight in soft controls together with the analysis of hard controls provides a more complete representation of the internal control environment and the possible effectiveness of the internal controls taken” ([NBA22])

Structured model for analyzing and measuring soft controls

In Kaptein’s model, soft controls are defined as eight intangible behavioral factors in an organization that are important for realizing its organizational objectives. Figure 1 represents this model with an explanation of the eight soft controls.


Figure 1. Soft controls model (Muel Kaptein). [Click on the image for a larger image]

We consider effective soft controls as the foundation and precondition for the effectiveness of hard controls. It is not about replacing hard controls by soft controls but increasing insight in the effectiveness of the internal control environment by including an evaluation on soft controls.

Effective soft controls contribute to sound internal control

Admittedly, soft controls are less easy to measure and quantify, but ultimately they do affect the effectiveness of internal controls. Below are some illustrative examples.

Examples of the relationship between hard and soft controls

Example 1

An authorized signatory’s task is to approve payments on a daily basis. In addition, this person has multiple other daily tasks and responsibilities. Usually, the approval of payments takes place at the end of the day. In extreme cases, this employee is still approving payments at 11 o’clock at night.

Here, a soft control is not effective: achievability is too low. Chances are that the quality of the approval of payments is negatively influenced by a lack of time. Although the hard control seems to be working, the likelihood of wrongful payments increases.

Example 2

A process description of the steps an employee needs to take could work properly to increase clarity and achievability. In practice, however, such a process description may be perceived as inconsistent or insufficiently clear, with the result that the procedure is not carried out in the same way by every employee.

In Figure 2 we represent the relationship between hard and soft controls schematically. In this figure, soft controls are also shown as the foundation for effective hard controls. The soft controls instruments in the figure are examples of means that can be used to make the soft controls more effective.


Figure 2. Relationship between hard and soft controls. [Click on the image for a larger image]

By addressing soft controls in the execution of, and reporting on, internal control, a service organization can provide a more complete view of its control environment to the clients of its services. This contributes to the preventive side of controls, by providing the right circumstances for the execution of hard controls. It also contributes to the detective side, as attention is paid to the context in which findings and observations arise. A few examples:

  • Are mistakes allowed to be made?
  • To what extent does one learn from incidents that occur?
  • To what extent are employees enabled to properly do their job, which reduces the chance of mistakes?

The soft controls model is also highly effective for analyzing the “why” of findings and identifying the root cause. Identifying the root cause leads to more sustainable improvements and thereby to a structurally better internal control environment. At the same time, this contributes to more trust between the service organization and its client. Both are demonstrably better “in control”.3

Suggestions for soft controls in an ISAE 3402 report

When an organization identifies risks, (hard) internal controls are implemented to mitigate these risks. In an ISAE 3402 report, these controls are subsequently tested for their operating effectiveness. An ISAE 3402 report roughly consists of two parts: the description of the service organization and the chapter with the control objectives, the related controls, and the findings of the auditor as a result of the test work. Next, we will describe some suggestions on how soft controls can be included in these two parts of an ISAE report.

Soft controls in the description of the service organization

In the description of the service organization, we see in practice that only a few soft control instruments are addressed (such as a code of conduct, personnel policy, integrity regulations or whistleblower protection schemes). In our view, an organization should not only detail the set of soft control instruments used for managing culture and behavior, but also the quality and effectiveness of these instruments. This could be done, for example, by detailing the results of an annual employee satisfaction survey. It is not the survey itself that matters, but rather a finding such as that a majority of employees experience unclarity about what is expected of them in their job. Such a result may provide context to findings on the internal controls.

As far as we are concerned, a good example of how to reflect on instruments is Nationale Nederlanden’s 2021 annual report. On page 54, for example, it is clearly described how Nationale Nederlanden monitors their values and standards and how they keep these alive: “Living our Values programme” ([NN22]).

As already mentioned, soft controls are also highly effective for performing root cause analyses. For example, the description of the service organization may explain the extent to which root cause analyses are performed, including what the analysis covers, what the results are, and what actions have been taken. An example of such a description is:

To gain more insight into our culture, the organization annually performs a root cause analysis on incidents whose consequences are significant to the organization and on findings that involve multiple organizational units. The root cause analyses are executed by a team specifically trained for this purpose and are mainly based on interviews with the people involved.

The analyses of the past year show that especially the quality of the soft controls clarity and openness to discussion require attention: several employees had insufficient clarity about their role and responsibilities and did not feel comfortable enough to ask questions about them. In response, the organization defined and communicated tasks and responsibilities more clearly. In addition, several intervision sessions were arranged to increase openness to discussion.

Soft controls in the chapter on control objectives

Soft controls can also be integrated in the second part of the report, the chapter on control objectives. It is common practice in this chapter to provide management comments on findings identified by the auditor, often as an appendix to the report.

In practice, we often see management responses about the hard component of a control, for example: “We acknowledge the findings of the auditor and adjusted the procedure for the coming year.” The positive aspect is that the finding is acknowledged by management and that action has been taken. The comment, however, does not reflect on the context of the finding, such as why there was a finding and what the organization has learned from it. In our opinion, an example of a stronger response could be:

“We acknowledge that this control has not been effective. Immediately after observing this, we have investigated the cause. The objective of the control and execution of this control were insufficiently clear to the owner of the control, resulting in the control not being performed correctly in August. In the initial development of the policy, no coordination had taken place with employees who had to execute the policy. The language of the policy therefore did not connect to the experience of the control owner. We have simplified the work process for this control, and we have discussed this with the employees in the unit. For establishing new policies, coordination with control owners has now been included in the process. Therefore, we expect the control can be executed effectively next year.”

A further step with soft controls as part of the control framework

This last example raises the question whether soft controls can also be part of the internal controls in the control framework on which the auditor performs the test work. In our view, this could be done in two ways:

  • First you can connect specific behavioral risks to a control objective. From this behavioral risk it can be indicated in which way soft controls support the effectiveness of the hard control.
    An example: a hard control describing that a manager needs to approve an invoice is only effective if that manager actually determines, with the proper knowledge, that the invoice complies with all conditions. The manager needs to feel responsible for this; it needs to be clear what is expected of the manager, and the manager needs to have sufficient time for this.
    As far as we are concerned, the difficulty here is the demonstrability of soft controls. An ISAE 3402 engagement aims to provide reasonable assurance on the effectiveness of internal controls. This requires high-quality documentation and controls. In most organizations, the demonstrability of soft controls is not mature enough to provide reasonable assurance. It is also difficult to demonstrate whether the soft controls were effective for a specific period.
  • A second way to include soft controls in the control framework, which is easier to apply, is by including hard controls on soft control instruments. Consider, for example, conducting an annual risk awareness survey, where you measure the extent to which soft controls support or impede effective risk management together with control owners, and then formulate action items accordingly.

Getting started with soft controls

Above, we illustrated how a service organization could start reporting on soft controls in its ISAE 3402 report: describing soft controls and their instruments, including soft controls in management responses and root cause analyses, or making soft controls part of the control framework.

In order to be able to structurally report on soft controls in the internal control environment and actively work on this as an organization, the following step-by-step plan could be followed:

  • Soft controls start with management. The first step is getting support from the service organization’s management: is management open to this? Are they prepared to be vulnerable?
  • The gradual training of the organization in the area of soft controls: This ensures that people within the organization speak the same language when it comes to culture and behavior. A sound route for this is to start with the control owners, compliance, and internal audit. Thereafter it can spread to the rest of the organization.
  • Setting up a model for analyzing root causes. As stated above, a sound root cause analysis leads to a better internal control environment; however, this does need a robust method and process.
  • The execution of soft control surveys: for example, by conducting a survey as mentioned before (through questionnaires) among the employees who execute controls. Also include the organization’s reflection on these results in the ISAE 3402 report.
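As an illustration of the last step, the aggregation of such questionnaire results can be sketched in a few lines of code. This is a hypothetical sketch, not an established scoring method: the dimension names follow soft controls mentioned in this article, and the 5-point answer scale and the attention threshold are our own assumptions.

```python
# Hypothetical sketch: aggregating soft-control survey answers per dimension.
# Dimension names, the 1-5 Likert scale and the threshold are illustrative assumptions.
from statistics import mean

def score_soft_controls(responses, threshold=3.5):
    """Average the 1-5 answers per soft-control dimension and flag
    dimensions whose average falls below the threshold."""
    scores = {dim: round(mean(answers), 2) for dim, answers in responses.items()}
    attention = [dim for dim, s in scores.items() if s < threshold]
    return scores, attention

# Example: answers from five employees who execute controls.
responses = {
    "clarity": [2, 3, 3, 2, 4],
    "achievability": [4, 4, 5, 4, 4],
    "openness to discussion": [3, 4, 3, 3, 4],
}
scores, attention = score_soft_controls(responses)
print(scores)     # average score per dimension
print(attention)  # dimensions that need follow-up actions
```

Reporting could then pair the flagged dimensions with the organization’s reflection and the action items formulated with the control owners.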


If an organization outsources critical processes to a service organization, it is not discharged from its responsibility to be in control of these processes. As this responsibility is largely supported by ISAE 3402 reports, these reports should give a complete view of the quality of the internal control environment at the service organization. We are of the opinion that, without attention to soft controls, fundamental insights are missing.

As we illustrated in this article, you cannot be sure about the quality of hard controls without paying attention to soft controls. This is why we argue for also reporting on soft controls, which increases trust between organizations and helps both the outsourcing organization and the service organization to be “in control” to a greater extent.


  1. Or other Service Organization Control (SOC) reports on internal controls like ISAE 3000, SOC2/SOC3.
  2. Standard 3402, article 16 sub viii ([IFAC11]): “Other aspects of the service organization’s control environment, risk assessment process, information system (including the related business processes) and communication, control activities and monitoring controls that are relevant to the services provided.”
  3. For additional examples about the relations between Soft Controls and IT General Controls see [Bast15].


[Bast15] Basten, A.R.J., Van Bekkum, E. & Kuilman, S.A. (2015). Soft controls: IT General Controls 2.0. Compact, 2015(1). Retrieved from:

[Kapt03] Kaptein, M. & Kerklaan, V. (2003). Controlling the ‘soft controls’. Management Control & Accounting, 7(6), 8-13.

[Kapt14] Kaptein, M. & Vink, H.J. (2014, January 13). The Soft Side of Hard Controls: A Control Coding Theory. Retrieved from:

[IFAC11] IFAC (2011). International Standard on Assurance Engagements (ISAE) 3402: Assurance reports on controls at service organizations (article 16 sub viii). Retrieved from:

[NBA22] NBA (The Royal Netherlands Institute of Chartered Accountants) (2022, February 8). NBA Practice Note 1148: Obtaining an understanding of soft controls relating to an audit of financial statements – Impact of culture and behaviour on the risk assessment. Retrieved from:

[NN22] NN Group (2022, March 10). Serving customers in times of change: 2021 Annual Report (p. 54). Retrieved from:

Compliance Assurance: How do you stay in control of your compliance processes?

In current society, organizations increasingly need to demonstrate their commitment to complying with relevant laws, regulatory requirements, industry codes and organizational standards, as well as with standards for good governance, generally accepted best practices, ethics and community expectations. Organizations that aim to be successful in the long term have to establish and maintain a culture of compliance, considering the needs, requirements and expectations of different stakeholders. A structured compliance management approach enables an organization to create such a culture of compliance.

This raises several questions such as: What raises awareness of compliance topics? How is compliance currently anchored in your organization? How can you become familiar with relevant and new requirements? And how do you move your company to the next level of compliance and provide your stakeholders with independent assurance over your compliance function? In this article, we would like to show how you can move your compliance processes to the next level.


To establish a successful compliance organization, people with the right mindset and skills are needed throughout the whole organization to drive the highest possible efficiency and compete in the market. We will first introduce the importance of compliance, including the key drivers in the market, after which we will focus on the insights from KPMG's Chief Ethics and Compliance Officer (CCO) survey, published in March 2022. After that, we will elaborate on KPMG's Global Compliance Framework and explain how it can help your internal organization and externally demonstrate the effectiveness of your compliance management system (CMS). We close with a description of how the ISO 37301 framework can help you achieve a higher maturity level of compliance.

Importance of compliance

Compliance has become increasingly important in various areas in recent years. Continuously meeting legal requirements is an organizational challenge, as overseeing and acting on all requirements is complex and costly. However, an approach that shows internal and external parties that your organization is in control of compliance requirements is more important than ever, given the external pressure from society to be compliant and the competitive advantage that an organization can achieve with a good compliance system. The need for a solid foundation that enables the prevention, detection and monitoring of non-compliance is likewise more important than ever.

Several compliance incidents in recent years have shown that compliance is too important to ignore, and a stronger focus on this topic is therefore advisable. The following aspects are the key drivers for increasing the focus on compliance and demonstrate its importance:

  • Global public attention and demand for compliance. Public attention and demand for conducting business in an ethical way is present globally. This triggers a search for independent assurance and certification of CMSs and of environmental, social and governance (hereinafter: ESG) aspects. New regulations driving a focus on human rights and adherence to social standards, as well as the global sustainable development goals that include compliance concepts, have the public's attention.
  • Concentration on compliance effectiveness. Regulatory focus has expanded to the maturity of compliance programs (e.g. at De Nederlandsche Bank, the Autoriteit Financiële Markten and the Department of Justice) in order to improve the robustness of, for example, the monetary system.
  • Legal requirements for companies. Corporate criminal liability, legislative requirements (e.g. the Money Laundering and Terrorism Financing Act, the US Foreign Corrupt Practices Act, the UK Bribery Act) and the oversight of, and implications of, misconduct by third parties in the supply chain increasingly become focus areas.
  • Competitive advantage. Being in control of compliance requirements can distinguish an organization from its competition. For example, a well-functioning CMS can create synergies between tasks, help to overcome departmental boundaries, and meaningfully interlink different processes in the organization's workflow. Furthermore, it offers a tangible display of enhanced capabilities that strengthens a firm's reputation.

Results from KPMG compliance CCO survey

KPMG performed a survey ([Groe22]) on compliance to gain insights into its current importance. The survey includes responses from 100 Chief Ethics and Compliance Officers (CCOs) – representing some of the largest organizations across multiple industries, operating in the Netherlands as well as internationally – about the most important focus and integration areas, as well as the key areas where compliance programs can improve. This survey in the Netherlands follows the same format as the KPMG US CCO Survey, last published in 2021, with the most recent version focusing on risk and investment in compliance ([Mats21]). Both surveys use the KPMG Compliance Maturity Framework as a basis.

The organizations that participated in the survey operate in the following industries: consumer markets/retail; energy; financial services, including banking, capital markets and insurance; healthcare and life sciences; industrial manufacturing; and technology, media and telecommunications. The report presents the results of the survey as well as KPMG's assessment of the most important factors and measures companies can consider to bring their compliance programs to the desired level of maturity.

The CCO survey gathers the current opinion on compliance. It shows that a majority of the organizations seek to obtain CMS certifications in the near future (66%), which signals a strong commitment to being compliant and accounting for it externally. Moreover, the most mature areas in the field of compliance are those traditionally associated with regulatory obligations (e.g. regulatory change management and reporting). Furthermore, the CCO survey led to the following key conclusions:

  • Training. As a regulatory obligation, training is still focused on completion rates and is not yet used as a tool for shaping culture, which is an emerging regulatory expectation in some industries (e.g. DNB soft controls). Organizations should therefore shift their training and communication approach and view it as a tool to shape and assess their culture and conduct.
  • Driving process efficiencies. Adequate compliance resources and digitalization / analytic support are still lacking across industries. Furthermore, organizations can do more to develop controls and indicators for emerging risks, such as ESG and third-party oversight.
  • Integrated collaboration. More effort should be made across organizations to integrate their ethics and compliance performance metrics and indicators into wider governance, risk, and compliance (GRC) frameworks for more optimal monitoring and reporting capabilities.

Expectations of compliance professionals’ knowledge and responsibilities will increase as will the need for adequate resources to match the company’s ethics and compliance framework. Increasing societal pressure for strong and far-reaching corporate governance will continue to drive change in the regulatory obligations and maturity requirements. The status quo and future development of the ethics and compliance field can be summarized by the CCO survey across nine different areas: culture; governance; process; risk assessments; policies, procedures and code of conduct; training and communications; monitoring and testing; investigations; and reporting. The expected changes are already visible today (see Figure 1).


Figure 1. The ethics and compliance path. [Click on the image for a larger image]

A Global Compliance Framework

The CCO survey showed that organizations are still struggling with establishing a solid compliance framework. Therefore, and in order to achieve organizational goals, having a solid and sound compliance program in place is crucial. A combination of hard controls such as the presence of the compliance function, policies and procedures, and soft controls such as trainings and e-learnings are necessary to successfully mitigate compliance risks. In order to do so, it is crucial that the compliance function, policies and procedures align with the needs and wishes that are coming from the organization, as well as with the necessary requirements from the regulators.

To measure the effectiveness of the compliance function, policies and procedures in place, KPMG developed the Global Compliance Framework (see Figure 2). This framework has been calibrated against applicable industry standards and regulatory expectations, requirements and guidance.


Figure 2. KPMG’s Global Compliance Framework. [Click on the image for a larger image]

The Global Compliance Framework is tailored to your organization's needs to prevent, detect and respond appropriately to non-compliance with regulatory and contractual requirements, and can support you in keeping up with new compliance requirements. It can help organizations obtain a better overview and understanding of the organization's needs. It reflects an enterprise-wide risk management approach to compliance, with a focus on governance, policies and procedures, people (both compliance-dedicated staff and other staff) and monitoring arrangements ("three lines of responsibility").

Demonstrating compliance: ISO 37301 certification

ISO standard certifications can help your organization show its capability to maintain an effective CMS to the broad public, industry regulators, and your current and future clients. In particular, ISO 37301 certification, as a guideline for the implementation of a CMS, is independent of the size, type and nature of the activity, and of whether the organization belongs to the public or private sector. In this context, an ISO 37301 certification can help increase the effectiveness of, and optimize, compliance-relevant processes. Unlike other ISO standards, the ISO 37301 standard sets the bar for an effective CMS rather than testing "the way of working". For organizations, an independent certification provides shareholders, clients and regulatory oversight with transparency on the appropriate working of the CMS. At the same time, it mitigates the liability risk of the organization's management.

KPMG supports organizations in designing, assessing and improving an organization-wide compliance system, discovering the value of compliance while fully supporting your organizational goals ([KPMG22]). As mentioned above, this is relevant given that 66% of the respondents of the compliance survey ([Groe22]) expect to obtain CMS certifications in the near future. CMS certifications such as ISO 37301 can provide organizations with additional assurance in this regard, and KPMG's Global Compliance Framework can be referenced from the outset to gain a better understanding of the organization and its context.


Regulatory compliance is vital for being a successful and sustainable organization. To achieve this, organizations need a mature CMS that takes the needs, requirements and expectations of the different stakeholders into consideration. However, the regulatory environment is continuously changing, making it an even greater challenge to comply with all relevant legal and stakeholder requirements. A framework that allows a flexible approach to organizational needs is therefore necessary; KPMG's Global Compliance Framework is an example of such a framework. The impact of non-compliance is anything but trivial, as it not only endangers public trust but can also result in reputational damage and large fines.

These circumstances call for a solid compliance framework ensuring that an organization can prevent and remain alert to identifying (potential) breaches and follow up on any non-compliant activities. Therefore, the goal of organizations should be to familiarize themselves with their compliance program and achieve a higher maturity level. The authors in KPMG’s Assurance and Advisory services can be your contact persons for further information.


[Groe22] Groen, L. et al. (2022). The state of ethics and Compliance in the Netherlands – 2022 Chief Compliance Officer Survey results. KPMG Netherlands. Retrieved from:

[KPMG22] KPMG (2022). Compliance Services. Retrieved from:

[Mats21] Matsuo, A. et al. (2021). KPMG 2021 CCO Survey – Sharing client perspectives on compliance imperatives. KPMG US. Retrieved from:

Operationalization of Machine Learning models in (audit) innovation projects

Machine Learning (ML) is a powerful technique that has enormous potential in several domains, including audit. But it is difficult to bring ML into the production environment and iteratively improve the ML-powered products. In this article, we describe the background and the current state of ML, the difficulties of bringing ML-powered (audit) innovation projects into production, and the importance of Machine Learning Model Operationalization Management (MLOps) methodology. In addition, we discuss, as an effective use case, our own MLOps journey within our audit innovation department (Digital Assurance and innovation, Daní).


Machine Learning (ML) has been getting more and more attention in the modern economy. Once a purely academic field of study, ML has become the foundation of many billion-dollar companies such as Netflix (recommendation systems), Uber (matching problems) and Prisma (computer vision), proving that Machine Learning has potential not only in academia but in production environments as well. In the right hands, Machine Learning can help solve many practical problems.

The rapid development of the field of Machine Learning has also provided new opportunities for innovation in accounting and auditing. Auditors have access to vast amounts of data, which they can use to gather evidence to support their opinion more effectively. Machine Learning techniques are very powerful tools that auditors can apply to reach their audit objectives ([Hoog19]). Digitalizing audits and audit innovation is an ongoing process. According to the survey Deep Shift: Technology Tipping Points and Societal Impact, conducted by the World Economic Forum in 2015, 75% of the 816 participating executives believe that by 2025, 30% of corporate audit work will be completed by AI/ML ([Oppe21]). Machine Learning has huge potential to learn from auditing data and provide insights based on it, and is already showing its potential in ratio analysis, regression analysis and financial statement analysis ([Boer19]). However, it is still nontrivial to bring Machine Learning-enabled audit innovation solutions into the production environment.

We, Daní, are committed to the research and development of innovative products for auditing. We embrace innovation and cutting-edge technology, fitting them into the auditing domain and embedding data-driven technology broadly throughout the audit process. In the last few years, we delivered a variety of ML-powered auditing solutions to our clients. In the process of product development and implementation, we encountered many difficulties and realized that continuous, iterative product development is very hard for these ML solutions and products.

In this article, we will first introduce the complexity and difficulties of developing and implementing Machine Learning products and the concept of Machine Learning Model Operationalization Management (MLOps), and subsequently use a business case to describe the evolution of MLOps in our department.

Key challenges of Machine Learning in production

Opportunities always come with challenges. Thanks to the vast array of open-source ML frameworks and learning materials, more and more professionals can experiment with ML and build ML models. This provides huge opportunities, although there are challenges to operationalizing ML for business purposes. Nevertheless, 86% of businesses increased their budgets for Machine Learning projects in 2021, according to the enterprise trends in Machine Learning report from Algorithmia ([Oppe21]), while "87% of organizations still struggle with long model deployment timelines and, at 64% of organizations, it takes a month or longer to deploy even a single model" ([Oppe21]).

From the business point of view, any project can only maximize its value in a production environment, and the Machine Learning project is not an exception.

Most organizations greatly underestimate the complexity of productionizing Machine Learning projects ([Scul15]), or they treat Machine Learning projects in the same way as software development projects. Moving from experiments in academia – known as a "sandbox" environment – into the real world, at production scale, is nontrivial.

Moreover, a ML project in academia and a ML project in production tend to have different requirements, see Table 1.


Table 1. Requirements for ML in academia and ML in production (adopted from [Huye22]). [Click on the image for a larger image]

As a result, having the ML model source code is not enough to process millions of incoming requests or to answer stakeholders' or clients' questions about model performance over time.


Figure 1. The ML code itself – e.g. model definition and training scripts, is a tiny component of the overall production ML system (adopted from [Scul15]). [Click on the image for a larger image]

A Machine Learning project also differs from a traditional software project. The most important difference is that in a Machine Learning project, data is at the core, and the code is designed to serve the data rather than application behavior. Machine Learning development is more iterative and explorative. Training a model is only a small part of the project, see Figure 1. For instance, according to Sculley et al. ([Scul15]): "only a tiny fraction of the code in many ML systems is actually devoted to learning or prediction". However, many other components are still required to make predictions available to users or to deploy the ML model into a production system that generates business value. Building and initially deploying models is just the beginning. Models in production must constantly be monitored, retrained, and redeployed in response to changing data signals to ensure maximum performance. All of this means that ML requires its own unique lifecycle ([Data22]).

Insights into Machine Learning lifecycle

A Machine Learning project lifecycle has an iterative nature. The four major steps of the lifecycle are Scoping, Data, Modeling, and Deployment ([Deep21]). Figure 2 shows these steps and the common underlying activities. In the project scoping phase, we define the objectives of the project and its business fit, and identify an opportunity to tangibly improve operations, increase customer satisfaction, or otherwise create value. After this business understanding, we enter the data collection and collation stage, in which we gather the data, label it and transform it into the required format. With these data in hand, we can start the modeling phase: define the input and target output of the model, select and train the model, perform an error analysis to validate the performance, and, optionally, explain the results. In the final phase, we deploy the model into the production environment, i.e. make it available to serve users' requests, and subsequently maintain and monitor the system in order to continue to leverage and improve the model over time.
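To make the four phases concrete, the loop can be sketched in a few lines of plain Python. This is a toy illustration with invented data and a trivial "model"; any real project would use proper labeling, training and deployment tooling:

```python
# Illustrative sketch of the iterative ML lifecycle: Scoping -> Data -> Modeling -> Deployment.
# Each phase is a plain function; a real project would replace these bodies
# with labeling tools, training frameworks and deployment infrastructure.

def scope_project():
    # Scoping: define the business objective and a measurable target.
    return {"objective": "flag risky entries", "target_accuracy": 0.90}

def collect_data():
    # Data: gather and label data; here a toy dataset of (feature, label) pairs.
    return [(0.1, 0), (0.9, 1), (0.2, 0), (0.8, 1)]

def train_model(data, threshold=0.5):
    # Modeling: "training" here just picks a decision threshold;
    # the measured accuracy stands in for error analysis.
    correct = sum((x > threshold) == bool(y) for x, y in data)
    return {"threshold": threshold, "accuracy": correct / len(data)}

def deploy(model):
    # Deployment: make the model callable for user requests.
    return lambda x: int(x > model["threshold"])

scoping = scope_project()
data = collect_data()
model = train_model(data)
assert model["accuracy"] >= scoping["target_accuracy"]  # error analysis gate
predict = deploy(model)
```

The final assert is the hand-off gate between modeling and deployment; in practice, failing it sends the project back to the data or modeling phase, which is exactly the iteration the lifecycle describes.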


Figure 2. Machine Learning project life cycle (adopted from [Deep21]). [Click on the image for a larger image]

What is MLOps? 

Within a Machine Learning project, after establishing a clear business understanding, we first need to process data, use the data to train and develop the Machine Learning model, and then set up the solution in the production environment and monitor its performance. Machine Learning Operations (MLOps) is a methodology designed to help with successful ML delivery in production. It treats Machine Learning and data science assets as first-class citizens ([Cox20]). The main MLOps components include data management, model training, versioning, monitoring, experiment tracking, etc. ([Vise22]). The MLOps methodology covers the full ML project lifecycle end to end in an iterative approach, providing the connection between all the components required for success in production ML.
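As a minimal sketch of one of these components, experiment tracking (in practice provided by tools such as MLflow or Kubeflow) boils down to recording parameters, metrics and a data version for every run; the class and field names below are our own illustration, not any tool's API:

```python
import hashlib
import json

class ExperimentTracker:
    """Minimal, illustrative experiment tracker: records parameters, metrics
    and a content hash of the training data so every run can be traced back."""

    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics, data):
        # Fingerprint the dataset so runs on identical data are comparable.
        data_hash = hashlib.sha256(
            json.dumps(data, sort_keys=True).encode()).hexdigest()
        run = {
            "run_id": len(self.runs) + 1,
            "params": params,
            "metrics": metrics,
            "data_version": data_hash[:12],  # short dataset fingerprint
        }
        self.runs.append(run)
        return run

    def best_run(self, metric):
        # Return the run with the highest value for the given metric.
        return max(self.runs, key=lambda r: r["metrics"][metric])

tracker = ExperimentTracker()
tracker.log_run({"lr": 0.1}, {"accuracy": 0.81}, data=[1, 2, 3])
tracker.log_run({"lr": 0.01}, {"accuracy": 0.87}, data=[1, 2, 3])
best = tracker.best_run("accuracy")
```

Even this tiny version shows why tracking is a first-class MLOps concern: without the data fingerprint and run log, the question "which model, trained on which data, is in production?" cannot be answered reliably.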

According to [Vise22], MLOps can be seen as an "Iterative-Incremental process" that usually contains three high-level stages, which are required for success in production ML (see Figure 3): designing the ML-powered application, ML experimentation and development, and ML operations.


Figure 3. An overview of MLOps Iterative-Incremental process (adopted from [Vise22]). [Click on the image for a larger image]

To obtain an unbiased evaluation of an organization's existing MLOps environment, Microsoft ([Micr22]) and Google ([Goog22]) have defined maturity models, which help clarify the MLOps principles and practices through explicit requirements and characteristics for each level (see Table 2). A maturity model can be seen as a metric of the maturity of one's Machine Learning production environment and the processes associated with it ([Micr22]); it also reflects the velocity of training new models given new data or new implementations ([Goog22]). For example, the MLOps level 0 process ([Goog22]) consists of a list of disconnected manual steps, which leads to difficulties with releases and quick model degradation in the real world. In contrast, MLOps level 2 (level 4 according to Microsoft's classification) is the most advanced process, which includes continuous pipeline integration, automated model training and testing, and automatic model releases.


Table 2. MLOps maturity level overview (based on [Micr22] and [Goog22]). [Click on the image for a larger image]
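The level distinctions in Table 2 can be expressed, in our own simplified reading of the Google scale (not an official assessment tool), as a small function mapping the practices a team has automated to a maturity level:

```python
def mlops_maturity(practices):
    """Rough, illustrative mapping (our own simplification of the 0-2 scale
    in [Goog22]): which automated practices imply which maturity level."""
    level1 = {"automated_training_pipeline", "experiment_tracking"}
    level2 = level1 | {"ci_cd_for_pipelines", "automated_model_release"}
    if level2 <= practices:   # all level-2 practices in place
        return 2
    if level1 <= practices:   # pipeline automation, but manual releases
        return 1
    return 0                  # disconnected manual steps

level = mlops_maturity({"automated_training_pipeline", "experiment_tracking"})
```

The set-inclusion checks mirror the idea that each level builds on the previous one: a team cannot be at level 2 without the level 1 practices already in place.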

How can MLOps help with innovation? 

Machine Learning can make a significant difference in several domains because it makes it possible to learn from data and, for example, initiate next steps in an automated way. In the remaining part of this article, we focus on audit as a use case to make this more tangible. Keep in mind, however, that this could also apply to your industry.

Machine Learning has the potential to make audit procedures more efficient and effective but due to the uniqueness of the audit business itself, new Machine Learning auditing procedures require rigorous validation, explanation, and regulation. In other words, running a Machine Learning project or service in the audit domain has its own specific requirements.

Although one should consider MLOps principles equally important in general use cases, we can highlight a few that are crucial for the audit domain. First of all, corporate and government regulations can require complete provenance of a model deployed in production. This includes, but is not limited to, information on how the model was built, what data was used, and what parameters and configurations were used, which falls under the Versioning concept of MLOps. Model Monitoring is another important topic. For all models deployed in production, i.e. accessible for use, one must be sure that they perform as expected, and all deviations from the original expectation should be detected quickly. Unfortunately, "the one constant in the world is change" ([Deep21]). One therefore needs to be aware of model degradation and be able to react quickly to deliver the original model prediction quality over a wide time frame. By monitoring incoming requests and model inferences in real time, we can ensure that the required adjustments are applied in time.
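A minimal monitoring check of this kind could look as follows. This is a deliberately simple mean-shift test on invented numbers; production systems use richer statistics such as the Kolmogorov-Smirnov test or the population stability index:

```python
from statistics import mean, stdev

def drift_alert(baseline, live, z_threshold=3.0):
    """Illustrative model-monitoring check: flag drift when the mean of live
    inputs deviates from the training baseline by more than z_threshold
    standard errors. Real systems use richer tests (e.g. KS test, PSI)."""
    standard_error = stdev(baseline) / len(live) ** 0.5
    z = abs(mean(live) - mean(baseline)) / standard_error
    return z > z_threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5] * 20   # inputs seen at training time
stable   = [10.2, 9.8, 10.1, 9.9, 10.0] * 4    # production traffic, similar
shifted  = [15.0, 16.0, 14.5, 15.5, 15.2] * 4  # production traffic, drifted
```

Here `drift_alert(baseline, stable)` stays quiet while `drift_alert(baseline, shifted)` raises an alert, which would trigger the retraining loop described above.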

Within the audit domain we have strict regulation requirements for Machine Learning-powered auditing services. We should detect and monitor the risk of ML/AI based on its Quality, Integrity, Explainability, and Agility. We can combine the four steps of the lifecycle, Scoping, Data, Modeling, and Deployment, with the requirements of Quality, Integrity, Explainability, and Agility. For example, we need to answer the following simplified key questions in the different phases.

Scoping phase:

  1. What does the model do?
  2. Is the model compliant with the compliance requirements in the specific (audit) domain?

Modeling phase:

  1. Does the model do what it needs to do?

Deployment phase:

  1. Do the models keep doing what they need to do?

MLOps covers the full ML project lifecycle end to end and helps achieve project goals through an iterative approach. With the help of MLOps, we can answer all these questions in a systematic and sustainable way. MLOps can therefore help with the efficient delivery of AI/ML solutions in the audit innovation domain, keep procedures under control, increase transparency in all ML-based projects, and gain the trust of our clients.

What is the MLOps journey within Daní? 

Academic discussions are great, but putting theoretical knowledge into practice in the business field seems even more important, doesn't it? We work at the Digital Assurance and innovation department (Daní) at KPMG NL, which is the audit innovation department. We develop ML/AI-powered, data-driven solutions and tools to innovate and digitalize auditors' daily work. This benefits our professionals and end clients, as it makes the audit more effective and efficient. In addition, we unlock new insights that enrich the audit service. In this section, we share the story of integrating MLOps concepts into our way of working. As mentioned, modern open-source Machine Learning frameworks can help us build models with a few lines of code. Three years ago, we started our Machine Learning experiments with local Jupyter Notebooks, training models on our laptops, using Excel sheets to keep track of the results of our experiments, and deploying a standalone web service for each model used in production. With this workflow, we met some problems that were hard to tackle during project development. First of all, data processing and model training were limited by the power of our laptops, so the iterative process of model improvement could take days and even months. Secondly, it was hard to keep experiment results in sync between team members and to trace back the saved models. Finally, making a saved model available to the end user involved a lot of non-ML code, such as web application code and cloud configuration, and was hard to maintain. An attentive reader will conclude that this was a zero-maturity level for MLOps.

Jupyter Notebook: a popular local development and debugging tool for Machine Learning experiments.

TFX: a framework in which a pipeline is a sequence of components implementing an ML workflow, specifically designed for scalable, high-performance Machine Learning tasks.

Kubernetes: an open-source container orchestration system for automating software deployment, scaling, and management. 

To improve our workflow, decrease time to market and avoid the large amount of extra code we had for each ML-powered solution, our team decided to move towards the MLOps methodology. Our MLOps journey started at the beginning of 2021. At that time, we started to work on the "sesame" solution – an NLP (Natural Language Processing) tool that processes financial reports and highlights the sentiment (negative, positive, or neutral) of such reports. During the development of the project, we focused on the Automation and Reproducibility principles of MLOps. It was our first project in which we delivered an end-to-end Machine Learning pipeline: starting from data ingestion, processing incoming data, training the Machine Learning model, evaluating it against our evaluation requirements, and delivering the trained model to the specified location. Our technical stack included TensorFlow Extended (TFX) – "an end-to-end platform for deploying production ML pipelines" ([Tens22]) – and Kubeflow – "the Machine Learning Toolkit for Kubernetes" ([Kube22]). With TFX we built our ML pipeline, while Kubeflow was used as an orchestration tool, helping us use cloud resources efficiently.


Figure 4. Sesame end-to-end pipeline, visualization from Kubeflow Dashboard. [Click on the image for a larger image]
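For readers unfamiliar with such pipelines, the stage structure in Figure 4 can be mimicked in plain Python. This is a simplified stand-in with invented toy data and a trivial keyword "model"; the real sesame pipeline uses TFX components (analogous to ExampleGen, Transform, Trainer, Evaluator and Pusher) orchestrated by Kubeflow:

```python
# Plain-Python sketch of an end-to-end pipeline like the one in Figure 4.
# Each stage is a simple function so the overall flow is visible.

def ingest():
    # ExampleGen-like step: load labeled sentences (toy data, not real reports).
    return [("profit declined sharply", "negative"),
            ("record revenue growth", "positive"),
            ("results were in line with expectations", "neutral")]

def transform(examples):
    # Transform-like step: lowercase and tokenize the raw text.
    return [(text.lower().split(), label) for text, label in examples]

def train(dataset):
    # Trainer-like step: a trivial keyword lookup standing in for the NLP model.
    lexicon = {}
    for tokens, label in dataset:
        for token in tokens:
            lexicon[token] = label
    return lexicon

def evaluate(model, dataset):
    # Evaluator-like step: fraction of examples whose first token the model
    # recognizes, used as a "blessing" gate before release.
    hits = sum(model.get(tokens[0]) is not None for tokens, _ in dataset)
    return hits / len(dataset)

def push(model, accuracy, min_accuracy=0.5):
    # Pusher-like step: only release the model if it passed evaluation.
    return {"model": model, "released": accuracy >= min_accuracy}

examples = ingest()
dataset = transform(examples)
model = train(dataset)
deployment = push(model, evaluate(model, dataset))
```

The value of TFX and Kubeflow is that each of these stages becomes a versioned, cached, independently scalable component with tracked artifacts, rather than functions chained by hand as above.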

The sesame project was challenging for us, since the TFX framework was still under active development (the publicly available release was in alpha stage), many underlying components could change significantly in the future, and documentation was quite limited. Another challenge was that TFX was initially developed within Google and mainly focused on the Google Cloud Platform, while we were utilizing Azure Cloud. For example, since the start of our work, the data ingestion process has changed, and simpler out-of-the-box options and formats have been added. The lack of complete documentation led us through multiple trials and attempts to build a robust and clear Train component logic. Fortunately, after that nontrivial but exciting work, the TFX pipeline for the sesame project was a success. However, we had not yet covered all MLOps principles in the sesame project. For example, the initial data exploration was done with local Jupyter Notebooks, and model deployment contained a few manual steps, such as moving the model to the cloud storage location and creating the model deployment. That is why we have not stopped and are still continuing our MLOps integration journey.


Figure 5. Kubeflow Central Dashboard, main page. [Click on the image for a larger image]


Figure 6. Main components and their functions in Kubeflow (adopted from [Kube22]). [Click on the image for a larger image]

Within Daní we have already noticed a positive difference from using Kubeflow. For example, a few years ago, we spent four days on average delivering a single model trained inside a Jupyter Notebook into the production environment: roughly a day wrapping the model with cloud-related configuration, another day aligning with the application development team on the model communication protocol, and another on the actual deployment. And in case of tiny changes in the input data, model architecture or specific client requirements, the whole process had to be repeated. Following MLOps principles and utilizing the power of Kubeflow, however, can decrease that delivery time from days to a few hours.

Up to now, we have decreased the time for the model iteration step in our ML-powered solutions – trying new model parameters or new input data and delivering an updated model to the production environment – from days to hours, adding a lot of business value, since the time to market of validated ML-powered solutions is decreased significantly.

In the future, with the help of Kubeflow, within Daní we will be able to set up hundreds of rigorous validation experiments for Machine Learning auditing procedures simultaneously, which can help us speed up the model training time and shorten the time to market. As described in the MLOps maturity models, the higher the maturity level, the more components in the Machine Learning lifecycle are automated, manageable, and traceable.   

By increasing the level of automation of our MLOps platform, we can get close to real-time model training on newly added data, followed by smooth and robust model deployment. Finally, production-grade monitoring as well as governance of the models used by our clients add trust to Daní’s ML-powered solutions. In Table 3, we explicitly highlight the important properties of ML projects and the added value for the Daní team and our users. These insights can help you see the benefits as well and determine your own business case in other domains.


Table 3. Added value of using MLOps at Daní team.


The potential of Machine Learning projects is enormous, and these projects are an integral part of audit innovation. However, as Machine Learning is a cutting-edge technology, it is still difficult to implement ML projects in a production environment. Hopefully, with MLOps, we can accelerate the process, shorten the time to market, and realize the vision of efficient ML project management, while ensuring performance, quality and trust.


We want to express our appreciation to Marcel Boersma for all his help and support in writing this article.


[Boer19] Boersma, M. (2019). Network theory in audit. Compact 2019/4. Retrieved from:

[Cox20] Cox, T., Neale, M., Marck, K., Hellström, L., & Rimoto, A. (2020, September 25). MLOps Roadmap 2020. Github. Retrieved from:

[Data22] DataRobot (2022). Machine Learning Life Cycle. Retrieved from:

[Deep21] DeepLearning.AI (2021, October). Steps of an ML project. Retrieved from:

[Goog22] Google (2022). MLOps: Continuous delivery and automation pipelines in machine learning. Retrieved from:

[Hoog19] Hoogduin, L. (2019). Using machine learning in a financial statement audit. Compact 2019/4. Retrieved from:

[Huye22] Huyen, C. (2022). Designing Machine Learning Systems. O’Reilly. Retrieved from:

[Kube22] Kubeflow (2022). Kubeflow: The Machine Learning Toolkit for Kubernetes. Retrieved from:

[Micr22] Microsoft (2022). Machine Learning operations maturity model. Retrieved from:

[Oppe21] Oppenheimer, D. (2021). 2021 enterprise trends in machine learning. Algorithmia. Retrieved from:

[Scul15] Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., … & Dennison, D. (2015). Hidden technical debt in machine learning systems. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama & R. Garnett (eds.), Advances in Neural Information Processing Systems 28.

[Tens22] TensorFlow (2022). TensorFlow Extended (TFX) is an end-to-end platform for deploying production ML pipelines. Retrieved from:

[Vise22] Visengeriyeva, L., Kammer, A., Bär, I., Kniesz, A., & Plöd, M. (2022). MLOps Principles. Ml-Ops.Org. Retrieved from:

Leveraging technology within Internal Audit Functions

Keeping pace with the business is the trait demanded from Internal Audit Functions (IAFs) today. As organizations continue to evolve and adopt more advanced technology into their operations, the internal auditors’ mandate remains unchanged. To continue adding value to their organization, IAFs are encouraged to embrace the benefits of technology.


Organizations transform at ever-increasing speeds, and new risks continue to emerge. To continue adding value to their organization, Internal Audit Functions (IAFs) are encouraged to embrace the benefits of technology and data analytics. In this article, we provide a perspective on the future of internal audit: a “technology-enabled internal audit.” We will delve into how a leading IAF could implement technology as part of the internal audit methodology by considering growth in three base aspects: Positioning, People and Process. Technology will create higher efficiencies, improve effectiveness, identify deeper insights, strengthen data governance and security, and enable IAFs to identify and focus on high-priority, value-adding activities. Moreover, it will help IAFs inspire the trust of their stakeholders, creating a platform for responsible growth, bold innovation and sustainable advances in the performance and efficiency of an organization, and it will improve the IAF’s attractiveness for students and other new hires.

This article is divided into two parts. Part I provides background information on current and relevant technology topics, helping IAFs understand, identify and leverage technology, data analytics and their organization’s digital landscape. Part II discusses a roadmap for a leading IAF to become more technology-enabled, taking the base aspects Positioning, People and Process as viewpoints, with a case study of how a pension fund administrator mapped its way to leveraging technology.

Part I: Key concepts relevant for internal audit

Many organizations are investing in advanced technologies, such as algorithms and artificial intelligence, predictive analytics, Robotic Process Automation, cognitive systems, sensor integration, drones, and machine learning, to automate labor-intensive knowledge work. For IAFs, leveraging these technologies is not a matter of keeping up with trends. Rather, it is a means to continue adding value to organizations and to meet the expectations of an ever-transforming business environment. IAFs should mirror the evolution of the advanced technologies that organizations are implementing. Figure 1 shows a multi-layer mapping for a technology-enabled internal audit.


Figure 1. Technology-enabled internal audit multi-layer mapping.

The expanding landscape of technologies is large and multifaceted but can be broken down into five primary categories that lie on a spectrum from simplest to most complex. Next, we will address the following five categories of technologies that can be leveraged by IAFs:

  1. Data analytics & business intelligence
  2. Process mining
  3. Robotic Process Automation (RPA) & intelligent automation
  4. Cognitive technology
  5. Emerging technologies

1. Data analytics & business intelligence

Data analytics is the science and practice of collecting, processing, interpreting and visualizing data to gain insight and draw conclusions. IAFs can use both structured and unstructured data, from both internal and external sources. Data analytics can be historical, real-time, predictive, risk-focused, or performance-focused (e.g., increased sales, decreased costs, improved profitability). Data analytics frequently provides the “how?” and “why?” answers to the “what?” questions raised by the information initially extracted from the data.

IAFs have traditionally focused on transactional analytics, applying selected business rules-based filters in key risk areas, such as direct G/L postings, revenue, or procurement, thereby identifying exceptions in the population data. Leading IAFs are realizing the added value of leveraging business intelligence-based tools and techniques to perform “macro-level” analytics to identify broader patterns and trends of risk and, if necessary, apply more traditional “micro-level” analytics to evaluate the magnitude of items identified through the “macro-level” analytics. Data analytics in internal audit involves (re-)evaluating and, where necessary, modifying the internal audit methodology, to create a strategic approach to implement, sustain, and expand data analytics-enabled auditing techniques and other related initiatives such as continuous auditing, continuous monitoring, and even continuous assurance. See Figure 2.


Figure 2. Journey towards continuous auditing.

The journey from limited IT assurance to continuous auditing – for an IAF involved in financial audits – is visualized above. The IAF will be able to shift its focus from routine transactions to non-routine and more judgmental transactions. At the same time, more of the work performed is being automated. In this journey, the IAF mirrors the developments of the organization itself to optimize the usage of technologies being implemented.
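The business rules-based “micro-level” filtering mentioned above can be sketched in a few lines of Python with pandas; the journal-entry columns, threshold and rules below are purely hypothetical:

```python
import pandas as pd

# Hypothetical journal-entry extract; column names and values are illustrative.
postings = pd.DataFrame({
    "entry_id": [1, 2, 3, 4],
    "user":     ["alice", "batch", "bob", "alice"],
    "amount":   [1200.00, 500.00, 99999.00, 300.00],
    "weekday":  ["Mon", "Sat", "Wed", "Sun"],
})

# Two example business rules: manual postings on weekends,
# and postings above an (assumed) approval threshold.
weekend_manual = postings["weekday"].isin(["Sat", "Sun"]) & (postings["user"] != "batch")
above_threshold = postings["amount"] > 10_000

exceptions = postings[weekend_manual | above_threshold]
print(exceptions["entry_id"].tolist())  # entries flagged for auditor follow-up
```

Because the same filter can be re-run on the full population in every cycle, this style of rules-based testing is a natural first step on the road to continuous auditing.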

2. Process mining

A fast-growing and value-adding tool is process mining software. Process mining provides new ways to utilize the abundance of information about events that occur in processes. These events such as “create order” or “approve loan” can be collected from the underlying information systems supporting a business process or sensors of a machine that performs an operation or a combination of both. We refer to this as “event data”. Event data enable new forms of analysis, facilitating process improvement and process compliance. Process mining provides a novel set of tools to discover the real process execution, to detect deviations from the designated process, and to analyze bottlenecks and waste.

It can be applied for various processes and internal audits such as purchase-to-pay, order-to-cash, hire-to-retire, and IT management processes. The use of process mining tools to analyze business processes provides a greater insight into the effectiveness of the controls, while significantly reducing audit costs, resources, and time.
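As a minimal sketch of the underlying idea, the event data described above can be used to reconstruct each case’s actual execution path and compare it with the designed process; the event log, activity names and expected sequence below are hypothetical:

```python
import pandas as pd

# Hypothetical event log: one row per event (case id, activity, timestamp).
log = pd.DataFrame({
    "case_id":  ["PO-1", "PO-1", "PO-1", "PO-2", "PO-2"],
    "activity": ["create order", "approve order", "pay invoice",
                 "create order", "pay invoice"],
    "timestamp": pd.to_datetime([
        "2022-01-03", "2022-01-04", "2022-01-10",
        "2022-01-05", "2022-01-06"]),
})

# Reconstruct each case's "variant": the ordered sequence of activities.
variants = (log.sort_values("timestamp")
               .groupby("case_id")["activity"]
               .agg(" -> ".join))

# Cases that skip the approval step deviate from the designed process.
expected = "create order -> approve order -> pay invoice"
deviations = variants[variants != expected]
print(deviations)
```

Dedicated process mining tools add process discovery algorithms, visualization and bottleneck analysis on top of this basic variant reconstruction.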

3. Robotic Process Automation (RPA) & intelligent automation

RPA is a productivity tool that automates manual and routine activities that follow clear-cut rules by configuring scripts and “bots” to activate specific keystrokes and interface interactions in an automated manner. The result is that the bots can be used to automate selected tasks and transaction steps within a process, such as comparing records and processing transactions. These may include manipulating data, passing data to and from unlinked applications, triggering responses, or executing transactions. RPA consists of software and app-based tools like rules engines, workflow, and screen scraping.
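Commercial RPA tools execute such steps at the user-interface level, but the record-comparison logic a bot carries out can be sketched in plain Python; the two system extracts and their fields are hypothetical:

```python
import pandas as pd

# Hypothetical extracts from two unlinked systems (names are illustrative).
erp = pd.DataFrame({"invoice": ["A1", "A2", "A3"], "amount": [100.0, 250.0, 80.0]})
bank = pd.DataFrame({"invoice": ["A1", "A2"], "amount": [100.0, 240.0]})

# A bot-style reconciliation: match on invoice number, flag missing
# counterparts and amount mismatches.
recon = erp.merge(bank, on="invoice", how="left",
                  suffixes=("_erp", "_bank"), indicator=True)
issues = recon[(recon["_merge"] != "both")
               | (recon["amount_erp"] != recon["amount_bank"])]
print(issues["invoice"].tolist())
```

In an actual RPA deployment, this comparison would be wrapped in a bot that logs into both applications, extracts the records via their interfaces, and routes the flagged items to a workflow for human review.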

4. Cognitive technology

Cognitive technologies refer to a class of technology that can absorb information, reason, and think in ways similar to humans. For years, their adoption has been trending upward across all industries. Organizations are already embarking on implementing cognitive technologies in their key business processes to improve process execution – and with this new reliance on technology, new risks arise on which IAFs must perform audits.

Today’s intelligent automation innovations have the transformational potential to increase the speed, operational efficiency and cost effectiveness of the IAF’s internal processes, and to empower internal audit professionals to generate more impactful insights, enabling smarter decisions more quickly. Whether or not an IAF chooses to leverage intelligent automation technologies itself, it is likely part of an organization that requires it to partake in them, giving rise to the need for the technology-enabled Internal Audit Function.

Using the available data and an adequate understanding of intelligent automation are prerequisite skills for performing audits with – and of – cognitive technologies. As IAFs further mature in their use of automation tools, they will become better positioned to harness value for their organization.

We conclude with an overview of the advantages and opportunities for IAFs in leveraging these technologies. See Figure 3.


Figure 3. Advantages of technology for internal audit.

5. Emerging technologies

Emerging technologies refers to a range of technologies relevant for IAFs, either as an audit object or as a means to improve the audit process itself. We have identified the following set of technologies that are relevant and emerging for IAFs.

Algorithms / artificial intelligence (AI)

A broad and comprehensive algorithms and AI-related risk assessment process is essential for data-driven organizations that want to be in control. The question for IAFs is how to organize this risk assessment process. One auditable topic to consider is how accountability for uses of data is organized between data management teams, application development teams, and business users. Another auditable topic is the formation of network arrangements with third parties. An IAF also needs a long list of known AI-related risk factors, together with a list of associated controls that can be used to audit those risks from a variety of perspectives within an organization. The first step for an IAF is taking the strategic decision to take a good look at its algorithms and AI-related risks and where they come from. Currently, internal auditors can audit algorithms and provide assurance for AI frameworks.

Machine Learning is a way to teach a computer model what to do by giving it many labelled examples (input data) and letting the computer learn from experience, instead of programming the human way of thinking into an explicit step-by-step recipe. Deep Learning is a subfield of Machine Learning in which the algorithms are inspired by the human brain (a biological neural network); we therefore call these algorithms artificial neural networks.
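As a toy illustration of learning from labelled examples rather than programming explicit rules, the snippet below trains a single perceptron (the simplest artificial neuron) to flag amounts that exceed an approval limit; the data and the task are invented for the example:

```python
# Labelled examples: (amount, approval_limit) in thousands -> 1 if it should
# be flagged, 0 otherwise. The rule itself is never written into the code.
examples = [
    ((12.0, 10.0), 1), ((3.0, 10.0), 0),
    ((25.0, 20.0), 1), ((8.0, 20.0), 0),
]

w = [0.0, 0.0]  # weights, one per input feature
b = 0.0         # bias term

for _ in range(20):                       # a few passes over the training data
    for (x1, x2), y in examples:
        guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - guess                   # update only when the guess is wrong
        w[0] += err * x1
        w[1] += err * x2
        b += err

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(predict(30.0, 10.0))  # a clear overrun of the limit is flagged
```

After a few passes, the weights encode the rule “amount above limit” without it ever being stated explicitly; Deep Learning stacks many such neurons in layers to learn far more complex patterns.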

Cloud computing

Cloud computing is an architecture that provides easy, on-demand access to a pool of shared and configurable computing resources. These resources can be made available and released quickly, with minimal management effort or provider interaction. We see that some IAFs have prepared key-control frameworks for data stored in the cloud and provide assurance over cloud computing.

Internet of Things (IoT)

“The network of devices, vehicles, and home appliances that contain electronics, software, actuators, and connectivity which allows these things to connect, interact and exchange data.” Leading IAFs are using IoT technology for continuous monitoring of maintenance parameters.


Drones

In technological terms, drones are unmanned aircraft. Essentially, a drone is a flying robot that can be remotely controlled or fly autonomously through software-controlled flight plans in its embedded systems, working in conjunction with onboard sensors and GPS. Put simply, IoT connects physical objects to the digital world, and drones extend the physical observation methodology remotely.

Internal audit conducts independent reviews, exposes (possible) vulnerabilities and risks and points the way to solutions. Leading IAFs are using drones for inventory reviews on remote locations. In this way, IAFs offer organizations assurance and insights on these emerging technologies.

Based on a global KPMG survey ([KPMG21]), we observed that only a few leading IAFs have the expertise and capabilities to perform audits on all these topics or to integrate these technologies within their own operations. A reference framework or a work program is often lacking. For IAFs, it’s not a question of whether there is a need for auditing; it’s a question of when.

In the next section, we provide a roadmap to the technology-enabled internal audit.

Part II: Roadmap towards technology-enabled internal audit

We will discuss the differences and effects of a technology-enabled Internal Audit compared to a more traditional IAF and why Positioning, People and Process are crucial elements for an IAF embedding technology in its methodology to add value and improve operations in the organization.

The Positioning aspect touches on the positioning of IAFs within the organization: its governance, mandate, independence, relationships and, importantly, access to structured and unstructured data. The People aspect looks at the competencies and skills of the individuals within the internal audit team, or at the disposal of the internal audit team. Lastly, but most importantly, the Process aspect considers the various tools, options and solutions that allow IAFs to utilize data effectively and successfully as part of their risk-based internal audit approach and methodology.

To remain relevant in current times, the end goal for IAFs should be to effectively implement the use of technology in its risk-based approach to auditing. Each organization will have a different journey to get to the end goal; however, considering Positioning, People and Process should be the starting point.

Traditional versus tech-enabled IAF

Traditional IAFs establish an annual plan and a long-term (year or multi-year) plan that is not, or hardly, updated for emerging risks and developments. The level of assurance of an audit or advisory engagement also depends on the judgmental or statistical sampling work of the audit team, and audit findings are based on partial observations.

A technology-enabled IAF moves beyond the traditional approach to robust and dynamic planning, with data-driven feedback loops between the IAF and the Executive Board or Audit Committee that provide greater insight to assist management decision-making on process improvement and control effectiveness. The risk analysis is conducted with input from data analytics, resulting in a comprehensive and risk-based audit plan. A technology-enabled IAF provides better assurance and insights based on testing the entire population. Auditors are freed up to focus on quality and the more strategic parts of the audit.


Positioning

Positioning refers to whether the IAF is sufficiently structured and well placed (reporting lines within the organizational structure) to enable it to contribute to business performance. In this context, positioning means having suitably mandated access to data and the business, and the respect of the other departments across the organization.

This would suffice for a traditional IAF; however, organizations should consider a strategy to implement, sustain and expand the use of technology in their internal audits. More importantly, they should consider the added value derived from using technologies to draw insights from vast volumes of information, drawn from across the organization and external sources.

Successful IAFs of the future will be positioned in such a way that they will leverage technology to add value to management and the board. This requires transforming the way IAFs plan, execute, report audits, and manage stakeholder relationships.

Positioning a technology-enabled IAF within an organization is key – not just the use of technology in audits, but also effectively making use of data, existing infrastructure, and the technical capabilities of data analytics software in its processes. Specifically, a technology-enabled IAF should:

  • be characterized by strong relationships at the highest levels and have a regular presence in major governance and control forums throughout the organization while maintaining its independence and objectivity.
  • have a comprehensive understanding of Governance, Risk and Compliance (GRC) framework of the business, including its strategies, products, risks, processes, systems, regulations, and planned initiatives.
  • be recognized by stakeholders as a function that provides quality challenge, drives change within the organization and can connect-the-dots across lines of business and functions utilizing technology.
  • have an integral role in the governance structure as the 3rd line, which is clearly aligned with the organization’s objectives, articulated, and widely understood throughout the organization; and
  • have a defined and documented brand that permeates all aspects of the internal audit department, IAF strategy and is widely recognized and respected both internally and externally.


People

Many traditional IAFs face challenges in concretely implementing more data-driven procedures into the internal audit process ([Veld15]). Instead of focusing on tools and technology as the entry point for enablement, IAFs should consider the competencies and capabilities that are needed to utilize these tools and technologies effectively.

Technology-driven internal auditing requires a significant amount of critical thinking and understanding of data. Faced with new business processes, auditors must not only be able to quickly understand a new business process and its related data; they must also identify risks that can be quantified and understand how to create analytics-enabled procedures and visualizations of the results which address those risks. For this reason, evaluating and identifying the IAF team’s skills and competencies are fundamental to successful technology-enabled IAF.

Too often, internal auditors have been trained in the next best tool to quickly keep up with the speed of changing technologies, without addressing the fundamental purpose of those technologies. As a result, we are all too familiar with participating in training and then, within a week, forgetting most of what was learned or failing to identify the use case in daily work. Digital awareness is key for internal auditors to identify opportunities and leverage relevant training.

Technology-enabled IAFs have a staffing strategy and talent attraction plan based on their organizational structure, goals, and long-term strategy. Leading technology-enabled IAFs hire employees such as data scientists and create a fully-fledged digital internal audit center of excellence, while it is more common for emerging technology-enabled IAFs to have one or two data analytics and IT Audit specialists in their team.

IAFs that are starting their tech-enabled journey may find it difficult to balance their short-term and long-term staffing requirements. Reliance on third parties – including IT resources from another internal department, a tool vendor, audit/consulting firms or temporary contractors – is a common way to address initial, part-time, or sudden incremental needs. These auditors can enable greater flexibility and be a catalyst for implementing a more technology-driven approach.


Process

A leading internal audit team has a technology-enabled methodology that embeds data analytics, internal audit management applications and GRC solutions into every part of the internal audit process. To appropriately integrate technology in each step of the internal audit methodology, the IAF should partner with the organization to understand the systems, data and scripts that support the business areas.

Partnerships with Risk & Compliance teams are leveraged to build joint business cases to improve business processes with data in the business. Moreover, a leading IAF team should also cooperate with IT on an operational level, while maintaining its independent role, and understanding the information that needs to be provided to receive the correct data. Each stage of the IAF’s audit methodology can use data, and prioritizing a “data first” approach will provide the required paradigm shift.

To guide IAFs on how to enhance the overall internal audit cycle, we focus on the following key stages (see also Figure 4):

  1. Planning
  2. Scoping
  3. Fieldwork
  4. Reporting, monitoring and follow-up.

In addition to this cycle per internal audit engagement, a technology-enabled IAF can embark on a continuous-auditing way of working, in which the data output of the preceding audit is leveraged as input for the next audit.


Figure 4. Internal audit execution process.

Planning & scoping

To succeed in embedding data analytics throughout the audit process, the focus on data analytics is introduced in a risk-based planning phase. To identify points of focus during planning and derive meaningful insights throughout the audit process, a leading IAF should leverage business data, technology, analytics, and external sector relevant factors to:

  • Gain data-driven insights prior to fieldwork.
  • Enhance audit objectives with digitization of risk assessments.
  • Identify risks based on automated KPI calculations and data used for prior reporting; and
  • Take an integrated approach including all governance functions to determine a single risk source of truth.

Fieldwork

The technology-enabled Internal Audit Function is not devoid of detailed manual testing. The IAF is aware of this and can identify which technology is best applied when testing controls and mitigating factors, frequently with Computer-Assisted Audit Techniques (CAATs). The business area, system, and process are the input factors that determine the approach. An experienced tech-enabled auditor assesses this continuously, based on the availability of data and the required assurance. Leading IAFs should:

  • Identify procedural weaknesses or critical transactions using process mining, data analytics, or ERP analytics. These create meaningful and insightful observations in the audit execution.
  • Harness existing technology to automate audit procedures with prebuilt bots and routines for well-known business processes.
  • Apply internal audit management (or GRC) software to create and facilitate their methodology and templates.

Reporting, monitoring & follow-up

Internal audit reports to various stakeholders on a regular basis. This includes reporting audit results to auditees and senior management, as well as reporting on other generic topics as guided by the IIA Standards. Written reporting is complemented by data-driven dashboards or connected web-based reports for continuous and real-time reporting. Technology empowers the IAF to monitor and follow up by simply “refreshing” the input data. Leading IAFs should:

  • Develop an effective communication plan which could make use of web-based reporting platforms, such as KPMG Dialogue, to deliver reports which are integrated and seamlessly clarify observations with links to follow up action plans and embedded data-driven results; and
  • Consider integrated and continuous monitoring reports by visualizing the results of data analysis instead of text-based reporting, for example using PowerBI, QlikView or Tableau.

A roadmap for a large Dutch pension fund administrator

The organization is a non-profit cooperative pension administration organization, offering its clients pension management and asset management services. It manages the pensions of various pension funds, their affiliated employers, and their employees. Looking to modernize its internal audit department, it developed into a technology-enabled IAF. The roadmap shown in Figure 5 considers the above-mentioned focus areas: Positioning, People and Process.


Figure 5. Roadmap for technology-enabled internal audit.


Organizations are integrating increasingly advanced technologies into their way of working. IAFs are expected to mirror the evolution of their organizations to remain relevant, add value and inspire the trust of their stakeholders. Each IAF will have a different journey to improve and innovate to match its organization’s technology-enablement, whereby Positioning, People and Process should be the starting point. Understanding how to position an IAF that uses technology is essential for IAFs to continue to meet the organization’s expectations, coupled with the right People: individuals with the competencies and skills to drive a technology-enabled audit process. The right skills and competencies are necessary, but not sufficient, for an IAF to improve its function with technology; understanding how those skills, competencies and technologies fit into the internal audit process is critical to execution.

Where Process includes tools, options and solutions that allow IAFs to utilize data effectively and successfully as part of its risk-based internal approach and the audit methodology, IAFs must seek to keep up with developments in technology which have an impact or can be leveraged in the internal audit process. In doing so, and positioned correctly in the organization with the right people, the IAF will be able to continue to play a vital and relevant role in their organizations. A technology-enabled IAF can contribute to the fundamental shift in perspective and understanding that a dynamic risk environment presents threats and challenges not just to the organization itself, but to all the stakeholders who have an interest in the organization.


[Chu16] Maes, T. & Chuah, H. (2016). Technology-Enabled Internal Audit. Compact 2016/4. Retrieved from:

[Idem18] Idema, S., & Weel, P. (2018). Audit Analytics. Compact 2018/4. Retrieved from:

[IIA] Institute of Internal Auditors (n.d.). Profession. Retrieved from:

[KPMG21] KPMG (2021). Agile, resilient & transformative: Global IT Internal Audit Outlook. KPMG International.

[Veld15] Veld, M. A. op het, Veen, H. B. van, & Kessel, W. E. van (2015). Data Driven Dynamic Audit. Compact 2015/3. Retrieved from:

[Velt21] Veltkamp, C., & Jagesar, W. (2021). The impact of technological advancement in the audit. Compact 2021/3. Retrieved from:

Privacy audits

The importance of data privacy has increased considerably in the last couple of years, and the introduction of the General Data Protection Regulation (GDPR) has increased it even more. Data privacy is an important management aspect and contributes to sustainable investments. It should therefore take a prominent role in GRC efforts and ESG reporting. This article discusses the options for performing privacy audits and the relevance of their outcomes.


In recent years, there have been various developments with regard to data privacy. These developments, and especially the introduction of the General Data Protection Regulation (GDPR), have forced organizations to become more aware of the way they process personal data. Not only organizations have been confronted with these developments, however; individuals who entrust organizations with their data have also become more aware of the way their personal data is processed. The need for organizations to demonstrate compliance with data privacy laws, regulations and other data privacy requirements has therefore increased.

Since data privacy is an important management aspect and contributes to sustainable investments, it has taken a prominent role in Governance, Risk management & Compliance (GRC) efforts and Environmental, Social & Governance (ESG) reporting. GRC and ESG challenge organizations to approach the way they deal with personal data from different angles and to reconsider the way they report on their efforts. However, because of the complexity of privacy laws and regulations and a lack of awareness, demonstrating the adequacy of a privacy implementation appears to be quite a challenging task for organizations. A lot can be gained in determining whether controls are suitably applied in this regard, since there are no straightforward methods to provide this insight. The poor state of awareness and knowledge on this topic makes this even more complicated.

This article explains the criticality of GDPR in obtaining compliance, followed by a description of the various ways in which privacy compliance reporting can be performed. In addition, the role of privacy audits, their value, and the relationship of privacy audits to GRC & ESG is explained, prior to providing some closing thoughts on the development of the sector. The key question in this article is whether privacy audits are relevant for GRC & ESG.

Criticality of the GDPR in obtaining compliancy

Although the GDPR has been in force since May 2018, it is still a huge challenge for organizations to cope with. This privacy regulation has not only resulted in the European Commission requiring organizations to prove their level of compliance, it has also increased the interest of individuals in how organizations process their personal data. The most important principles of the GDPR, as listed in Article 5, are:

  1. Lawfulness, fairness and transparency
  2. Purpose limitation
  3. Data minimization
  4. Accuracy
  5. Storage limitation
  6. Integrity and confidentiality

The rights that individuals have as data subjects are listed in Chapter 3 of the GDPR and translate into requirements that organizations should meet, such as:

  1. The right to be informed – organizations should be able to inform data subjects about how their data is collected, processed and stored (including for how long), and whether data is shared with other (third) parties.
  2. The right of access – organizations should be able to provide data subjects with access to their data and give them insight into what personal data the organization processes.
  3. The right to rectification – organizations must rectify a data subject's personal data if it is incorrect.
  4. The right to erasure/the right to be forgotten – in certain cases, such as when data is processed unlawfully, the individual has the right to be forgotten, which means that all personal data of the individual must be deleted.
  5. The right to restrict processing – under certain circumstances, for example when doubts arise about the accuracy of the data, the data subject can have the processing of their personal data restricted.


A starting point for any organization to determine whether and which privacy requirements apply is a clear view of the incoming and outgoing data flows and the way data is processed within and outside the organization. If personal data is processed, an organization should have a processing register; personal data is defined here as any data that can be related to natural persons. In addition, the organization should perform Data Protection Impact Assessments (DPIAs) for projects that implement new information systems to process sensitive personal data and where a high degree of privacy protection is needed.

The obligation to keep a data processing register and the obligation to carry out DPIAs ensure that the basic principles required by the privacy regulation for the processing of personal data (elaborated in Chapter 2 of the GDPR) are covered and that privacy control has the right scope. Furthermore, these principles ensure that an organization processes personal data in a legitimate, fair and transparent way. Organizations should bear in mind that processing personal data is limited to the purpose for which the data was obtained, and all personal data requested should be linkable to that initial purpose. The latter relates to data minimization, which is also one of the basic principles of the GDPR. Regarding storage, organizations should ensure that data is not stored longer than necessary. The personal data itself should also be accurate and must be handled with integrity and confidentiality.

Organizations are held accountable by the GDPR for demonstrating their compliance with applicable privacy regulations. The role of the Data Protection Officer (DPO) has grown considerably in this regard. The DPO is often seen as the first point of contact for data privacy within an organization; appointing a DPO is even mandatory when the organization is a public authority or body. DPOs are appointed to fulfill several tasks, such as informing and advising management and employees about data privacy regulations, monitoring compliance with the GDPR and increasing awareness of data privacy, for example by introducing mandatory privacy awareness training programs.

Demonstrating compliance with privacy regulations can be quite challenging for organizations, and especially for DPOs. The certification mechanism for demonstrating compliance is outlined in Article 42 of the GDPR. Practice has shown, however, that demonstrating compliance is more complex than this article suggests. At this moment, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens), the Dutch accreditation council (Raad voor Accreditatie) and other regulators have not yet arrived at a practical approach for issuing certificates to organizations that meet the requirements, due to the elusive nature of the article. Besides the certification approach foreseen in the GDPR, there are various approaches in the market that organizations can use to report on their privacy compliance. Some of these reporting approaches are elaborated on in the next section.

Reporting on privacy compliance

There are different ways in which organizations can report on privacy. There are, of course, self-assessments and advisory-based privacy reports, but these are mostly unstructured and their conclusions subjective, which makes it difficult to benchmark organizations against each other. To make privacy compliance more comparable and the results less questionable, there are, broadly speaking, two more structured ways of reporting in the Netherlands: reporting based on privacy assurance and reporting based on privacy certification. Both are explained in the following paragraphs of this section.

A. Reporting based on privacy assurance


Assurance engagements can be defined as assignments in which auditors give an independent third-party statement ("opinion") on an object by testing it against suitable criteria. Assurance engagements are meant to instill confidence in the intended users. These engagements originate in the financial audit sector. How they should be performed and reported is predefined by internationally accepted standards ("Standaarden") and guidelines ("Richtlijnen"), propagated by the NBA and NOREA.1 As part of assurance engagements, controls are tested using auditing techniques consisting of a Test of Design (ToD) and/or a Test of Operating Effectiveness (ToE). Based on the results of this testing, an opinion is given on the object in scope. This opinion can be unqualified, qualified, adverse, or a disclaimer of opinion. The assurance standards and guidelines most commonly used in the Netherlands to report on privacy are ISAE 3000, SOC 1 and SOC 2. ISAE 3000 is a generic standard for assurance on non-financial information. SOC 1 is meant to report relevant non-financial control information for financial statement analysis purposes, and SOC 2 is set up for IT organizations that require assurance regarding security, availability, processing integrity, confidentiality and privacy-related controls. Engagements based on ISAE 3000, SOC 1 and SOC 2 can lead to opinions on privacy control. The criteria in scope of an ISAE 3000 or SOC 1 engagement can be chosen freely, as long as the choice leads to a cohesive, clear and usable result. The criteria for SOC 2 are prescribed, although extension is possible.


NOREA gives organizations the possibility to obtain a Privacy Audit Proof quality mark for individual or multiple processing activities of personal data or for an entire organization ([NORE21]). This mark can be obtained based on an ISAE 3000 or SOC 2 privacy assurance report with an unqualified opinion. The NOREA Taskforce Privacy has drawn up terms containing guidelines for performing privacy assurance engagements and obtaining the Privacy Audit Proof quality mark. One of the conditions for this quality mark is the use of the NOREA Privacy Control Framework (PCF) as the set of criteria in case of an ISAE 3000 engagement, or the use of the criteria elaborated in the privacy section of a SOC 2 assurance report. The Privacy Audit Proof quality mark can be obtained by either controllers or processors. After submission of an unqualified assurance report and the relevant information, NOREA permits the successfully audited organization to use the mark for one year, under certain conditions.

The extent to which an opinion on privacy control resulting from an assurance engagement equates to an opinion on privacy compliance depends on the criteria in scope of the engagement. An opinion on privacy controls, although a good indicator, can never be seen as an all-encompassing compliance statement. Because the GDPR is ambiguous and the selection of controls in scope requires interpretation, an objective opinion on compliance by financial or IT auditors is not possible.

B. Reporting based on privacy certification

Certification originates from quality control. To be eligible for certification, an independent, accredited party should assess whether the management system of the organization concerned meets all requirements of the standard. Certification audits are meant to make products and services comparable. In addition, striving for continuous improvement is an important part of these audits.

In general, the most commonly used certifications in the Netherlands are those originating from the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) ([ISO21]). Examples of ISO/IEC standards are ISO/IEC 27001 (information security management), ISO/IEC 27002 (information security controls), ISO/IEC 27018 (protection of personal data in public clouds) and ISO/IEC 29100 (privacy framework). In addition, ISO/IEC 27701 was introduced in August 2019 as an extension of ISO/IEC 27001. This standard focuses on the privacy information management system (PIMS). It assists organizations in establishing systems to support compliance with the European Union General Data Protection Regulation (GDPR) and other data privacy requirements, but as a global standard it is not GDPR-specific.

Other privacy certification standards are, for example, BS 10012, the European Privacy Seal (also named EuroPrise), the Europrivacy GDPR certification and private initiatives, like certification on the Data Pro Code ([NLdi21]). BS 10012, the British Standard on PIMS, has mostly been replaced by ISO/IEC 27701. EuroPrise provides certifications that demonstrate that, for example, IT products and IT-based services comply with European data protection laws ([EuPr21]). The Europrivacy GDPR certification, as stated on its website, "provides a state-of-the-art methodology to certify the conformity of all sorts of data processing with the GDPR" ([Euro21]). In the Netherlands, NLdigital, as an organization of ICT companies, has developed the Data Pro Code. This code specifies the requirements of the GDPR for data processors. Due to their specific nature, the Europrivacy GDPR certification and certification on the Data Pro Code are less commonly used in the Netherlands.

C. Privacy assurance versus privacy certification

The main difference between privacy assurance and certification is that assurance is more assignment-specific and in-depth. This is illustrated in Figure 1, which summarizes the main differences between privacy assurance based on ISAE/COS 3000 or Richtlijn 3000 and certification according to ISO/IEC 27701.


Figure 1. Comparison of privacy assurance versus privacy certification (based on [Zwin21]).

Since the privacy reporting business has not yet matured, privacy assurance and privacy certification can coexist, each with its own benefits. Organizations that want to report on privacy should choose the form that suits their needs, which depends, for instance, on their level of maturity.

Privacy audits

Although a lot of knowledge and experience is available, performing an audit is an intensive process. This is especially true of privacy audits: since personal data cuts across the entire organization, forms a separate discipline and is not tangible, privacy audits are considered even more difficult.

This section describes typical aspects of privacy audits. As a model for describing these aspects, a privacy audit is considered to follow the phases shown in Figure 2.


Figure 2. Privacy audit phases.

In general, the privacy audit phases look like those of "regular" audits. There are a few differences, however. One of the most important differences is the determination of the scope, which is more difficult for privacy audits. A clear view of the incoming and outgoing data flows and the way data is processed within and outside the organization is a good starting point for privacy-related efforts, and therefore also for scope determination. The processing register and DPIAs are other useful "anchors". Data flows and the processing register list what data is processed in which system and which part can be considered personal data. DPIAs can provide further insight into the division of responsibilities, the sensitivity of the data, applicable laws, and relevant threats and vulnerabilities. Although all of the aforementioned can help, a few problems remain to be solved. The most important of these are the existence of unstructured data and the effects of working in information supply chains.

  • Unstructured personal data is data that is not stored in dedicated information systems. Examples are personal data stored in Word or Excel files on hard disks or on server folders used in office automation, personal data in mailbox messages, or personal data in physical files. Due to its unstructured character, scope determination is difficult by nature. A possible solution can be found in tools that scan files for keywords indicating personal data, like "Mr.", "Mrs." or "street". A more structural solution can be found in data cleansing as part of archiving routines and in the "privacy by design" and "privacy by default" aspects of system adjustments or implementations. Whereas scanning remains a point solution, archiving and system adjustments or implementations offer a more structural remedy.
  • Working in information supply chains leads to the problem that the division of responsibilities among the involved parties is not always clear. In outsourcing relations, processing agreements can help clarify the relationship between what can be considered the processor and the controller. Whereas the relationships in these relatively simple chains are mostly straightforward, less simple chains, such as those in the Dutch healthcare or justice systems, lead to more difficult puzzles. Although the existence of "national key registers" (in Dutch: "Basisregisters") provides some clarity in the public sector, most of the involved relationships can best be considered co-processor relationships with joint responsibilities, which should be clarified one by one. In addition, there are relationships in which the accumulation of processor tasks leads to what can be considered controller tasks, due to the unique collection of personal data. This situation leads to a whole new view on the scoping discussion, with accompanying challenges.
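The keyword-scanning approach for unstructured data mentioned above can be sketched in a few lines. This is a minimal illustration, not a production data-discovery tool: the keyword list, the e-mail pattern and the restriction to `.txt` files are assumptions for the sketch.

```python
import re
from pathlib import Path

# Illustrative indicators only; a real scan would use a far richer set
# (names, addresses, national ID-number patterns, phone numbers, etc.).
KEYWORDS = ["Mr.", "Mrs.", "street"]
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # assumed e-mail pattern

def scan_for_personal_data(root: Path) -> dict:
    """Return {file path: [matched indicators]} for text files under `root`."""
    hits = {}
    for path in root.rglob("*.txt"):
        text = path.read_text(errors="ignore")
        found = [kw for kw in KEYWORDS if kw in text]
        if EMAIL_RE.search(text):
            found.append("e-mail address")
        if found:
            hits[str(path)] = found
    return hits
```

Such a scan only flags candidate files for manual follow-up; it cannot by itself determine whether a file is in scope of the privacy audit.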

Other difficulties in performing a privacy audit arise from the Schrems II ruling. As a result of this ruling, processing personal data of European citizens in the United States under the so-called Privacy Shield agreement is considered illegal. Since data access is also data processing, the use of US cloud providers must also be considered illegal. Although solutions are being specified, such as new contractual clauses and data location indicators, no entirely privacy-compliant solution is available yet. Considering that the US intelligence agencies are not bound by any privacy clauses, and that European citizens are not represented on the American privacy oversight board, a gap remains.

Testing privacy controls is not simple either. There are, of course, standard privacy control frameworks, and the largest part of these frameworks consists of security and PIMS controls, with which there is a lot of testing experience. Testing controls that guard the rights of data subjects, like the rights to be informed, of access and to rectification, is more difficult, however. This difficulty arises from the fact that these controls are not always straightforward, and testing them requires interpretation of policies and legal knowledge. These difficulties can of course be overcome by making explicit that an audit on privacy cannot be considered a legal assessment. This disclaimer is, however, not helpful in gaining the intended trust.

To improve the chance of successfully testing controls, most privacy audits are preceded by a privacy assessment advisory engagement. Such advisory engagements make it possible to suggest improvements that help organizations, whereas audits, especially those devoted to assurance, leave less room to do so.

Reports resulting from privacy audits are mainly dictated by the assurance or certification standards described in the preceding section. The standard and the resulting report should suit the maturity level of the audited object and the trust needed, so that maximum effect can be achieved.

Added value of privacy audits

Privacy audits lead to several benefits and added value. In this section the most important are listed.

Building or restoring confidence – Like any audit performed for an assurance or certification assignment, a privacy audit helps to build or restore confidence. This is even more so if the privacy audit leads to a quality mark.

Increasing awareness – Whether or not an audit leads to a qualified opinion, any audit raises awareness. The questions raised and the evidence gathered make employees aware. Since the relevance of privacy has increased over the past years, a privacy audit can help prioritize the subject within the organization, as the outcomes may lead to necessary follow-up actions that require the engagement of several employees and departments within the organization.

Providing an independent perspective – As mentioned before, privacy is not an easy subject. Therefore, subjectivity and self-interest are common pitfalls. Auditors can help avoid risks related to these pitfalls by independently rationalizing situations.

Giving advice on better practices – Auditors are trained to give their opinion based on the latest regulations and standards. Their advice is therefore based on better practices. Since privacy is an evolving and immature field, advising on better practices has taken a prominent role in their job and the services they provide.

Facilitating compliance discussions – Last but not least, although auditors do not give an opinion on compliance, they facilitate compliance discussions inside and outside client organizations through their opinion on relevant criteria and controls. In this respect, the auditor can also help in discussions with supervisory boards. Assurance, certification and quality marks are proven assets in relationships with these organizations.

Client case: Privacy audits at RDW

A good example of how privacy reporting can be helpful are the privacy audits performed for RDW, the Dutch public sector agency that administers motor vehicles and driving licenses.

RDW is responsible for the licensing of vehicles and vehicle parts, supervision and enforcement, registration, information provision and issuing documents. RDW maintains the “national key registers” (“Basisregisters”) of the Dutch government with regard to license plate registration in the “Basis Kentekenregister” (BKR) and the registration of driving licenses in the “Centraal Rijbewijzenregister” (CRB). In addition, RDW is processor of on-street parking data in the “Nationaal Parkeerregister” (NPR) for many Dutch municipalities.

Since many interests are involved and a lot of personal data is being processed, RDW is keen on being transparent about privacy control. As RDW's assurance provider, KPMG performs the privacy audits with respect to the abovementioned key registers: BKR, CRB and NPR.

In performing these audits, the aforementioned scoping challenges arise. They are dealt with by, among other things, restricting the scope to the lawfully and contractually confirmed tasks and the descriptions in processing registers and DPIAs. Furthermore, because RDW has a three-lines-of-defense model, with quality control and the resulting reports as the second line, it has managed to implement the privacy controls listed in the NOREA Privacy Control Framework.

According to RDW, privacy reports and quality marks are helpful, for example, in communication with partners in automotive and governmental information supply chains and with supervisory boards. Although there is a lot of innovation around, for example, connected and autonomous vehicles, RDW states that it is able to manage the accompanying challenges, including those regarding privacy protection. If something unintended happens, such as a data breach, RDW is in a good position to give an explanation, supported by audit results.

Position of privacy audits in GRC & ESG

ESG measures the sustainable and ethical impact of investments in an organization based on environmental, social and governance criteria. Previous events – such as the Facebook privacy scandal, in which user data could be accessed without the explicit consent of these users ([RTLN19]) – have shown that data breaches can raise a lot of questions from investors or even result in decreasing share prices. Insufficient GRC efforts regarding data privacy can even lead to doubts about the social responsibility of an organization.

As mentioned in the previous sections, there are various ways for organizations to demonstrate their compliance with data privacy regulations. The importance of presenting how an organization deals with data privacy is further emphasized by the introduction of ESG, since ESG demands that privacy is also addressed from an environmental, social and governance point of view.

The outcomes of privacy audits can be used as a basis for one of the ESG areas. Privacy audits can provide insight into the extent to which measures are effective and offer a means to monitor privacy controls. In addition, findings identified in a privacy audit can help in ESG, as they make organizations aware of the improvements they have to make to prevent such events in the future and to (re)gain the trust of all relevant stakeholders, including (potential) investors.

Conclusion and final thoughts

Although privacy audits cannot provide the ultimate answer on whether organizations comply with all applicable data privacy regulations, they do offer added value. Therefore, the answer to the earlier question of whether privacy audits are relevant for GRC and ESG is, in our view, undoubtedly: "yes, they are!"

Using privacy audits, organizations obtain insight into the current state of affairs regarding data privacy management. The outcomes of a privacy audit can also further increase awareness within the organization, as they highlight shortcomings that have to be followed up or investigated by the relevant parties. Next to the benefits for the organization itself, a privacy audit facilitates discussions with third parties and supervisory boards when it comes to demonstrating compliance with data privacy regulations, especially when the audit has resulted in a report provided by an independent external privacy auditor. Another advantage is that privacy audits lay the foundation for further ESG reporting, in which an organization can describe the measures taken to ensure data privacy and the way progress is monitored, which in turn can substantiate that sustainable investments are ensured at the organization in question. Privacy audits remain difficult, however, since personal data is cross-sectional, a separate discipline and not tangible.

Outsourcing and working in information supply chains are growing trends that offer many opportunities to those who want to profit from them. To gain maximum benefit, the organizations involved should not only focus on offering reliable services; they should also have a clear vision on GRC and ESG aspects. Privacy should be one of these aspects, and balanced reporting on all of the aforementioned is the challenge for the future.


  1. The NBA and NOREA are the Dutch professional bodies for financial and IT auditing, respectively.


[EU16] European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council. Official Journal of the European Union. Retrieved from:

[EuPr21] EuroPrise (2021). EuroPrise – the European Privacy Seal for IT Products and IT-Based Services. Retrieved from:

[Euro21] Europrivacy (2021). Europrivacy Certification. Retrieved from:

[ISO21] ISO (2021). Standards. Retrieved from:

[Koor13] Koorn, R., & Stoof, S. (2013). IT-assurance versus IT-certificering. Compact 2013/2. Retrieved from:

[NLdi21] NLdigital (2021). Data Pro Code. Retrieved from:

[NORE21] NOREA (2021). Privacy Audit Proof: Logo voor de betrouwbare verwerking van persoonsgegevens. Retrieved from:

[RTLN19] RTL Nieuws (2019, July 24). Recordboete voor Facebook van 5 miljard dollar om privacyschandaal. Retrieved from:

[Zwin21] Zwinkels, S., & Koorn, R. (2021). SOC 2 assurance becomes critical for cloud & IT service providers. Compact 2021/1. Retrieved from:

The impact of technological advancement in the audit

In the past few decades, technology has advanced rapidly, forever changing how organizations do business. Because of this development, auditors have had to change their approach when auditing financial statements. In this article, we will explore changes to business processes and the audit approach, as well as look at opportunities to further use technology to advance the audit profession.


Organizations are changing. Information technology is increasingly being used to digitize existing business processes or create entirely new business models. What most of these organizations have in common is that they either are required to publish their financial statements or do so voluntarily. Auditors are required to verify the reliability of these statements by performing procedures as prescribed by laws and regulations. While there have been great innovations in the way these audit procedures are performed, these were mostly focused on efficiency or convenience. Audit procedures are automated or outsourced to minimize the effort required to comply with these laws and regulations. Technology is still underutilized in increasing the insights provided by an audit or enriching the client experience. Recently, this began to change. Businesses are expecting more from their auditors, which is leading to changes in the way audits are performed, and the way the results of the audit are presented. In this article, we will look at the digitalization of business processes using (ERP) technology, the state of auditing this technology, and future developments for further digitizing the audit.

Digitizing business processes and the audit

Why is technology becoming more relevant in the audit?

The information technology used in business processes is always evolving but changes significantly with new breakthroughs. The most recent development with a major impact on the way processes are defined is the increasing availability of cloud services, including cloud-based ERP systems. Cloud ERP systems are standardizing business processes and are becoming easier to use. Furthermore, the costs of using technology for business processes are declining, because the high costs of IT infrastructure and maintenance are shared through the use of cloud services.

This has made the use of technology accessible to smaller organizations that might otherwise not have the resources to do so, and allows them to digitize their operations and shift from manual operations and controls to digital, automated ones. With more organizations undergoing this digital transformation, it is becoming increasingly relevant to include the technology used as part of the audit procedures. The use of the cloud and cloud services may also introduce new IT risks that organizations need to address and that should receive attention during the audit; examples are cloud security and data privacy.


Figure 1. Innovation trends ([KPMG20]).

Additionally, KPMG surveys on innovation trends indicate that 37% of organizations had innovation as a top priority during 2020, and 55% indicate that innovation is a top priority going forward. We can therefore expect more clients to start implementing emerging technologies such as continuous integration/deployment, data lakes, artificial intelligence and machine learning, and to integrate these technologies into core business processes. This will further increase the relevance of technology within the audit.

How is technology applied in audits today?

In modern audits, technology is increasingly incorporated into the way of working. Early innovations were mostly focused on convenience, such as automating complex sampling methods or tracking progress using bots. This line of innovation continued, with more standard audit procedures being automated or increasingly supported by tooling.

These innovations are mainly focused on increasing the efficiency of the work performed by the auditor. They add little value to the audit client, who might be completely unaware that these innovative solutions are being used during their audit. With increasing competition between audit firms, a shift is happening from applying technology to internal processes, to including the audit client in the technological journey of innovating on the audit. This shift in approach means that audit firms need to rethink the types of innovations being developed, and the way they are used in collaboration with the client.

The audit of the future

What automation is available today?

The shift in focus from innovating audit procedures to including the audit client in new audit innovations has led to several new solutions in which audit clients can directly interact with the technology used in the audit or are presented with the results of audit automation. An example are collaboration portals, which create an environment where auditors and audit clients work together during the audit process. Using these tools, both the auditor and the audit client can see what information has been requested, and from which employees. This ensures that both parties are aware of, and aligned on, the progress made.

To gain a better understanding of processes and how information flows through the organization, data analytics can be used to follow accounting records through the organization's business processes. This can be performed as an audit procedure to verify that all information is included in the financial statements. Another approach is process mining, which uses the data to model how the client's processes actually work. By comparing the model to the flowchart created during the audit walkthrough, it is possible to verify that client processes work as designed.
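The idea of comparing observed behavior against the designed process can be sketched in a simplified way. The activity names, traces and conformance rule below are illustrative assumptions; real process-mining tools work on full event logs with timestamps and use dedicated discovery and conformance-checking algorithms.

```python
# A designed order-to-cash flow (assumed for illustration).
DESIGNED_FLOW = ["create_order", "approve_order", "ship_goods",
                 "send_invoice", "receive_payment"]

def conforms(trace, designed=DESIGNED_FLOW):
    """A trace conforms if its activities appear in the designed order
    (skipping steps is allowed here, reordering is not)."""
    pos = 0
    for activity in trace:
        if activity not in designed:
            return False           # activity outside the designed process
        idx = designed.index(activity)
        if idx < pos:
            return False           # activity occurs out of order
        pos = idx
    return True

# Observed traces, e.g. extracted from an ERP system (illustrative data).
observed = {
    "order-001": ["create_order", "approve_order", "ship_goods",
                  "send_invoice", "receive_payment"],
    "order-002": ["create_order", "ship_goods", "approve_order"],  # approved after shipping
}
deviations = [oid for oid, trace in observed.items() if not conforms(trace)]
```

Deviating traces such as `order-002`, where approval follows shipment, are exactly the kind of exception the auditor would investigate further.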

Dashboarding is used more and more frequently to present the insights gained from other innovations. By presenting information on audit results or progress in a concise and visually appealing dashboard, the audit client can be better motivated to take action, either by supporting the audit progress to prevent delays, or by improving certain aspects of their business based on audit results.

What new audit innovations are on the horizon?

The increasing use of (cloud-based) ERP systems is increasing the volume of data available at clients, as well as improving data quality, since the data is contained in a single system. This will enable the development and use of more audit tooling that uses this data as input.

One of the terms frequently used when talking about the future of auditing is Continuous Monitoring and Continuous Auditing (CM/CA). This is an initiative that would move away from the auditor reporting their findings on internal controls only once a year during their interim procedures. The ideal situation: findings or non-compliance with existing procedures are reported as they happen. To realize this, a large degree of automation needs to be implemented at the audit client so that (the auditor's) monitoring tools can detect non-compliance as it is happening. This, combined with the complexity of connecting several audit client systems to the auditor's tooling, makes it difficult to implement.
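A minimal sketch of such a monitoring check, assuming a transaction feed is pulled periodically from the client's system (the field names and rule below are illustrative), could flag segregation-of-duties conflicts as they occur:

```python
# Hypothetical transaction feed, pulled periodically from the client's system.
transactions = [
    {"id": 1, "created_by": "alice", "approved_by": "bob"},
    {"id": 2, "created_by": "carol", "approved_by": "carol"},  # SoD conflict
]

def monitor_segregation_of_duties(feed):
    """Flag transactions where the creator also approved, so findings
    surface as they happen rather than once a year at the interim audit."""
    return [t["id"] for t in feed if t["created_by"] == t["approved_by"]]

violations = monitor_segregation_of_duties(transactions)  # [2]
```

The hard part of CM/CA is not a rule like this, but reliably feeding it with data from many different client systems.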

However, the first steps towards CM/CA have been made by creating technology for the automation of audit procedures. KPMG is currently rolling out several initiatives for automated testing of the audit client's internal controls. The first of these is a solution for automated testing of General IT Controls (GITC) within a SAP (ECC/S4) environment. This solution, Sapphire, allows for automated testing of controls based on standard reports generated from the audit client's system. By combining this with automated data extraction tools, it becomes possible to frequently export the required data from the audit client's systems and have a solution like Sapphire provide the results of GITC testing.

The second solution takes a different approach to automation, focusing on providing incremental innovations at scale. KPMG's Intelligent Platform for Automation (IPA) is a platform that hosts bots designed using a variety of coding languages or low-code tools. The bots on the platform are available globally, so that auditors from all over the world can collaborate in designing new ones. After quality review, these bots can then be used on local audits worldwide. The main benefit of the platform is that it also encourages the development and sharing of smaller-scale innovations, such as automating individual audit procedures or providing new insights to the client, for example cash proofing, three-way matching or dashboarding audit results.
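A three-way match of the kind such a bot could perform can be sketched as follows; the record layout is illustrative and not IPA's actual implementation:

```python
# Illustrative records keyed by purchase order number.
purchase_orders = {"PO-10": {"qty": 5, "unit_price": 900.0}}
goods_receipts = {"PO-10": {"qty": 5}}
invoices = {"PO-10": {"qty": 5, "amount": 4500.0}}

def three_way_match(po_id):
    """Match purchase order, goods receipt and invoice: quantities must
    agree, and the invoiced amount must equal the ordered value."""
    po, gr, inv = purchase_orders[po_id], goods_receipts[po_id], invoices[po_id]
    qty_ok = po["qty"] == gr["qty"] == inv["qty"]
    amount_ok = abs(po["qty"] * po["unit_price"] - inv["amount"]) < 0.01
    return qty_ok and amount_ok

matched = three_way_match("PO-10")  # True
```

Unmatched purchase orders would be reported as exceptions for the auditor to follow up on.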

Lastly, more insight can be provided to the client by comparing the results of the audit with external information, such as benchmarking. By standardizing reporting based on the above solutions, it becomes possible to create an (industry) benchmark. This benchmark can reach beyond simple financial information to internal control or process efficiency. By comparing the results of an audit with this benchmark, the client can gain new insights into their competitive position compared to other organizations within the same industry.

What are the challenges when realizing the audit of the future?

To realize the audit of the future, we need to overcome a number of challenges. The first is to bridge the gap between the new way of auditing and existing auditing standards. One aspect of the audit cannot be automated: the judgement of the auditor. The reality of business processes is that deviations will occur, either because a certain transaction doesn't fit within the defined processes, or because disruptions in the technology led to disruptions in the business processes. Whether or not such deviations occur, audit standards place the responsibility for judgement with a human auditor. To realize more automated audits, this gap between audit tools and standards needs to be bridged. As audit automation takes flight, this will likely be an often-discussed topic between auditors and accounting oversight boards.

The second challenge is of a very practical nature. Creating tools to automate auditing requires a significant upfront investment and will likely not yield profit until after several years of operation. This is less of an issue when audited organizations use widely adopted off-the-shelf systems, as the cost of development can be shared across several audit engagements that involve the same system. The issue can also be partially solved using low-code development, which typically requires less time and investment to create a piece of automation. Combined with code/bot-sharing platforms such as IPA, this allows for more efficient development of smaller pieces of automation for organizations with more unique systems.

Finally, there are many technical challenges to overcome related to data integrity and to ensuring that automated procedures produce results of consistent quality. The audit of the future therefore requires an auditor of the future. The current skill set of accounting knowledge needs to be supplemented with data analytics or coding skills, so that auditors can automate their more routine work and ensure its reliability. This can be accomplished by teaching auditors a coding language such as SQL or Python, or by implementing low-code tools such as Alteryx or Disco that are more beginner-friendly.
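As an example of the kind of routine procedure such an auditor could automate with basic Python skills (the journal entry layout below is hypothetical), consider a simple test for entries posted on weekends:

```python
from datetime import date

# Hypothetical journal entry extract.
journal_entries = [
    {"doc": "JE-1", "posted": date(2023, 3, 3), "amount": 1200.0},   # Friday
    {"doc": "JE-2", "posted": date(2023, 3, 5), "amount": 98000.0},  # Sunday
]

def weekend_postings(entries):
    """Routine journal entry test: select entries posted on a Saturday
    or Sunday (weekday() >= 5) for auditor follow-up."""
    return [e["doc"] for e in entries if e["posted"].weekday() >= 5]

flagged = weekend_postings(journal_entries)  # ["JE-2"]
```

The selection is automated; judging whether a flagged posting is actually problematic remains, per the standards discussed above, the auditor's job.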


New innovations and emerging technologies are increasing the relevance of auditing technology within financial statement audits. At the same time, client expectations are driving auditors to become more innovative within the audit, requiring the use of these same innovations and technologies. This still has its challenges due to external regulations, upfront costs, and changing skillset requirements. However, we are already seeing the impact in positive feedback from clients where we are using (or proposing to use) new technology within the audit.


[KPMG20] KPMG AU (2020). Innovation Trends 2020. Retrieved from:

SOC 2 assurance becomes critical for cloud & IT service providers

Cloud and IT service providers that want to prove their performance in critical, non-functional areas such as Security, Availability, Processing Integrity and Confidentiality can leverage the SOC 2 Framework. This US/Canadian framework is becoming the de facto international standard for providing assurance to client organizations. This article describes the SOC 2 components and benefits for service providers and user organizations, as well as the lessons learned when implementing and migrating to this framework.

In the extended online version, you can find additional details on the differences between ISO certification and SOC 2 assurance, how to request or review a SOC 2 report, and how SOC 2 Privacy relates to the GDPR.


Major cyber security incidents, such as the hacks at SolarWinds, Uber, Equifax and Microsoft and the ransomware attacks at Maersk and Maastricht University, have increased the awareness among senior management that there is an urgent need to improve security and availability – internally as well as at their business partners.

In addition to external and internal IT incidents, the Covid-19 pandemic also highlighted organizations' third-party dependencies. Organizations expect their partners to be even better secured and prepared, especially those partners that are providing critical services, such as their cloud and IT service providers. These service providers are not only contractually required to implement and maintain strong controls, but will also increasingly be requested to ensure and demonstrate compliance for the outsourced processes. Providing assurance by service providers to their clients is a means to this end.

Market trends & (IT) assurance needs

Traditionally, service providers have demonstrated their service quality – in ascending order – by:

  • periodic service level reporting;
  • annual testimonies such as an ISO 27001 or even an ISO 20000 certification;
  • assurance over transaction processing through ISAE 3402 assurance.

The latter is related to the financial statement audit, with an emphasis on automated and manual process controls and the supporting General IT Controls on related financial systems. For organizations that rely substantially on cloud and IT service providers, the three options above are insufficient for controlling their outsourced activities. These options cover areas such as cyber security, business continuity, confidentiality or processing integrity only partly or at a high level, and rarely beyond the financial systems or beyond the implementation of controls. For example, the operating effectiveness of security controls across an organization is outside the scope of these options.

In recent years, the growing tendency of migrating to cloud platforms has given an impetus to service providers to broaden their scope for risk management and IT assurance. Public cloud solutions offer opportunities for standardized, scalable, highly available solutions that enable organizations to decrease the cost of control and increase flexibility, especially compared to housing and hosting solutions. All service providers were or are considering how cloud platforms would fit in their IT strategies.

Cloud services are offered in a wide variety: a public or a private cloud, or a hybrid solution. Many "as a Service" providers emerged leveraging cloud solutions. Well-known examples are Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS), but there are also various intermediate solutions. Consequently, with such a large variety on offer and new solutions tending towards the cloud, the IT landscapes of clients are becoming increasingly complex. Current IT landscapes are no longer based on a single hosted solution, but on a myriad of multi-vendor solutions. Service providers, in turn, have also contracted out specific services to other "subservice providers". This increased complexity affects the IT Supply Chain and Vendor Risk Management.

In such a complex IT landscape, it becomes more and more challenging for clients to maintain control over their data, and they will seek additional guarantees from their service providers. A number of universal criteria apply, and the following questions arise:

  • Are my systems and data sufficiently secure from outside and inside threats?
  • Are they available when I need them?
  • Who can access confidential data?
  • Is it processed correctly?
  • How is privacy managed on a global scale?

To respond to these questions, the service provider needs to increase its transparency, which goes beyond whether agreed-on KPIs have been achieved and communicated via a service level report. Some service providers are wary of a higher degree of transparency, as it may lead to follow-up questions and maybe even too much (operational) involvement – as if they were an internal IT department. Another factor that plays a role in the increased scrutiny of service providers is the external pressure of cyber threats, disruptions, specific legislation (e.g. privacy, critical infrastructures) and regulation (e.g. outsourcing in financial services) and specific supervisory requests.

For such situations, an independent third-party statement ("opinion") on whether all control objectives regarding the services provided are achieved is a way of implementing profound transparency. This addresses all key aspects based on which the providers deliver their services, notably: Infrastructure, Software, Hardware, People and Process.

For this purpose, professional associations in several countries have set up initiatives to develop a standard set of controls for the core IT processes. None of these national frameworks achieved international recognition. Almost 25 years ago, the American Institute of Certified Public Accountants (AICPA), along with the Canadian Institute of Chartered Accountants (CPA Canada), developed standard frameworks such as WebTrust and SysTrust. Approximately 10 years ago, the AICPA and CPA Canada introduced the System and Organization Controls (SOC 2) standard for Service Organization Control Reporting based on the Trust Services Categories (see Figure 1). In recent years, this standard has been updated (2017) to align it more closely with the COSO model; moreover, the approach is gaining traction in Europe.


Figure 1. SOC 2 Trust Services categories. [Click on the image for a larger image]

For addressing the abovementioned questions, SOC 2 features the following so-called Trust Services Categories: Security, Availability, Processing Integrity, Confidentiality, and Privacy. These are briefly outlined below.


Security

The weakest link in the "information chain" can be the primary attack surface for malicious parties. Therefore, despite the fragmentation in IT services, organizations need strongly managed security services, which adhere to (inter)national security standards across the entire chain – without any blind spots. During the last revision, the AICPA extended the SOC 2 Framework with Cyber Security as an additional Trust Services Category ([AICP17]) to report on.


Availability

Dependency on multiple service providers can affect an organization's ability to provide services, especially if its requirements are not clearly defined and safeguarded. The scalability of cloud platforms opens a window of opportunity to increase and decrease processing capacity in a very flexible manner. An organization has to specify clear capacity requirements, especially when it comes to peak workloads in data processing. For services where high availability is critical, outages can be costly and sometimes disrupt a major part of business, or even society.

Processing integrity

In case of complex data processing, organizations need to understand the controls that the service provider has in place to safeguard the integrity of data processing. This may include formalized validation controls regarding input, output and throughput. The service provider also needs to ensure the stability and integrity of stored procedures for automated processing as well as the parameters applied.


Confidentiality

For the confidentiality of its data, an organization may seek additional guarantees beyond the contract clauses. Not only the IT management of the outsourcing organization, but also its Board, the Chief Data, Security and Privacy Officers, Internal Audit and even regulators may require transparency about the controls that the service provider has implemented to safeguard data confidentiality. Confidentiality addresses all types of sensitive data – such as financials, mergers and intellectual property – and not specifically personal data (see next).


Privacy

Privacy legislation, such as the European GDPR, requires organizations that are responsible for personal data (as "controllers") to implement and maintain stringent personal data protection, especially where sensitive personal data is concerned. The more technical privacy requirements also extend to their service providers (as "processors"), with whom standardized Data Processing Agreements have to be established and concluded. In certain cases, stakeholders may require an attestation of the implemented privacy controls. See the separate section on the relationship between SOC 2 Privacy and the GDPR.

In each of the abovementioned categories, the relationship, the responsibilities and a joint understanding of the precise services provided need to be clearly defined. The same is true for the control framework, as organizations and service providers predominantly use their own internal, proprietary sets of controls. Failing to align the service and control definitions, or to overcome conflicting interests, can undermine the synergies that outsourcing to cloud/IT service providers is forecast to bring.

The benefits of SOC 2

The SOC 2 framework offers service providers a comprehensive, standardized baseline of controls for the services provided. By using a SOC 2 report, organizations can manage their outsourcing risks and obtain insight into the effectiveness of the controls at their service providers. A SOC 2 report can also cater for the information needs of a broad range of (other) stakeholders.

The latest version of the SOC 2 Framework blends control over technology with control over the service provider as an entity – based on the well-known COSO model for Enterprise Risk Management (ERM). Hence, the control over services is extended to the entire system of internal control from which the services are delivered. This integration of system-level controls with entity-level controls provides a steering mechanism over the key components and aspects that make up the service delivery. The Control Framework not only focuses on the service delivery processes and the generic (IT) management processes, but also on the quality system within which these are embedded. For example, control over required skills, education and training, and control over vendor and client relations and ethics, raise the bar for the execution of the service delivery processes. Control over information flows, Board involvement, risk management and monitoring further strengthen the consistency and quality of service delivery at service providers.

The SOC 2 Control Framework allows the service provider to select the Trust Services Criteria that are of interest to its clients. One or more categories can be selected; however, the common criteria – which entail the COSO principles as well as the core controls over Security – are always included as a mandatory element (see Figure 2).


Figure 2. SOC 2 Trust Services criteria (including common COSO ERM criteria). [Click on the image for a larger image]

In the latest version of the SOC 2 Framework, a less prescriptive approach to controls has been taken in order to cater for a wider array of service organizations, not necessarily limited to IT service providers. Variants of the Framework also make it suitable for logistics processes (SOC 2 for Supply Chain) and for data integrity and software development.

The Framework sets a baseline through the requirement to fully implement Trust Services Criteria. Applying the Trust Services Criteria in actual situations at a service provider requires professional judgment. Therefore, the organization has the flexibility to shape its control environment based on its own risk assessment – using a set of pointers: so-called “points of focus”. These points of focus represent important characteristics of the criteria and provide support for designing, implementing, and operating the controls. A well-defined control may serve multiple criteria (Control Objectives) in the Framework.

The mandatory element ensures that organizations know that what is presented to them is a complete set of controls to cover the attested Trust Service Criteria.
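Such a completeness check – every criterion in scope served by at least one control – can be illustrated with a minimal sketch, in which the control names and criterion IDs are invented for the example rather than prescribed by the Framework:

```python
# Illustrative control framework: each control may serve several
# Trust Services Criteria (control names and criterion IDs are invented).
controls = {
    "Quarterly user access review": ["CC6.1", "CC6.2"],
    "Change approval workflow": ["CC8.1"],
    "Annual failover test": ["A1.3"],
}

# Criteria the report is meant to attest; "A1.2" has no control yet.
criteria_in_scope = {"CC6.1", "CC6.2", "CC8.1", "A1.2", "A1.3"}

def uncovered_criteria(framework, in_scope):
    """Completeness check: every criterion in scope must be served
    by at least one control in the framework."""
    covered = {c for crits in framework.values() for c in crits}
    return in_scope - covered

gaps = uncovered_criteria(controls, criteria_in_scope)  # {"A1.2"}
```

Any gap found this way means the control framework does not yet cover the attested criteria and must be remediated before the attestation.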

SOC 2 benefits for service providers

As SOC 2 is service-oriented, the standard has a number of clear benefits for cloud and IT service providers:


Transparency towards clients and stakeholders

For service providers, the SOC 2 Framework enables them to provide transparency to their clients and other stakeholders over non-functional but critical IT subjects that are on the management agenda, or even Chefsache, nowadays. Security is a top-3 topic in boardrooms, and control over it a continuous challenge. By being transparent, service providers can demonstrate their control over the processes related to their services. As SOC 2 assurance can be of added value in the pricing of their services, it opens opportunities to distinguish their operations and acquire new clients.

Extend the service portfolio

The increased focus on security and business continuity can be a driver for extending the service portfolio, either by adding more depth to existing services or by providing additional services. For example, state-of-the-art role-based access provisioning implemented by the service provider on its management platform could be an interesting proposition for its clients' (internal) IT environments.

Integrate control over technology with internal control

The most important benefit appears not to be related to the service delivery itself, but to the control thereof. The combination of technology-driven controls and the common criteria as derived from the COSO ERM Framework allows the service provider to take a systematic, integrated approach to its service delivery. The controls over technology, such as the management of firewalls, security baselines and authorizations, are bolstered by additional controls that address risk management. Moreover, these technology controls can be linked to entity-level controls that govern the service provider, such as ensuring the appropriate flow of management information, having skillful and appropriately trained personnel, and introducing tactical monitoring processes.

The Control Framework assists in operationalizing the management’s risk assessment, ensuring an appropriate control environment, and that monitoring activities are in place. In this way, an effective Plan-Do-Check-Act cycle can be established, not only for continuously improving the control over the services but also for improving services.

How SOC 2 serves (user) organizations

For clients of service providers, a SOC 2 report provides valuable insights into the Service Organization's internal control. This has the following benefits:

Extending risk management to critical third parties / service provider

For clients of cloud/IT service providers, SOC 2 assurance allows them to obtain a comprehensive view of risks and controls beyond the boundaries of their own organization. Through a SOC 2 report, the client organization receives significantly more information on how the service provider performs its services. Moreover, the organization will be in a better position to realistically assess this performance against its own internal standards and/or externally required standards.

The latter is increasingly the case, especially in the Financial Services sector, where the Dutch Central Bank (De Nederlandsche Bank, DNB) in its Assessment Framework wants to be informed about how service providers have organized their internal (IT) controls across the entire information supply chain. This goes beyond what is traditionally reported in service level reports or even ISAE 3402 reports.

Vendor / Third-Party Risk Management

DNB requires financial institutions to actively pursue third-party risk management. This type of risk management stipulates requirements on how organizations need to manage and monitor services by third parties. Third-party risk management consists of risks and controls for adhering to laws and regulations, ethical standards, industry standards, data classification – while taking into account the risk impact and risk tolerance. As the client organization is held accountable for all of its information, regardless of where it resides or who is processing it in their name, the client organization is expected to not only assess risk within its own organization but also to assess it beyond its own boundaries and across the entire supply chain.

Research performed by DNB (see [DNB19]) showed that the extent to which internal control at third parties was pursued scored a meagre 1 out of 5. Few organizations actually had a clear view of third-party risks. Following these results and the wider implementation of Solvency II, third-party outsourcing has received significant additional scrutiny; insurance companies are expected to review their outsourcing relationships and report to DNB as the supervisory authority.

Inspiration for continuous improvement

The SOC 2 report also emphasizes, more extensively than is the case in regular ISAE 3402 reports1, the so-called “user entity control considerations” that are to be met by the client organization in order to rely on the report. This report section and the description of the services system may also serve as an inspiration and guideline as to how the organization could improve its own control environment and specific (boundary) controls. The SOC 2 standard provides a comprehensive, but also conclusive set of criteria to address in order to have an enterprise-wide span of risk management and control. Organizations may vary in size, scope and complexity and consequently will choose different controls; however, the principles remain the same and will also provide the client organizations with opportunities for continuous improvement.

Standardization and comparability

The SOC 2 methodology is prescriptive: all criteria of a selected Trust Services Category need to be addressed, including the common (ERM) criteria. This ensures that service providers cannot cherry-pick which well-performing controls are described in their system description. Although the service provider is free to design its own controls, the obligation to include all criteria facilitates the completeness of the information provided on its internal controls. It also allows for assessing the service provider's performance.

The common goal: building trust

Management and scientific publications refer to "trust" as a crucial pillar for any transactional relationship to work, and this is no different in the relationship between a service provider and its clients. SOC 2 reporting provides the opportunity to inspire trust, as the relationship moves from having faith in a service provider based on operational service level reporting to a much more standardized and informed way of placing reliance on the service provider – especially when critical services are outsourced.

SOC 2 reporting will provide better understanding and transparency, allowing both parties to deepen their relationship and increase the predictability of activities in the relationship by controlling alignment and standardization. Other side effects can be the lowering of the transaction costs of outsourcing and achieving control over the entire supply chain, which can be supported by automation. The use of sophisticated (GRC) tooling is indispensable in managing security and availability in hybrid IT landscapes.

Finally, as service providers become more mature with respect to security and controls, user organizations become more skilled in defining the scope and aligning the internal and the provider's controls when requesting SOC 2 assurance or reviewing the SOC 2 report. See also the separate section on "What to request in and how to review a SOC 2 report".

Implementing SOC 2

For organizations that consider implementing SOC 2 assurance, there are a number of considerations.

Enterprise-wide impact

SOC 2 is definitely not adopted overnight; it will take considerably more effort than achieving ISO 27001 certification (see also the separate section on SOC 2 assurance vs. ISO 27001 certification). It involves aligning and assessing the entire system of internal control, and requires a structured, control-based approach at every level, from operational service delivery all the way up to the managerial level, where points of focus such as "tone at the top", "board independence" and "skill diversity" need to be addressed. Without a well-structured, entity-wide approach, implementing a SOC 2 Control Framework that satisfies the ERM-type common criteria will hardly be possible.

Consider your maturity

The nature of the SOC 2 Framework more or less requires the service provider to function as an integrated unity. To implement such a framework, we recommend that the organization has experience and at least a basic maturity in internal (IT) control. The SOC 2 Framework is extensive and will only lead to benefits if controls can demonstrably be complied with in design and implementation (Type I) and in operating effectiveness (Type II). Therefore, staff awareness of, and experience in, executing and documenting controls is essential for success.

Performing an internal SOC 2 self-assessment and/or an external gap analysis can show not only your control readiness, but also your organization's maturity to embark on a formal SOC 2 attestation.

Scope is key

To design an effective system of internal control that covers the service delivery context, defining the services in scope is a key element. Once the provider has clearly defined its service commitments to its clients (based on contracts and SLAs), it can derive the "system requirements". These requirements include the frequency of, and procedures for, performing the internal controls that satisfy the criteria in scope. The controls can then be designed based on these requirements along the axes of People, Process, Infrastructure, Hardware, Software and Data.
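As a minimal sketch of this derivation, in which the commitment, the requirements and the control wording are invented for illustration, a commitment from the SLA can be traced to requirements along the component axes and then to testable control statements:

```python
# A service commitment from the SLA (illustrative values).
commitment = {"service": "Managed hosting", "availability_sla": "99.9%"}

# System requirements derived from that commitment, organized along
# the component axes used in the SOC 2 system description.
system_requirements = {
    "Infrastructure": "Redundant data centers with automatic failover",
    "Software": "Monitoring that alerts on outages within 5 minutes",
    "People": "24/7 on-call rotation for operations staff",
    "Process": "Monthly failover test, documented and reviewed",
    "Data": "Daily backups with quarterly restore tests",
}

def derive_controls(requirements):
    """Turn each requirement into a testable control statement
    (wording is illustrative)."""
    return [f"{axis} control: {req}" for axis, req in requirements.items()]

control_set = derive_controls(system_requirements)  # five control statements
```

The point of the sketch is the traceability: each control can be traced back through a requirement to the commitment it safeguards, which is exactly what the auditor will test.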

Redundancy with other frameworks

When implementing the SOC 2 Framework, a service provider may find redundancy with standards already applied, such as ISAE 3402. Both frameworks address General IT Controls on the processing of financial data; the overlap in this area can be as high as 95 percent. However, the service provider should carefully consider whether adopting SOC 2 could coincide with abandoning ISAE 3402 over time. The standards address different perspectives: ISAE 3402 addresses completeness and accuracy of financial reporting, while SOC 2 addresses internal control over the services provided. The scope and purpose may differ while the controls are similar. Even so, the audit object and the objective of the assurance reports are not (entirely) the same, and the risk perception of control deficiencies in a financial ISAE 3402 context or in a SOC 2 IT control context also varies.

Depending on the precise scope and assurance needs of user organizations, a SOC 2 report – with additional criteria (see Figure 2) – could eventually also succeed an ISAE 3402 report, preferably with only a single year overlap.

Phased approach

As the SOC 2 Framework allows for the optional adoption of the different criteria within a selected Trust Services Category, we recommend a phased approach when implementing SOC 2. The Common Criteria alone, mandatory in any SOC 2 report, involve 9 criteria classifications, which in total contain 33 criteria to be addressed and a total of 197 points of focus (directives for control design). These Common Criteria are usually associated with the Security Trust Services Category. Any additionally selected Trust Services Category will result in an even higher number of criteria to be addressed and audited. However, not all points of focus need to be taken into account – only the relevant ones.

One can imagine that designing and implementing a framework that addresses all criteria not only requires significant effort and time; it also makes prioritization almost inevitable – if needed, the inclusion of additional Trust Services Categories and criteria can be delayed.

Managing expectations

Besides the lead time for implementation, it is important to manage the expectations of client organizations. Laying the foundations for an effective SOC 2 Framework is a team project, even when clients' patience for the implementation is limited. Even with an orchestrated effort, it may take at least 6 to 9 months before the full set of controls has been implemented. If clients require results earlier, an intermediate step could be worthwhile, such as providing assurance over a small set of controls under the 3000A Directive / ISAE 3000 Standard.

Strong 2nd line

The extent of the effort required from the service provider to achieve a successful SOC 2 Framework deployment makes a strong 2nd-line function – one that can monitor, facilitate and report on control performance – almost indispensable. Even in mature organizations, such a 2nd line is critical to ensure that all tactical risk management, compliance and control processes are executed. All levels need to be involved in such an enterprise-wide system of internal control.

Tooling for efficiency and (meta) control

Service providers can deploy tooling in support of managing their system of internal controls. Tools can contribute to detailing the responsibilities, linking the SOC 2 Control Framework to services and processes, planning and monitoring certain tasks, and to documenting controls testing. Furthermore, a tool facilitates the aggregation and reporting to management and supports the 2nd line functions in their roles.

Strong Governance, Risk & Compliance (GRC) tooling is offered by multiple software vendors, several of which also integrate with Service Desk activities (see [Lamb17]). The advantage of this integration is that it prevents duplicated effort and can act as a “single source of truth” with real-time information on the state of controls and control performance.

An alternative approach

Assurance on internal controls over cloud services by means of a SOC 2 assurance report has clear benefits that ultimately facilitate the focus on the quality of services. Applying the Trust Services Criteria can have a lasting impact and provide service providers with the capability to prove their reliability as a business partner, while achieving internal harmonization of processes and control information.

However, full implementation of the SOC 2 Framework is challenging due to its standard form and size. Not all organizations have the capacity or financial strength to afford this type of assurance over such a control framework. In these circumstances, service providers may choose to adopt specific criteria to incorporate in their own custom control frameworks which can be audited according to the 3000A Standard/Directive. The downside is the lack of completeness and comparability.

Service providers that cannot afford, or face no strong demand for, IT assurance have long depended on various stand-alone ISO certifications, usually in domains such as security (ISO 27001 and 27002), IT management (ISO 20000) or cloud security (ISO 27017). For those service providers, the CSPCert initiative of the EU Agency for Cybersecurity (ENISA) can be interesting (see [ENIS20]). CSPCert will introduce a new cybersecurity baseline certification for cloud security, based on six existing cloud and security standards, which can be attested at several certification levels. This new cloud certification may have less focus on internal controls and provide less assurance than SOC 2 assurance (especially SOC 2 for Cybersecurity), but can provide the opportunity and flexibility to obtain comfort about cloud security for a broad target group.


With the increasing complexity of IT landscapes and outsourcing relations, there is a growing need for assurance over cloud services, especially concerning critical elements such as Security, Availability and Confidentiality. ISO certification is insufficient to provide organizations with reasonable assurance. The SOC 2 standard, introduced in North America, satisfies this need in a standardized manner. Just as with SAS 70 in the past, we expect that this US/Canadian assurance standard will eventually become a global standard.

When applying the SOC 2 Control Framework, a structured implementation scope and strategy is crucial. We recommend that service providers identify and harmonize the assurance needs of user organizations, in order to avoid the obligation to provide overlapping assurance reports (ISAE 3000/3402 & SOC 2) for multiple years, as well as to avoid ending up with a qualified opinion. We expect that cloud and IT service providers that offer critical services and/or operate in regulated sectors can anticipate a SOC 2 future.

What to request in and how to review a SOC 2 report?

Obtaining a SOC 2 report is not a box-ticking exercise; it needs to be carefully aligned with, and reviewed against, your assurance needs. If you are in the driver's seat to have your cloud service provider deliver a SOC 2 report, you need to consider more than just the Trust Services Categories that need to be included, as discussed in this article. If you are the recipient of a SOC 2 report, you can specifically focus on:

  • Reporting period: The reporting period should fit your fiscal year. However, if there is a gap between your fiscal year and the reporting period, for instance the 4th quarter of a calendar year, you probably need to obtain information on the relevant controls for the missing 1 to 3 months without assurance. In most instances, a “bridge letter” provided by the service provider will suffice. However, this bridge letter is a self-declaration and sometimes misses the critical December month with its many year-end changes. Although additional independent testing is a less viable option, this approach may be deemed necessary in case of a qualified opinion or major migrations outside the SOC 2 reporting period.
  • Scope: the scope of SOC 2 reports can include all, or a specific set, of the generic services a cloud service provider offers, even though you may not have contracted services such as data archiving. Likewise, you may use specific services, such as SOC/SIEM monitoring, or locations/environments that are actually outside the SOC 2 report.
  • Other service providers: when your service provider has contracted other (sub)service providers for its offering, which in turn may also have engaged other parties, you need to scrutinize whether the SOC 2 assurance includes the most critical parties (“inclusive” method) or obtain assurance reports of those parties as well (in case of the “carve-out” method). In both instances, the report should inform the reader how all parties cooperate in the joint service offering. And be aware (see also the section with the SOC 2 – ISO 27001 comparison): SOC 2 assurance cannot rely on an ISO 27001 certification for sub-service providers in scope!
  • System description: not the auditor but the service provider itself is responsible for the description of its organization, governance, processes, risk management, monitoring, etc. However, it is not meant to be a marketing brochure with hyperbolic statements as the auditor will have to ascertain that this situation is fairly described and actually in place.
  • Key changes: as part of the system description, it should be crystal clear whether or not changes to the services, environment or control framework have taken place during the audit period.
  • Your own controls: the SOC 2 report usually contains a listing of so-called “complementary user entity control considerations”, in other words, what your organization is expected to have in place to properly live up to your end of the sourcing agreement and be eligible to rely on the SOC 2 results. A weakly organized or loosely controlled client organization cannot expect the cloud service provider to compensate for this. So, if your organization is not up to par, your own control weaknesses may be worse than any exceptions in a SOC 2 report. Moreover, it is important to match the SOC 2 controls to these “control considerations” and your own control framework to validate that they are aligned.
  • Opinion and exceptions: of course, most readers immediately search for a (positive) opinion and any exceptions. This selective perspective may prove its value for the speed-reading C-level executive, but may not suffice for assessing the impact of any exceptions on your own systems and controls. In some cases, the auditor may report compensating controls which, when in place, may reduce your risks sufficiently.
  • Audit firm & auditor: you may want to verify whether the auditor is sufficiently trained and experienced in cloud environments and SOC 2 assurance, and adheres to professional standards and guidelines.
  • Management response: the service provider may comment on any exceptions, not to downplay them, but to provide more context and remediation activities. In addition, this separate section in the report may be used for indicating other upcoming initiatives. These are “forward-looking statements” and not scrutinized or validated by the auditor, so be cautious that this text is not overly positive wishful thinking!

Based on this listing, it would be beneficial to have the SOC 2 report reviewed by the responsible (senior) management and the Service Level Manager, as well as by your internal and external IT auditors.
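The reporting-period check from the listing above lends itself to a small date calculation. A minimal sketch, assuming a calendar fiscal year and a hypothetical October-to-September SOC 2 reporting period:

```python
from datetime import date

def coverage_gap(fiscal_start, fiscal_end, report_start, report_end):
    """Number of fiscal-year days not covered by the SOC 2 reporting
    period, i.e. the span a bridge letter would have to cover."""
    covered_start = max(fiscal_start, report_start)
    covered_end = min(fiscal_end, report_end)
    covered = max(0, (covered_end - covered_start).days + 1)
    total = (fiscal_end - fiscal_start).days + 1
    return total - covered

# Hypothetical example: calendar fiscal year 2023 vs. a reporting period
# running from 1 October 2022 to 30 September 2023.
gap = coverage_gap(date(2023, 1, 1), date(2023, 12, 31),
                   date(2022, 10, 1), date(2023, 9, 30))
print(gap)  # 92: October through December, including the critical December month
```

In this hypothetical setup, 92 days of the fiscal year fall outside the assurance period and would have to be covered by a bridge letter or additional testing.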

How do SOC 2 assurance vs. ISO 27001 certification align or differ?

The similarities and differences between IT assurance and ISO certification are still a cause for confusion. Without devoting an entire article to this comparison, we have summarized the main aspects of both in the table below, using Security and ISO 27001 as the most closely aligned pair.


The above evaluation matrix is based on [Koor13], which elaborates on the similarities and differences between IT assurance and ISO certification.

In addition to the above comparison, cloud service providers may want to use SOC 2 assurance alongside ISO 27001 certification to fulfil the requirements of all their clients and prospects. Cloud service providers also have the option to be certified against the ISO 27017 or ISO 27018 standards, addenda to ISO 27001.

The ISO 27017:2015 is the “Code of Practice for Information Security Controls” for cloud service providers, the ISO 27018:2014 is the “Code of Practice for protection of Personally Identifiable Information (PII) in public clouds acting as PII processors”.

Similarly, the ISO 27701 standard (“Security techniques — Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management”) can be used on top of an ISO 27001 certification for all other environments. The ISO 27701:2019 outlines the Privacy Information Management System (PIMS), which includes controller- and processor-specific controls.

See further the text box on SOC 2 Privacy for details on GDPR assurance.

Can we leverage SOC 2 Privacy for GDPR assurance?

One of the Trust Services Categories focuses on privacy; however, would it suffice for a cloud service provider to prove its GDPR compliance? In short, the answer is “No” for multiple reasons:

  • The SOC 2 Privacy category is not (fully) aligned to the GDPR, which is understandable as it was developed some 10 years ago and originated in North America. It also needs to be universally applicable, not specifically tied to any specific legislation. Just like the ISO 27701 (see text box on ISO certification), it contains an appendix in which the privacy controls are mapped against the GDPR principles.
  • So far, neither any of the national Data Protection Authorities nor the European Data Protection Board has approved any form of GDPR certification or assurance as stipulated in Articles 42 and 43 of the GDPR. The German Europrise privacy certification is potentially the first eligible to obtain formal European approval.
  • Technically speaking, GDPR compliance is by definition impossible to attest to, as this European regulation is based on open norms. A SOC 2 Privacy engagement can provide assurance on the privacy controls as designed, implemented and operating, but can never state that a process or entire organization is compliant with legislation.

However, you can expand the scope of a SOC 2 Privacy report with an “Additional Subject Matter”, so that this so-called SOC 2+ report also covers the delta of missing GDPR-based privacy controls.

A related misunderstanding is that the Privacy and Confidentiality categories overlap: Confidentiality is distinguished from Privacy in that the latter applies only to personal information, whereas Confidentiality applies to various types of sensitive information. Confidential information may include personal information as well as other information, such as trade secrets and intellectual property. While both categories cover the information life cycle of collection, use, retention, disclosure, and disposal, the Privacy category is significantly more extensive and addresses only personal data. Conversely, the Confidentiality category addresses the cloud service provider's ability to protect information designated as confidential, which could include personal data.


  1. ISAE 3402 reports are also referred to as SOC 1 reports. For the sake of completeness, SOC 3 refers to the web seal that may be displayed for marketing purposes on the service provider's website, only when an unqualified SOC 2 report has been issued.


[AICP17] AICPA (2017). SOC 2 for Cybersecurity. Retrieved from:

[Beek13] Beek, J.J. van & Gils, H. van (2012). Nieuwe ontwikkelingen IT-gerelateerde Service Organisation Control-rapportages, SOC 2 en SOC 3. Compact 2013/2. Retrieved from: (in Dutch)

[DNB19] DNB (2019). Beoordelingskader Informatiebeveiliging. Retrieved from: (in Dutch)

[ENIS20] ENISA (2020, December). European Cybersecurity Certification Scheme for Cloud Services. Retrieved on 22 March 2021 from:

[Koor13] Koorn, R.F. & Stoof, S. (2013). IT-assurance vs. IT-certificering: wat biedt mij (voldoende) zekerheid? Compact 2013/2. Retrieved from: (in Dutch)

[Lamb17] Lamberiks, G.J.L., Wit, I.S. de & Wouterse, S.J. (2017). Trending topics in GRC tooling. Compact 2017/3. Retrieved from:

Risk/control framework related to the security of mobile business applications

There is an increasing number of business applications on corporate mobile phones these days. There are threats to consider and risks involved when releasing corporate financial or other types of sensitive information on the go. Enterprises are responsible for mobile application and device security in their day-to-day business, and in case of a loss of sensitive organizational data, management is held accountable. This article explores the risks surrounding the use of mobile business applications by employees and the measures organizations can take in order to become and stay in control.


Organizations across the globe are developing or using mobile applications in order to increase employee productivity. Mobile technology fulfils an increasingly important role within business processes. Users have been familiar with mobile access and apps for a while to retrieve real-time information anywhere at any time. It is therefore no surprise that more organizations are providing access to financial and customer information via mobile devices.

As a relatively new way of working, this may pose additional challenges (e.g. lost/stolen mobile devices, mobile security) and consequently result in security risks. In addition, existing challenges such as malware and limited employee security awareness remain applicable and cannot be neglected. Therefore, a strong focus on mobile application security and on the accountability of management for the underlying risks is required. When identified risks are not addressed sufficiently through detailed risk assessments and the implementation of a coherent set of security measures, sensitive financial data may become compromised. Using mobile applications in organizations raises the question whether additional measures – in comparison to regular online business applications – are needed to cover the security risks related to mobile business applications.

Recognizing these risks, organizations will become more aware of their responsibility for mobile application and device security. A risk/control framework will help organizations to guide their approach to control all risks involved and to be able to continuously test and update the security posture. This article focuses on securing mobile business solutions by determining which risks exist and how these could be controlled.

Relevance of securing mobile business applications

Mobile devices have evolved from merely providing access to enterprise e-mail to removing logistical process delays and tracking business transactions at any time through the use of mobile business applications. For example, these applications provide the organization the ability to review, approve or reject purchase requisitions and supplier invoices, continuously follow its order backlog and keep track of its Order-to-Cash and Purchase-to-Pay process performance trends in real time. Mobile applications are rapidly becoming the primary – and maybe even the single – communication channel for customers and employees, and the applications are therefore expected to be secure and to protect privacy ([Ham17]). Despite numerous positive and useful features and benefits, the use of mobile applications in organizations also poses certain threats. In the context of this article, we first identify the (f)actors involved when using mobile business applications in order to identify potential threats. The actors identified are shown in Figure 1: the user, the mobile device, the mobile application, the internet, the firewall, the corporate network, the webserver, and the database.


Figure 1. Actors mobile business applications in a highly simplified figure. [Click on the image for a larger image]

The main threats taking the involved actors into account are discussed next.

Unprotected applications or networks

When network gateways are placed at easily accessible locations and contain unpatched vulnerabilities, hackers may be able to take control of the network gateways and intercept communications ([Enis17]). This could lead to severe data privacy violations under the applicable GDPR legislation. Incidents may occur in which sensitive organizational data is lost, which would have a significant negative impact on an organization's reputation, leading to financial damage. The implementation of a bring your own device (BYOD) policy acts as a complicating factor, as personal devices have diverse interfering applications installed.

Mobile security unawareness

Employees are considered to be the weakest link in information security ([Line07]). When there is limited awareness among employees regarding the security of mobile devices and the classification of data stored on devices and in applications, this can bring about security incidents – for example, when sensitive organizational data is sent to personal non-managed devices by employees trying to meet a deadline. Most employees view security as a hindrance to their productivity. If a way is found to work around security measures, there is a high chance that it will be used.

(Un)intended employee behavior

Employee behavior can have a substantial detrimental impact on the security of mobile business applications. When employees are insufficiently aware of the risks involved, their mobile activities can undermine the technical controls that organizations have in place to protect their data. Incidents to consider include malicious employees whose employment has been terminated but who still have access to corporate applications on their mobile devices, or cases where the mobile device is lost or stolen ([ISAC10]).

Access by external parties

External parties who have access to the network and systems of the organization are difficult to manage and monitor. Third parties are a known source of significant security breaches and are a target for hackers (as a stepping stone). This introduces more vulnerabilities and an increased risk to the corporate environment. These external parties usually require highly privileged access to infrastructure and systems and therefore have a high impact if those privileges are misused or compromised.

Risks related to the security of mobile business applications

The growing use of mobile devices within organizations has increased the threat level of IT security breaches, misuse of sensitive data and – as a result – reputational damage. Therefore, it is imperative that mobile business applications are subject to periodic audits performed by IT auditors.

In order to perform such an IT audit effectively, it is important to start with a risk assessment and to leverage a control framework designated for mobile business applications. The scope should only include the most relevant actors whose risks organizations are able to influence and mitigate. This leads to the following audit objects, which are in scope for the designed risk/control framework:

  • User
  • Mobile device
  • Mobile (business) application
  • Corporate network

Considering the objects in scope and the threats identified in the previous section, a risk assessment is performed in which the risks deemed relevant to mobile business applications are evaluated. These risks should be controlled by organizations when using mobile business applications as part of their operational processes, and will serve as a basis for identifying related mitigating controls. The audit objects, the risks and the controls are combined in a risk/control framework, which is included in this article.

Risk/control framework mobile business applications

The risk/control framework has been established by analyzing theory, identifying relevant objects and performing a risk assessment. The interlinkage of the appropriate controls, the risks, the control objectives and the relevant audit objects is depicted in Figure 2. The control objectives are desired conditions for an audit object which, if achieved, minimize the potential that the identified risk will occur. Hence, the control objectives are linked to the identified audit objects and are subject to the risks that threaten the effectiveness of the control objectives. The controls should mitigate the security risks to which mobile business applications are exposed.
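The interlinkage described above can be expressed as a simple data model. A minimal sketch; the audit objects come from the list earlier in this article, but the risk and control entries below are illustrative placeholders, not the actual rows of Table 1:

```python
from dataclasses import dataclass, field

@dataclass
class ControlObjective:
    """A desired condition for an audit object which, if achieved,
    minimizes the potential that the identified risk will occur."""
    audit_object: str                 # e.g. "Mobile device"
    risk: str                         # the risk threatening this objective
    controls: list = field(default_factory=list)  # mitigating controls

# Illustrative entries only; the validated framework is given in Table 1.
framework = [
    ControlObjective(
        audit_object="Mobile device",
        risk="Sensitive data on a lost or stolen device is disclosed",
        controls=["Enforce full-device encryption", "Enable remote wipe via MDM"],
    ),
    ControlObjective(
        audit_object="User",
        risk="Employees unknowingly circumvent security measures",
        controls=["Periodic security awareness training"],
    ),
]

def controls_for(audit_object):
    """All mitigating controls in the framework for one audit object."""
    return [ctrl for obj in framework if obj.audit_object == audit_object
            for ctrl in obj.controls]

print(controls_for("Mobile device"))
```

Structuring the framework this way keeps the link from audit object to risk to control explicit, which is exactly the traceability an IT auditor needs when testing the controls.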


Figure 2. Establishing the risk/control framework. [Click on the image for a larger image]

Resulting from the process described above for setting up a risk/control framework, Table 1 shows the risk/control framework that has been established.


Table 1. Risk/control framework mobile business applications. [Click on the image for a larger image]


Table 1 (continued). [Click on the image for a larger image]

Control objectives of the risk/control framework explained

The risk/control framework can be used by organizations in order to continuously test and update technical measures related to the usage of mobile business applications in order to safeguard the organization and its sensitive data.

Controls related to the technical setup of the mobile business application and a secure application life cycle management, control objective 1, are vital and need to be taken into account. These controls focus on tracking the risk of an application as it moves through the development lifecycle. Binary hardening, input validation and encryption complement a secure development environment to prevent vulnerabilities and unintended leakage of data from unprotected applications. These controls are even more relevant for mobile business applications as mobile devices are taken everywhere and most likely contain sensitive business data.

Securing the mobile business application by means of sufficient authentication and authorization configuration, control objectives 2, 3 and 4, is relevant for mobile business applications, but is highly dependent on how the IT infrastructure and the mobile environment have been set up. Even though part of the authentication logic may be performed by the back-end server, it is important to consider this an integral part of the mobile architecture. Notably, authentication and authorization problems are prevalent security vulnerabilities and rank second highest in the OWASP Top 10 ([OWAS17]); they are therefore included in the established risk/control framework.

In order to prevent unauthorized access and unintended leakage of corporate data, it is also vital to incorporate the right technical security measures to protect the mobile device (e.g. anti-virus, patching of vulnerabilities, separation of corporate and private environments, and data sharing). The controls related to these technical measures, control objectives 6, 7 and 10, are a very important part of the risk/control framework: they trigger organizations to rethink their mobile architecture strategy, to be continuously aware of potential vulnerabilities, and to implement and monitor the security measures.

In order to steer organizations in their strategy to control their mobile environment, controls related to MDM solutions are also incorporated. An MDM solution (control objective 8) helps to secure, manage and monitor corporate-owned devices that access sensitive corporate data.

Cryptographic controls on mobile devices are almost inevitable. As mobile devices may contain sensitive corporate data, a secure manner of protecting data using cryptographic techniques is required (control objective 9). Cryptographic controls provide added value to organizations in protecting their mobile data.
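As an illustration of control objective 9, the sketch below derives a storage key from a device secret with PBKDF2 and protects a stored record's integrity with an HMAC, using only the Python standard library. This is a minimal illustration of the technique, not a production mobile-encryption scheme; the passphrase and record are hypothetical:

```python
import hashlib
import hmac
import secrets

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # PBKDF2 with a high iteration count slows down offline guessing
    # if an attacker extracts protected data from a lost device.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

def protect(record: bytes, key: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so tampering with stored data is detectable.
    return record + hmac.new(key, record, "sha256").digest()

def verify(blob: bytes, key: bytes) -> bytes:
    record, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, record, "sha256").digest()):
        raise ValueError("integrity check failed")
    return record

salt = secrets.token_bytes(16)                   # stored alongside the data
key = derive_key(b"device-unlock-secret", salt)  # hypothetical secret
blob = protect(b'{"invoice": 1042}', key)        # hypothetical record
print(verify(blob, key))  # b'{"invoice": 1042}'
```

A production scheme would add authenticated encryption for confidentiality (not just integrity) and hardware-backed key storage, but the control intent is the same: protect data at rest on the device and detect tampering.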

Mobile devices should be safeguarded from external threats. Therefore, when connecting to the corporate network, stringent security standards should be applied to protect corporate data (control objectives 5 and 11). Controls related to these security standards guide the organization in continuously monitoring and testing the security of the connectivity between mobile devices and the corporate network.

A well-known saying in the world of IT security is that people are the weakest link in cyber security. We would like to reframe that: people are the most important link in the cyber security chain. The risk/control framework incorporates “soft controls” – control objectives 12 and 13 – related to the users (the people aspect) of mobile devices.

In conclusion, the risk/control framework consists of several sections that relate to:

  • secure development
  • technical setup
  • authentication & authorizations
  • cryptography
  • session handling
  • using MDM
  • data security
  • connections with corporate network
  • user awareness and
  • device / asset management.

Relevance of implementing a mobile security risk/control framework

In today’s world full of IT business transformations, it is important to implement controls for securing corporate data on mobile devices. Mobile business applications will continue to gather momentum in the coming years, even though the technical security level achieved will remain a concern. The risk/control framework as developed can be used in practice as a reference framework. As the mobile environment is rapidly evolving, there is a need to continuously test the security level and control the risks identified. Although there is an abundance of literature detailing specific technical security measures to be configured or implemented, these measures are usually not incorporated in (existing) risk/control frameworks. This framework keeps abreast of new technology trends.

The aim of this article is to trigger organizations to think about these controls, how they can adjust this framework to make it applicable to their (mobile) IT environment, and how to incorporate these controls in their existing control frameworks, which are most likely focused on regular ERP systems or IT General Controls. As businesses and IT information chains grow more sophisticated and complex, mobile applications are becoming more prevalent at numerous large organizations.

Looking forward

As the Dutch National Cyber Security Centre ([NCSC18]) states in its mobile applications research, organizations are using these applications more and more in their daily business activities. Meanwhile, the differences between the back ends of mobile phones, tablets, laptops and desktops are diminishing. In the near future, the IT auditor will need to focus on end-to-end device and mobile security, which is not yet fully integrated in risk/control or audit frameworks, nor taught in post-master IT auditing programs.


This article is aimed at showing the need for, and providing, a risk/control framework. The risk/control framework we developed and validated can be used by organizations to continuously test and update technical measures in and around mobile business applications. This will safeguard the organization and its sensitive data. There are a number of significant controls an organization should have in place in order to securely use mobile business applications. This article demonstrates that several mobile application areas need to be addressed using security controls: secure development, technical setup, authentication & authorizations, cryptography, session handling, using MDM, data security, corporate network connectivity, user awareness, and device/asset management. We do emphasize that how organizations control their mobile environment also depends on several factors. It is critical to perform an extensive risk assessment that addresses both business and technical aspects to identify the IT maturity of the organization, its risk appetite, and how the mobile infrastructure has been designed and configured. The risk/control framework can be adjusted according to this assessment, by verifying which controls are applicable to the organization.


[Enis17] Enisa (2017). Privacy and data protection in mobile applications. Retrieved on June 22, 2018, from:

[Ham17] Ham, D. van, Iterson, P. van & Galen, R. van (2017). Mobile Landscape Security, Addressing security and privacy challenges in your mobile landscape. KPMG.

[ISAC10] ISACA (2010). Securing Mobile Devices.

[Line07] Lineberry, S. (2007). The Human Element: The Weakest Link in Information Security. Journal of Accountancy, 24(5), 44-46, 49.

[NCSC18] National Cyber Security Center (NCSC-NL) (2018). IT Security Guidelines for Mobile Apps. Den Haag.

[OWAS17] OWASP (2017). OWASP Top Ten. Retrieved on June 2018, from:

Digital transformation in audit and assurance

The accountancy profession and the financial statement audit require increasing digitalization of the audit process. Society, audit clients and employees expect this as well. Digital transformation of the audit, however, requires more than handy tooling and smart people. This article discusses the necessary ingredients and success factors.


The readers of The Very Hungry Caterpillar ([Carl69]) already know: transformation can only succeed with the right ingredients and perseverance. After all, the caterpillar eats for a week and then undergoes a transformation in its cocoon into a butterfly, which takes more than two weeks. Quickly eating lots of tempting snacks to accelerate the transformation only gives the caterpillar a stomach ache.

This article is about the ingredients of digital transformation and innovation within audit. Is this article aimed only at (IT) auditors? Certainly not. For directors and supervisors, too, there are important reasons to better understand the digital transformation of the accountant and of other stakeholders of their organization.

For the most future-proof relationship, supervisors, directors and other decision-makers would do well to consider which "Formula 1 team" (see the text box "Help! My accountant is (not) transforming!") they commit to, now and in the future.

Help! My accountant is (not) transforming!

What does it take to win in Formula 1? Not just the best driver, but also the fastest car and the best-trained pit crew. How do you attract the best driver? By having the fastest car and the best pit crew. How do you attract the best pit crew? By having the best driver and the fastest car. Et cetera. This is a classic catch-22 ([Hell61]).

In accountancy, the same phenomenon occurs in the relationship between the most interesting audit clients, the greatest talent to serve them, and the best technology to support that talent. In the current market dynamics of mandatory firm rotation for public interest entities (OOBs) and the scarcity of talent (see the text box "Digital native: with or without the red pen?"), investing in technology to support talent may well be of vital importance for accounting firms. Not least, this also concerns offering (technical) training and fostering an innovative culture.

Digital native: with or without the red pen?

Professionals and citizens under forty are increasingly "digital natives": they grew up in the digital age. This definition primarily concerns people, but a growing number of organizations were also "born" in the digital age; without digital platforms, their business models would be unthinkable. The digital age and its digital reality bring new risks and opportunities that also affect non-digital-native organizations, professionals and citizens.

De digital-native afstudeerder die een carrière in accountancy overweegt, heeft in toenemende mate de keus tussen kantoren waar – bij wijze van spreken – nog de ‘rode pen’ wordt gehanteerd voor de controlewerkzaamheden en kantoren waar alle fases van de controle worden ondersteund door digitale oplossingen en men meer tijd besteedt met de auditklant, met het team en aan professionele ontwikkeling. Accountantsorganisaties zullen moeten blijven investeren om aantrekkelijk te blijven voor dit onmisbare talent.

What is digital transformation?

In our view, digital transformation is best explained as deploying digital technologies to change existing business models ('disruption') and to create additional value. The term 'digital transformation' is not a synonym for 'digitizing' existing services, since the latter does not directly create added value. The authors note, however, that in everyday language these terms are often, and wrongly, used interchangeably.

Technology and the future of the audit profession

Do the initiatives of audit firms go beyond mere digitization? Judging by the report of the Dutch professional body of accountants (NBA) ([NBA19a]), the profession is, among other things, working on assurance propositions outside the traditional financial domain (such as software robots and algorithms) and exploring continuous auditing (auditing the auditee 'continuously' rather than periodically for the annual financial statements). Such initiatives aim to add more value and have a strong digital component. This may not yet have been proven to everyone in the market, but it can certainly be labeled 'digital transformation'.

The preliminary report of the Committee on the Future of the Accountancy Sector (CTA) ([CTA19]) shows that many audit firms regard digital technology as an important topic in the overall transformation of the profession: "Digitization, robotization or, more broadly, technological developments are expected to have a far from negligible impact on auditors and their work. With machine learning, artificial intelligence and data analytics, for example, audit procedures could be performed faster, more effectively and with 'fewer hands' [see the box 'The auditor as jack-of-all-trades?']." The CTA drew in part on the aforementioned NBA report.

KPMG research into business transformations ([KPMG16a]) indicates that organizations experience sustainable value creation in only 4 percent of cases. Transforming is hard, and digital transformation is even harder. We have not conducted additional research into this, but in a sector such as accountancy, which (anecdotally) has a reputation for being conservative and risk-averse, digital transformation, whose very purpose is to break the status quo, has a hard time with doing things differently, let alone with doing different things ([NBA19b]). Despite these statistics, audit firms are betting on digital transformation en masse.

The auditor as jack-of-all-trades?

The days when accountants and auditors organized themselves in small partnerships and individually possessed all the competencies to steer an audit from A to Z are long behind us. Yet if the audit engagement is the building of a house, the accountant as carpenter still heads the construction, despite the considerably increased role of (other) specialists such as bricklayers, electricians and plasterers. What is needed is a general contractor who engages the right specialists and directs the carpenters, plasterers and electricians. It is not necessarily self-evident that this should be the traditional accountant in the role of financial-reporting specialist.

In its interim report, the CTA indeed observes that in the future there may be less need for large audit teams consisting of auditors with increasing levels of experience in the profession (master carpenter and apprentice carpenters). According to the CTA, there is a good chance that specialists such as big-data specialists, data analysts, system/robotics auditors, IPE/data-quality specialists and information-security analysts will play a more important role in the audit team, at the expense of the more traditional financial specialist. The question is whether the accountancy curriculum and its attainment targets benefit from more technology in the program, with the carpenter having to wield the bricklayer's trowel at the expense of 'carpentry 2.0', or whether the profession should concentrate on deploying and collaborating with specialists from adjacent disciplines whenever the specific situation at an audit client calls for it.

Ingredients for digital transformation in audit

Every year, KPMG and Harvey Nash jointly survey CIOs on technology leadership ([Harv19]). The 2019 survey shows that 44 percent of the more than 3,600 CIOs questioned expect their organization and business model to undergo a fundamental transformation within the next three years, driven by digital disruption and the increased importance of customer focus. KPMG has previously published on how to succeed in times of disruption ([KPMG16b]) and on the factors that give IT transformations a greater chance of success ([KPMG16a]). The CIO survey shows that these publications are still relevant today. What lessons can we draw from this research for the digital transformation of audit and assurance? What are the right ingredients?

1. Team and talent

The CIO survey shows that the shortage of technological skills (such as data analytics, cyber security and artificial intelligence) is currently at its highest level since 2008. Larger, older organizations have more difficulty attracting and retaining talent than young, smaller ones. Today's talent appears to value innovative, challenging projects more than the relative security of an established brand. Large organizations, which include most audit firms, could attract and possibly retain talent by giving corporate start-ups room to operate (see the section 'The cocoon') and by seeking out (smaller, younger) partners in the market that have proven able to attract professionals with technological skills.

Audit firms will have to learn to attract new digital talent on a structural basis (see the box 'The auditor as jack-of-all-trades?'). Sketching an attractive development perspective is crucial here. To put it bluntly: what does the profession have to offer the digital native who could also go and work at Google?

The new generation of talent needs a different kind of leader. This involves adding new leaders to leadership teams, creating transformation roles at board and executive level, and the right exemplary behavior from leaders in key positions across the organization. Appointing a 'Chief Digital Officer' is important but not enough; when leadership visibly commits and other leaders in the organization model the right behavior, the digital transformation has a far greater chance of success.

2. Continuous innovation

By now it should be clear that digital transformation is only partly about technology itself. As with any other transformation, changes in culture and behavior determine a large part of the success. Giving employees the opportunity to question existing ways of working, and motivating them to share new ideas, fits this picture. Sharing alone is not enough, however: let employees experiment with these ideas in a protected testing ground (a so-called 'sandbox'; see also the section 'The cocoon') and learn from successful as well as less successful or failed experiments.

In a digital transformation, it is important to align roles and responsibilities with the required transformation activities. For audit firms that still largely operate in a traditional billable-hours world and steer primarily on chargeable time, this means a considerable change. In a compliance-driven organization ('mistakes are not allowed'), the challenge is to create clearly demarcated environments where mistakes are allowed, while keeping those environments accessible. The art is to capture the moment when, after a demanding audit, your colleague sighs that next year you will do everything differently, and to use that moment to learn as much as possible throughout the organization.

3. Start small, think big

Does digital transformation in audit only exist once auditors can, at any desired moment, inform any external stakeholder about an organization's going-concern risks with a combination of continuous auditing, blockchain and a chatbot, drawing on their data lake and artificial intelligence? (Just to name a few hypes...) The answer to this rhetorical question is, of course, no.

Digital transformation can start much closer to home, simply in daily practice. For example, by making (internal) information available on digital platforms that everyone can access at the right moment. For the auditors among our readers, try the following experiment: how many times a day do you record the same information about the same organization or audit engagement in different places, and how often do you have to retrieve information about this work from different sources? The answer is probably 'more than once', which leads to error-proneness and waste. 'Daily practice' has another element to it: the transformation only truly takes place when technology can be deployed at scale and is actually used. Deployability can be increased by developing tooling whose operation does not depend on scarce specialists. 'Power to the people', in other words! An important bridging role is reserved for those who can successfully apply the new technological tools in existing business processes and who can connect the digital natives with the non-digital natives.

4. Communicate the change agenda loud and clear

Understanding the 'why' of the digital transformation determines its success. Nietzsche already wrote ([Niet89]): "Hat man sein warum? des Lebens, so verträgt man sich fast mit jedem wie?" Simon Sinek knew this 1889 aphorism when establishing his 'golden circle' of 'Why, How, What'. The message of the aphorism is not that the end justifies the means, but that with the goal in mind, one can see the ups, and the inevitable downs, in the context of the whole transition. As a board, sketch the dot on the horizon and explain why reaching it matters. Leaders in market segments, offices and units must translate this concretely for their own groups.

Sharing success stories (the ups) is crucial; the right lessons from the downs are certainly worth communicating too. Finding the right communication channels still requires attention: where people are already flooded with all kinds of e-mails, a different medium will probably work better to catch the attention of busy employees. In our own practice, we started a WhatsApp group that colleagues can join via a QR code. In the group we share important developments in the digital transformation and post calls to action. For us this is currently a distinctive communication channel, but one must keep looking for new ways to stand out.

The cocoon

If the strategic ingredients are in place, how can the digital transformation actually take shape? The sandbox, where experimentation and mistakes are allowed, was discussed above. How can playing in the sandbox turn ideas into value rather than merely produce glorified hobbyism or castles in the air?

The introduction of The Startup Way ([Ries17]) raises this question: if a brilliant idea that would create enormous growth arises anywhere in an organization, does that organization have the processes and instruments to realize this idea with maximum impact? One way to be able to answer this question affirmatively is to incorporate management techniques from the start-up world in so-called corporate start-ups: groups within established organizations that are 'allowed' to behave like start-ups and that focus on innovation.

What makes the corporate start-up different from other initiatives within organizations? According to [Blan10], a start-up distinguishes itself by being a temporary organization designed to search for a repeatable and scalable business model. This contrasts with permanent organizations, which are designed to successfully execute a repeatable and scalable business model. The difference is fundamental, and failing to recognize it leads to 'stomachache': once a start-up has developed and tested a scalable new business model, the question arises whether the start-up should industrialize the business model further and become a 'company', or whether it should hand the successfully tested business model over to the line organization and set off to discover the next new scalable business model. The start-up that initially radiated great innovative power will lose it if it degenerates into a permanent line organization with execution responsibility. Take the butterfly out of the cocoon, rather than fitting the cocoon with legs and wings.


Figure 1. Based on the Lean Startup ideas ([Ries11], [Ries17]).

How can a start-up discover new business models for the digital audit transformation? For example, by following the build-measure-learn feedback loop ([Ries11]). It starts with an idea. No idea is proven brilliant in advance. For an idea to prove itself, it must be formulated as a testable, falsifiable hypothesis or assumption, in principle around the question of whether value is created for an organization being audited. Suppose a team comes up with the idea of providing organizations with additional information on the progress of an audit via a web-based platform. A testable assumption could then be: 'Audit clients value additional insight into the progress of the audit.' Based on this assumption, a so-called Minimum Viable Product (MVP) is built. Such an MVP must be designed to obtain 'validated learning' from users, i.e. audit clients. This means measuring the effects of the MVP and, based on these measurement data, learning how the idea can be sharpened further. With this, the cycle starts again. The example shows that one does not have to start with large technological investments in a web-based platform with all kinds of functionality. If one learns that the idea or assumption has indeed been falsified, the idea is rejected.
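One build-measure-learn iteration can be sketched in code. The following is a minimal, illustrative model; the feature, the metric (average dashboard visits per pilot client) and the threshold are our own hypothetical choices, not prescriptions from [Ries11]:

```python
# Minimal sketch of one build-measure-learn iteration; the feature,
# metric and threshold are hypothetical examples, not from [Ries11].

def build_mvp():
    # 'Build': the MVP here is just a record of what is exposed to pilot clients.
    return {"feature": "audit-progress dashboard", "pilot_clients": 5}

def measure(mvp, weekly_visits):
    # 'Measure': reduce raw usage data to the one metric the hypothesis names.
    return sum(weekly_visits) / mvp["pilot_clients"]

def learn(avg_visits, threshold=1.0):
    # 'Learn': did the assumption survive this iteration, or is it falsified?
    return "persevere" if avg_visits >= threshold else "pivot"

mvp = build_mvp()
avg = measure(mvp, weekly_visits=[3, 0, 2, 1, 4])  # one figure per pilot client
decision = learn(avg)
print(decision)
```

In practice the 'measure' step would draw on real usage data from the MVP; the point of the sketch is only that each iteration ends in an explicit, falsifiable persevere-or-pivot decision.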

What causes stomachache?

After eating green leaves and fruit, the Very Hungry Caterpillar ([Carl69]) suddenly switches to cake, ice cream and other snacks at the weekend. These do the caterpillar no good and cause a stomachache. Below we describe three ingredients that in practice can lead to digital stomachache.

Technology as an end in itself

Has a new technology become available that seems to be the silver bullet? In that case it is worth taking another sharp look at which problems this technology can really solve and how it creates value. Does the technological innovation perhaps also have drawbacks, or a different impact than initially foreseen? High expectations combined with technological promises that are not (or cannot be) kept are demotivating.

A solution to a problem defined too narrowly

Even when one listens carefully to the user, there is potentially a catch. The problem to be solved may have a much broader underlying cause which, if not addressed, can block the growth of the success. Digital transformation encourages standardization, and business models and processes that are standardized are fertile ground for digitization. When a few standard process steps are digitized, this can suddenly expose bottlenecks elsewhere in the process. If these activities or obstacles lend themselves less well, or not at all, to digitization, it is important to address them adequately in another way.

Highly client-specific, barely repeatable and scalable

Every practice has engagements where the application of technology in the audit has been particularly successful. Successes achieved on large engagements with correspondingly large technology budgets should be thoroughly analyzed for repeatability and scalability. Some solutions are difficult to translate to a broader target group where the context, organization size, process standardization, data quality and culture differ.

Systems-based versus substantive

The NBA report ([NBA19a]) states that one consequence of advancing technology is that auditors will rely less on the controls and procedures of their audit clients and will instead be able to make their own observations based on digital data. In its recommendations, however, the CTA states that the primary responsibility for internal control lies with the audited organization itself. To express this, the organization's board could issue an 'in control statement' based on an internal control framework, to be audited by the accountant.

At first sight these are two conflicting visions of the future: a shift towards what auditors call a 'substantive' (data-oriented) approach, and a shift towards a 'systems-based' (controls-reliant) approach. A systems-based audit approach in an automated environment can be complex, because it places considerable demands on that control environment. This applies to individual IT application controls, but also to the system of controls as a whole.

When we stop treating 'substantive' as a synonym for 'partial observation' (for example, a statistical sample), the two approaches become congruent again. Using data analytics for the auditor's own observations on digital data gives the accountant, independent of the maturity of the organization's IT environment, the instruments to base conclusions on the entire population of a given transaction stream or account balance at acceptable cost ([NVCOS]).
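To make the difference with a partial observation concrete: a full-population test recomputes or checks every record rather than a sample. A minimal, hypothetical sketch (field names, figures and tolerance are invented for illustration):

```python
# Illustrative full-population test: recompute the expected amount for every
# invoice instead of examining a sample. Field names, figures and the
# tolerance are invented for this example.
invoices = [
    {"id": 1, "qty": 10, "unit_price": 5.0, "billed": 50.0},
    {"id": 2, "qty": 3, "unit_price": 20.0, "billed": 65.0},   # over-billed
    {"id": 3, "qty": 7, "unit_price": 2.5, "billed": 17.5},
]

def price_exceptions(population, tolerance=0.01):
    # Check 100% of the population; return every invoice whose billed amount
    # deviates from quantity x unit price by more than the tolerance.
    return [inv for inv in population
            if abs(inv["qty"] * inv["unit_price"] - inv["billed"]) > tolerance]

exceptions = price_exceptions(invoices)
print([inv["id"] for inv in exceptions])  # items the auditor follows up on
```

The conclusion then covers the whole transaction stream; only the flagged exceptions need manual follow-up.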

Although under the current Dutch auditing standards (NV COS; [NBA17]) the accountant may not use the audit evidence obtained in this way to make a statement about the quality of the internal control that applied to those transactions or balances (is the board 'in control'?), the data-analysis results do offer relevant insights into this theme.

From their natural advisory role, accountants can share these insights with the audit client, so that the client can get 'in control', can be in control (to be established through the systems-based audit) and can remain in control (eventually: 'continuous auditing'). Being 'in control' as recommended by the CTA, and establishing this through a systems-based approach, thus comes within reach step by step and in concrete terms.


Audit firms operate in a complex environment: expectations about the current and future role and work of the accountant are regularly voiced from various directions. The already considerable expectation gap has jagged rock faces. Building bridges remains essential for the societal relevance of the profession. Structural investments in the renewal of the profession, such as those envisaged by the Committee on the Future of the Accountancy Sector, help here. Audit firms can help themselves by being open to the digital-native professional, audit client and stakeholder. They can also attract talent by looking at what makes young, smaller organizations successful and by finding the right partners in the ecosystem.

The digital transformation of the profession is no simple matter. By transforming digitally, audit firms can learn lessons that may be useful in other areas in need of transformation. Those who have an accountant, or are looking for one, can learn more about how future-oriented that accountant really is by asking the right questions and asking to see concrete examples.

The Very Hungry Caterpillar swears off snacks, eats healthily again, enters its cocoon and emerges as a beautiful butterfly. Organizations will have to transform time and again to remain successful.


[Blan10] Blank, S. (2010). What's a Start-up? First Principles. Retrieved from:

[Carl69] Carle, E. (1969). The Very Hungry Caterpillar.

[CTA19] Commissie Toekomst Accountancysector (2019). Voorlopige bevindingen Commissie Toekomst Accountancysector. Retrieved from:

[Harv19] Harvey Nash & KPMG (2019). CIO Survey 2019: A changing perspective. Retrieved from:

[Hell61] Heller, J. (1961). Catch-22.

[KPMG16a] KPMG (2016). Succeeding in disruptive times: Three critical factors for business transformation success. KPMG Global Transformation Study. Retrieved from:

[KPMG16b] KPMG (2016). 11 lessons for IT transformations. Retrieved from:

[NBA17] NBA (2017). Handleiding Regelgeving Accountancy, Controle- en overige standaarden, 500 Controle-informatie, paragraph A54. Retrieved from:

[NBA19a] NBA Stuurgroep Publiek Belang (2019). Stuurgroep: duidelijke tempoverschillen bij inzet technologie in auditpraktijk. Retrieved from:

[NBA19b] NBA (2019). Handreiking 1141, Data-analyse bij de controle. Retrieved from:

[Niet89] Nietzsche, F. (1889). Götzen-Dämmerung, Sprüche und Pfeile 12.

[NVCOS] Nadere voorschriften controle- en overige standaarden (NV COS) 500 A54.

[Ries11] Ries, E. (2011). The Lean Startup.

[Ries17] Ries, E. (2017). The Startup Way.

Digitalization of the audit

Technological developments and digitalization have an ever greater impact on the daily operations of organizations. These developments influence the audit approach of the accountant and the IT auditor; with a sound approach, the positive effects should outweigh the negative ones. At the same time, the accountant and IT auditor have more and more digital tools at their disposal to perform the audit of an entity. We spoke with professor Egon Berghout about these developments and their consequences for the audit approach of the future.

What is the impact of digitalization on the audit?

The ever-advancing technology used by organizations affects, among other things, the way an audit is performed. The objective of the audit does not change, but the composition of the audit instruments, the 'coefficients' of the profession as it were, does.

An example of how such coefficients influence a business can be found in the retail sector. The essence of that sector has not changed over the decades: selling articles to consumers. The way that selling takes place has certainly changed: on the one hand the increasing scale required for a successful and efficient formula (niche markets excepted), on the other the transition from traditional shops to ever more online stores.

The ideal audit = no audit (needed anymore), but that is not really a realistic prospect.

Which elements influence the technological changes in the audit?

There are several elements that are interrelated and at the same time keep each other in balance. There are economies of scale, just as in retail. In addition, you see increasing complexity as a counterforce. This complicating factor is caused by the number of systems, the number of interfaces between these systems and the dependence of organizations' daily operations on both.

The current way of scoping should be critically re-examined. It still starts from line items, processes and applications. The ultimate object of investigation in a financial audit is a subset of the information systems an organization uses, and it is largely assumed that these can be examined and assessed in isolation. When performing technical security tests (such as a penetration test), it sometimes turns out that a system outside the audit scope provides easy access to the very data that do matter for the financial audit.

Digital applications can also be used for complex, expert activities such as risk analyses, planning, accounting estimates, reporting and the like, for example for valuing liabilities or inventories. One example is an application based on artificial intelligence (AI) that KPMG developed to calculate a company's liability, for instance for outstanding frequent-flyer (air) miles or gift vouchers.
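The KPMG application itself is AI-based and proprietary; as a strongly simplified, non-AI stand-in, the shape of such a liability estimate can be illustrated as outstanding units x value per unit x estimated redemption probability (all figures and rates below are invented):

```python
# Strongly simplified, non-AI stand-in for estimating the liability for
# outstanding loyalty miles; all figures and redemption rates are invented,
# not taken from the KPMG application mentioned in the text.
outstanding = [
    # (miles outstanding, value per mile in EUR, estimated redemption rate)
    (1_000_000, 0.010, 0.80),  # active members: most miles will be redeemed
    (400_000, 0.010, 0.25),    # dormant members: high expected 'breakage'
]

def loyalty_liability(buckets):
    # Expected liability = sum over buckets of miles x value x P(redemption).
    return sum(miles * value * p_redeem for miles, value, p_redeem in buckets)

print(round(loyalty_liability(outstanding), 2))
```

In a real model, the redemption probabilities would be the hard part: they are estimated from historical redemption behavior, which is where machine-learning techniques can add value over a fixed rate per bucket.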

What are the consequences of the increasing complexity for the accountant and the IT auditor?

The increasing complexity affects the risks for the entity and therefore also the audit approach of the external accountant and IT auditor. Despite improving internal control at organizations, we see that the interfaces between the various systems can, over time, give rise to increasing risks. The importance of an audit by an independent and critical party, such as the accountant or IT auditor, therefore grows, for society at large and investors as well as for management and supervisory bodies.

Especially for parties that depend primarily on their IT environment, there can be a continuity risk if that IT environment fails. We recently saw, for example, that the refueling system at Schiphol stopped working, as a result of which no aircraft could depart from Schiphol.

To cover the risks associated with increasing digitalization and complexity, I expect the mix of audit instruments and procedures to change. Auditors will rely more on application controls and the associated IT general controls. It seems impossible to me for the accountant to keep walking around these IT general controls, even where little reliance is yet placed on application controls. Yet you see that this still happens frequently.

The application controls serve, among other things, to safeguard data quality, segregation of duties relevant to the audit, and reliable processing. After all, if the data are of insufficient quality, this affects all subsequent steps, both within the entity and in the audit. And where different systems are interfaced, good data quality is all the more important.
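As an illustration of what testing such an application control can look like in a data-driven way, the hypothetical check below verifies segregation of duties over a full set of purchase orders (field names and data are invented):

```python
# Illustrative test of an application control for segregation of duties:
# nobody may approve a purchase order that they created themselves.
# Field names and data are invented for this example.
purchase_orders = [
    {"po": "PO-001", "created_by": "alice", "approved_by": "bob"},
    {"po": "PO-002", "created_by": "carol", "approved_by": "carol"},  # violation
]

def sod_violations(orders):
    # Return every order in which creator and approver are the same user.
    return [o["po"] for o in orders if o["created_by"] == o["approved_by"]]

print(sod_violations(purchase_orders))
```

In a well-configured system the application control would block the second posting outright; the auditor's test then confirms that the control actually operated throughout the period.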

In addition to relying on application controls, the accountant will use substantive procedures, such as data analytics, sampling and possibly process mining, to verify the data and the course of the processes. Depending on the degree of uniformity, a sample can vary in size from a few items to the entire population.

With powerful data-analytics tooling and AI tools, you can often analyze the entire population. Data analytics is mostly about finding exceptions rather than obtaining assurance, whereas a sample can provide assurance (see also the recent NBA practice note [NBA19b]).
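The sampling side can be illustrated too. A common audit technique is monetary-unit sampling, in which items are selected with a probability proportional to their monetary value; the sketch below is a simplified, illustrative implementation (interval and figures are invented):

```python
# Simplified sketch of monetary-unit sampling (MUS): items are hit with a
# probability proportional to their monetary value, by stepping through the
# cumulative amounts with a fixed interval. Interval and figures are invented.
def mus_select(amounts, interval, start=0.0):
    # Return indices of the items containing the sampling points
    # start, start + interval, start + 2 * interval, ...
    selected, cumulative, point = [], 0.0, start
    for i, amount in enumerate(amounts):
        cumulative += amount
        while point < cumulative:
            selected.append(i)
            point += interval
    return sorted(set(selected))

balances = [120.0, 15.0, 980.0, 40.0, 300.0]  # total monetary value: 1455
print(mus_select(balances, interval=500.0, start=250.0))
```

Note that any item larger than the interval is certain to be selected, which is precisely why MUS concentrates audit effort on the monetarily significant items.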

Research shows that the number of false positives/negatives is often the result of a flawed approach, because there is insufficient knowledge of how the processes actually run and which data are used. If accountants had sufficient knowledge of the processes and data, they could perform better-targeted data analyses.

Does the increased complexity affect the balance between a systems-based and a substantive audit mix?

The identified risks, and tailoring the right mix of procedures to them, are central. To date, the standards prescribe a combination of systems-based and substantive procedures or, if a systems-based approach is not possible or is inefficient, substantive procedures only.

ESAA students have researched the quality and efficiency benefits of an IT-driven audit approach. Other research also shows that applying data analytics does not make the audit cheaper, but often does add more value for the audit client, who gains more insight into how its processes run and where they can be improved. This can also lead to improved internal control. It also turns out that by applying forms of data analytics, the accountant gains a better understanding of the client's processes and data. After the initial investment in setting up forms of data analytics, the future costs of running them will fall, so that savings can be achieved in the longer term.

To what extent will there be continuous auditing in a fully digital environment?

Now that data are ever more widely and easily available, it does not seem inconceivable that in the near future various information systems could be fitted with a module that continuously taps data and shares them with an accountant or IT auditor. Such a module could also be developed by the (ERP) vendor. The first, limited examples of such modules are already appearing. Hopefully this will go faster than the introduction of SBR/XBRL in the Netherlands.
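What such a tap module might look like conceptually can be sketched as follows; this is purely a hypothetical illustration (class, rule and field names are all invented), not a description of any existing ERP module:

```python
# Hypothetical sketch of a 'tap' module for continuous auditing: every entry
# the ERP system posts also passes through audit rules, and hits are pushed
# to the auditor's side. Class, rule and field names are invented.
class AuditTap:
    def __init__(self, rules, publish):
        self.rules = rules      # list of (name, predicate) pairs
        self.publish = publish  # callback towards the auditor's platform

    def on_posting(self, entry):
        # Called by the ERP system for every posted entry.
        for name, predicate in self.rules:
            if predicate(entry):
                self.publish({"rule": name, "entry": entry})

received = []
tap = AuditTap(
    rules=[("manual entry > 10k", lambda e: e["manual"] and e["amount"] > 10_000)],
    publish=received.append,
)
tap.on_posting({"id": 7, "amount": 25_000, "manual": True})   # flagged
tap.on_posting({"id": 8, "amount": 25_000, "manual": False})  # passes silently
print(len(received))
```

The open questions in practice lie less in the plumbing than in governance: who defines the rules, who may see the flagged entries, and how continuously the auditor is expected to respond.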

In practice, the concrete contribution of this continuous data stream is still limited for both accountant and entrepreneur. Entrepreneurs often do not yet see the added value of a continuous data stream. The idea behind the development that data are more widely and easily available is that giving more, and continuous, openness about the company benefits multiple parties.

However, different interests are at play in providing openness. From the perspective of society at large, the view is often that entities should strive to be as transparent as possible about their results, preferably 24 hours a day. An owner or entrepreneur, however, may have an interest in not immediately sharing results with everyone, for instance for competitive reasons. Consider, for example, how few companies are specific in their annual reports about various types of incidents (such as a cyber attack or data breach). The problems of cut-off and valuation also play a role here. It is therefore to be expected that continuous monitoring for management will be adopted faster than continuous auditing by the accountant.

What development is visible at audit firms?

More and more boards of audit firms recognize the importance of digital techniques in the audit. There are many engagement teams in which a mix of accountants and IT auditors works together to cover the identified risks. Our experience with students teaches us that there is still a gap between the theory, the story and the practice.

The increasing complexity may mean that not many audit firms can afford the required investments in finance, people and tools to fully fathom audit situations involving complex IT.

Hoe ligt de verhouding tussen kosten en baten voor accountants die een digitale aanpak hanteren?

Ontwikkeling van nieuwe methoden en technieken kost tijd en zal veelal het meest efficiënt zijn als deze ingezet kunnen worden in meerdere opdrachten en over meerdere jaren. Gezamenlijke ontwikkeling van nieuwe technieken loont, bijvoorbeeld door deze als beroepsgroep te ontwikkelen. Schaalgrootte loont zeker.

Het juiste moment van investeren is een lastig vraagstuk. De controleaanpak voor het huidige boekjaar is redelijk goed te bepalen, maar de aanpak van volgend jaar bepalen is alweer een stuk lastiger. Iedere controleaanpak is nog vrij organisatie- en situatiespecifiek; daardoor is moeilijk voorspelbaar wanneer en hoe precies een efficiënte overgang naar een systeemgerichte controleaanpak verantwoord mogelijk is.

What is the main driver behind the change in the audit approach?

The change partly stems from technology, which dictates the pace of change. Organizational forms and/or processes that have changed, partly due to digitalization, also strongly influence the audit approach. Twenty years ago, it took a classroom full of computers to deliver the computing power that now fits in your mobile phone. Another example is the self-driving car: it is certain that it will arrive; the development of the technology and enabling legislation determine when. Schumpeter's theory already described such processes.

Continuous change makes investing harder, but not impossible. Like other professions, such as judges and physicians, accountants are reluctant to embrace large-scale, standardized digital change. They were not trained that way and have less affinity with innovation and digitalization.

What development is visible in postgraduate accountancy and IT audit education?

The importance of IT systems is being integrated ever further into the curricula. At ESAA too, the course Management Information Systems (BIV) & Administrative Organization (AO) is increasingly IT-driven. We also continuously adapt our auditing courses to new technical possibilities. In our teaching we work together with accountancy students and internal auditors. IT auditors must learn to collaborate.

Accountants of the future must have a strong affinity with IT in order to determine the right mix of audit procedures for the identified risks. Educational programs will therefore increasingly focus on this. Ideally, every RA would in fact also be partly an RE.

Why has the development of IT in accountancy education taken so long?

The integration of IT into accountancy education is progressing more slowly than desired. There are several reasons. First, accountants by nature have little affinity with IT. This still applies even to the current generation of students: they use IT tools, but are not interested in how they work or in their impact on processes and internal control. Second, developments move so fast that educational programs cannot keep up. Third, the curriculum is already full and other courses are not willing to make room for IT modules. Fourth, some of the current lecturers likewise have little affinity with IT.

What role do professional bodies play?

Professional bodies can play a facilitating role and help provide the scale I mentioned earlier. The financial statement audit is, after all, not a unique product, so they could go further with the integration of IT into education and guidance than has happened to date.

The NBA has launched the 'Accounttech' initiative to bring the possibilities of IT to the attention of accountants and to support them in using it.

About prof. dr. E.W. Berghout RE CISA

Egon Berghout is Academic Director of IT Auditing & Advisory at the Erasmus School of Accounting & Assurance (ESAA). He is also professor of business information systems at the Rijksuniversiteit Groningen and managing partner of the Information Management Institute in Rotterdam. He previously worked at Philips, M&I/Partners, Technische Universiteit Delft and the London School of Economics and Political Science.



The impact of Robotic Process Automation on the audit

The application of new technologies aimed at improving and automating business processes is increasing rapidly. By the end of 2019, most corporate and financial companies had already implemented Robotic Process Automation (RPA), robotizing manual and often highly repetitive employee activities. Implementing new technologies such as RPA also introduces specific risks, from both a business and an IT perspective. In this article we explain the impact of RPA on risk management and on auditing robotized processes.

The rise of RPA

Today, organizations increasingly invest in new technologies such as RPA, natural language processing (NLP), machine learning (ML) and artificial intelligence (AI). This new way of automating, aimed at more effective and efficient business processes, a better customer experience and cost savings, has shown major benefits for organizations. Software robots are flexible in performing tasks (24/7), do not make the human errors that arise from fatigue or inattention, and can contribute to more standardized business processes with fewer exceptions.

What are the benefits of RPA?

Some benefits of RPA are:

  • RPA implementations prompt a fresh review of business processes, resulting in fewer exceptions through standardization and also accelerating process execution.
  • RPA increases the quality of process execution, because robots work more carefully and more systematically than regular employees.
  • RPA is a scalable solution that reduces and/or avoids the need to recruit new FTEs. New FTEs are sometimes hard to find in the market, as shown by the shortage of available staff for, for example, Know Your Customer (KYC) and Anti-Money Laundering (AML) processes ([Boer19]).
  • RPA frees up current employees to perform value-adding activities. Investment costs and payback time are relatively low compared to traditional automation projects, resulting in an attractive business case.
  • RPA also has benefits in the area of compliance and control. All choices and activities performed by the robot are logged, which facilitates maintaining an accurate and complete audit trail down to the last detail.
  • Finally, because of their much larger available capacity, software robots can perform more, and more extensive, checks than the limited availability and capacity of current employees allows. This results in a much larger scope of audit work.

Beyond applying RPA, many companies have since taken the next step in improving business processes. This requires smarter technology that can, for example, process unstructured data (such as spoken text with NLP) and make decisions autonomously based on previous transactions and feedback received (ML and AI). See Figure 1 for an overview of the different types of robotization solutions, each with its own risk profile. Organizations are investing heavily in more cognitive technologies, which ultimately makes more processes suitable for process improvement.


Figure 1. Three different forms of robotization. [Click on the image for a larger image]

How is RPA applied?

RPA is typically applied to manual activities within processes that are repetitive in nature and involve high volumes. It is therefore not surprising that RPA originated in the back-office function of large international companies. By now, other departments have also experienced the benefits of RPA, and it is deployed across multiple parts of the organization. A precondition for applying RPA is that the process is rule-based and uses structured data. Robotized processes can be found, for example, in the Finance, HR, Procurement and IT functions of a company. RPA is also widely applied within departments such as Supply Chain, Master Data Management ([Hend19]), Internal Control and Internal Audit ([KPMG18]). Some concrete examples of robotized processes are: processing invoices in the Enterprise Resource Planning (ERP) system, entering journal entries, compiling financial reports from various data sources (Finance), and handling the onboarding of new employees (HR). In some organizations RPA is also used as an interim solution, for instance prior to implementing a new ERP system. In addition, we see combinations emerging between, for example, RPA and AI, such as in AML processes where RPA collects the data, AI analyzes the data using advanced algorithms, and RPA reports the outcomes.

When implementing RPA, it is important to think at an early stage about the impact of robotization on the organization. The 6×6 robotics implementation model (see Figure 2) supports organizations in implementing RPA and in assessing the impact on the organization, the way software robots are developed, the relationship with the existing IT infrastructure, risks and controls, and ultimately the impact on employees ([Jutt18]). The article by [Jutt18] discusses the workings of the 6×6 robotics implementation model in more detail. The fifth element of this model, 'Performance and Risk Management', focuses on new risks that arise when implementing RPA.


Figure 2. The KPMG 6×6 robotics implementation model. [Click on the image for a larger image]

Common RPA risks

Who is responsible?

At the start of an RPA implementation project, the question of who should own the behavior and outcomes of the software robot is often raised at an early stage. In many cases this question is seen as a challenge, because various parties bear some responsibility in RPA implementations, including the business, IT departments, Centers of Excellence and vendors of the RPA tooling. From a business perspective, the software robot is seen as a replacement for or support of a regular employee, and the business therefore holds itself responsible for the robot's operation. This argument is reinforced by the fact that the software robot often picks up part of the process and then hands it back to an employee. Moreover, specific process knowledge is needed to implement and manage a robot, and only the business possesses this knowledge. From an IT perspective, however, the software robot is seen as an application with users, and IT should therefore bear responsibility for the implementation and management of the robot.

Organizations handle the ownership of software robots differently. An RPA implementation initiative is often started from the back office (Finance), making the CFO directly responsible. There are also organizations where ownership falls under IT, making the CIO responsible for the software robots. Whichever form of ownership is chosen, it is important to acquire the right knowledge and to involve all stakeholders in the implementation in good time, in order to mitigate RPA-specific risks.

Segregation of duties with robot accounts?

There is also much debate about segregation of duties, the four-eyes principle and the use of robot user accounts. In a traditional Finance department, one employee prepares an invoice and a second employee approves it in the system, so that segregation of duties and proper authorization can be established. What happens when this process is performed by a robot? Should two separate robots be created for the two process steps (for example Robot_01 and Robot_02) so that segregation of duties is maintained? Or is segregation of duties within the process no longer relevant? What does robotization mean for internal controls in the process? These are questions that the business, risk functions and auditors face when managing RPA risks ([Chua18]).
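Such segregation-of-duties questions can also be tested mechanically. The sketch below checks a hypothetical posting log for documents that were both created and approved by the same (robot) account; the account names, actions and log layout are illustrative assumptions, not features of any specific RPA tool.

```python
# Hypothetical posting log: (account, action, document_id)
postings = [
    ("ROBOT_01", "create", "INV-001"),
    ("ROBOT_02", "approve", "INV-001"),
    ("ROBOT_01", "create", "INV-002"),
    ("ROBOT_01", "approve", "INV-002"),  # same account on both steps
]

def sod_violations(postings):
    """Return document ids where one account both created and approved."""
    creators, approvers = {}, {}
    for acct, action, doc in postings:
        (creators if action == "create" else approvers)[doc] = acct
    return [doc for doc in creators if approvers.get(doc) == creators[doc]]

print(sod_violations(postings))  # ['INV-002']
```

A check like this works identically whether the accounts belong to humans or to robots, which is precisely why the four-eyes discussion does not disappear with robotization.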

The examples above are just two of the RPA risks that organizations encounter in practice. From a broader perspective, organizations think about possible 'what could go wrong' scenarios arising from RPA. Figure 3 shows a (non-exhaustive) overview of risk categories, with examples of risks observed in practice when auditing robotized processes. These include both IT-related and process-related risks. A well-known IT risk is that robot user accounts (bot IDs) are insufficiently secured, allowing employees to misuse them to process transactions. From a process point of view, there is a risk that certain essential controls within the process are no longer performed because the business leaves the work to the robot. As a result, deviations in the process may not be detected in time, which can create new risks for the organization. Furthermore, when identifying risks it is important to take the chosen RPA software solution into account: RPA software technologies differ widely in how they deal with specific RPA risks.


Figure 3. Practical examples of RPA risks per risk category. [Click on the image for a larger image]

Managing RPA risks

Once an organization has identified which risks may arise when robotizing business processes, the business, IT and the robotics team (possibly part of the Center of Excellence) jointly consider how to manage them. In practice, out of enthusiasm and unfamiliarity with the new technology, this step is often taken without sufficient thought. This can ultimately lead to robotized processes for which the newly introduced risks have not been adequately considered. Controls are therefore needed when an organization proceeds to implement RPA. In line with the identified risks, the controls can be classified into two categories: (1) General IT Controls (GITCs) and (2) process-related controls.

  1. GITCs for RPA are often part of an RPA governance and control framework and focus, among other things, on whether robots work as intended and on the extent to which the accuracy, completeness and integrity of data are safeguarded. During the design phase, controls are needed for developing RPA scripts, consisting of, for example, RPA development standards, access security for robot accounts and passwords, access security for the data the robot needs to perform the process, and extensive testing with realistic test scenarios to establish the correct operation of the robot in a test environment. Once the robotized process is live in a production environment, it is important to detect and follow up on incidents involving the robot in a timely manner. Because robots work through the user interface of existing applications, which are subject to change, the robot itself may also require adjustments. An RPA change management process must be followed for this. For effective operation, these controls must be applied consistently to every robotized process throughout the entire solution development life cycle. See also Figure 4 on IT controls specific to RPA.
  2. In addition, it is important to perform a risk assessment specific to each process before robotizing it. When organizations proceed to robotize, for example, financially critical processes but give insufficient thought to the process-specific risks, the relevant controls will be missing. It is therefore important to perform a risk analysis per process before robotizing it. This analysis may show, for example, whether the business remains responsible for performing certain input controls, process-related approvals and deviation analyses, or whether the robot performs part of these controls. In the latter case, these controls can be added as application controls in the design of the process. Furthermore, the business remains (partly) responsible for the performance of the robot, and periodic checks will have to take place to establish whether the robot has processed all transactions, including exceptions, accurately and completely. The risk assessment per process must be revised whenever the process is changed in accordance with the change management process mentioned under point 1. This requires a renewed analysis of the risks and the corresponding mitigating measures.

Identifying, analyzing and managing the risks of robotized processes is a dynamic activity that should not only be considered during the implementation phase. It is important that these activities become part of the standard internal audit/control process.

Auditing robots

Once organizations have started with a proof of concept (demonstrating that the RPA technology works) and the number of robotized processes subsequently grows, robots typically come into the auditors' sights. In an audit of RPA it is obviously important to analyze the risk profile of the robotized processes. However, when financially critical processes are performed by robots and the employees who previously performed the process no longer work for the organization, the question of whether the robot delivers reliable work becomes increasingly relevant. Actually auditing robots requires an approach that may be new to auditors. Specific knowledge of the RPA software solution and of the underlying programmed code is required, as well as knowledge of the robotized process. Practice shows that auditing robots is often performed by a combined team of both financial auditors and IT auditors.

For audit teams it is important to be able to rely on the effective operation of the internal controls around the robot. Before drawing a conclusion about the reliability of the robotized processes, audit teams focus on the following steps:

  • gaining an understanding of the existing RPA governance, including roles, responsibilities, processes and the implemented IT infrastructure, as well as insight into the existing/registered robotized processes by means of an RPA inventory;
  • gaining an understanding of the risk profile of the robotized processes (use cases) and insight into which processes are planned to be robotized in the coming year;
  • gaining insight through a walkthrough with the robotics team and the business owner, to determine the risks and application controls;
  • gaining insight into all process-related information about the robotized process, including bot IDs, applications the robot works with, input and output files of the robot, process owners, technical owners, et cetera;
  • analyzing, based on the identified robots and the associated risk profile, which robots are relevant for audit work;
  • in line with the RPA risks and associated internal controls mentioned earlier, focusing on establishing the design, existence and operating effectiveness of the GITCs, application controls and process-related controls.

Like other applications and infrastructure components, software robots must be properly managed and should therefore, from an IT perspective, fall under the IT general controls. These controls must safeguard the continuity and correct operation of the automated processes, and prevent unauthorized changes from being made or users from gaining unauthorized access to the robotized processes and the RPA tool. Figure 4 contains a number of considerations regarding the general IT controls specific to RPA.


Figure 4. IT controls for RPA. [Click on the image for a larger image]

To get a clear picture of a robot's precise behavior, it can be very useful to analyze the transactions performed by robot user accounts in more detail. This makes exceptions and anomalies in the process easier to spot, and it immediately shows whether the robot is processing transactions that were never robotized in the first place. Using process mining technology ([Bisc19]), the robotized process, including process exceptions handled by the robot, can easily be made transparent. The audit team can use the outcomes of this analysis, for example, to further examine deviating transactions performed by the robot.
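As a minimal illustration of this kind of analysis, the sketch below flags postings made by robot user accounts outside their approved scope. The record layout, account names and scope table are hypothetical; in practice this information would come from the journal entry data and the RPA inventory, and a process mining tool would reconstruct the full process flow rather than a flat list.

```python
from collections import defaultdict

# Hypothetical journal-line records: (user, transaction_type, amount)
journal = [
    ("ROBOT_01", "invoice_posting", 1200.00),
    ("ROBOT_01", "invoice_posting", 980.50),
    ("ROBOT_01", "manual_adjustment", 5000.00),  # outside the robot's scope
    ("j.smith",  "manual_adjustment", 150.00),
]

# Approved transaction types per bot ID, e.g. taken from the RPA inventory
robot_scope = {"ROBOT_01": {"invoice_posting"}}

def out_of_scope_postings(journal, robot_scope):
    """Flag postings made by a robot account outside its approved scope."""
    flagged = defaultdict(list)
    for user, tx_type, amount in journal:
        if user in robot_scope and tx_type not in robot_scope[user]:
            flagged[user].append((tx_type, amount))
    return dict(flagged)

print(out_of_scope_postings(journal, robot_scope))
# {'ROBOT_01': [('manual_adjustment', 5000.0)]}
```

A hit on such a check may indicate either a misused bot ID or an undocumented extension of the robot's scope; both warrant follow-up by the audit team.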

Client case: RPA and internal audit

KPMG was involved in an internal audit of robotized processes at an international organization. Since RPA is an automation solution that requires both business and IT knowledge, the internal audit team consisted of members from different disciplines. KPMG audited the robotized processes based on experience with RPA-specific risks and implementations of the associated internal controls. The audit revealed, among other things, that robotized processes were being taken into production without sufficient testing, and that robot user accounts were being misused to perform transactions outside the robot's scope. To implement new technologies such as RPA successfully, it is important to assess the specific RPA risks and to involve the relevant stakeholders at an early stage. This ensures that robotized processes meet compliance standards and that RPA risks are mitigated.


The advent of new technologies such as RPA offers major benefits for organizations. To ultimately succeed in improving business processes, it is important to take RPA-specific risks into account at an early stage. It is essential to think critically about which internal (IT) controls apply when implementing and managing robotized processes. Finally, it is important for (internal) audit teams to involve the right disciplines, to understand what this new technology entails, and to know how to proceed when the auditee has robotized processes within its (back-office) processes.


[Bisc19] Di Bisceglie, C., Ramezani Taghiabadi, E. & Aklecha, H. (2019). Data-driven insights to Robotic Process Automation with Process Mining. Compact 2019/3. Retrieved from:

[Boer19] Boer, M. de & Leupen, J. (2019). DNB grijpt in: Rabo moet tienduizenden dossiers opnieuw doorlichten op witwasrisico's. Financieel Dagblad, 22 November 2019. Retrieved from:

[Chua18] Chuah, H. & Pouwer, M. (2018). Internal Audit en robotic process automation (RPA). Audit Magazine, no. 4, 2018. Retrieved from:

[Hend19] Hendriks, J., Peeters, J., Pouwer, M. & Schmitt Jongbloed, T. (2019). How to enhance Master Data Management through the application of Robotic Process Automation. Compact 2019/3. Retrieved from:

[Jutt18] Juttmann, J. & Doesburg, M. van (2018). Robotic Process Automation: how to move on from the proof of concept phase? Compact 2018/1. Retrieved from:

[KPMG18] KPMG Nederland (2018). Internal Audit and Robotic Process Automation. KPMG Assets. Retrieved from:

Network theory in audit

We strive for the highest audit quality in every engagement, but increasing complexity can make this difficult. The field of network theory deals with complex systems and enables us to study them in a formal way. What can we learn from this research field, and how could it impact the audit?


Auditing is not a simple task. Ask any auditor and they will explain that an audit is a major and complex undertaking. During the audit, a team of auditors invades the office floors of the auditee for days, weeks or even months. You will find auditors in every corner of the building, organized according to a meticulous plan, executing their job with the utmost precision. The auditors perform a variety of audit procedures until they have collected sufficient evidence to issue their much-valued product: the audit opinion. In this article, we look at new techniques that could improve this evidence-gathering strategy.

These rigorously designed audit procedures describe how evidence is collected to audit the financial positions in the financial statements. However, the auditor may be able to optimize the planning of these procedures by taking the interdependencies between financial positions into account. These interdependencies are everywhere: for example, when goods are sold, we expect to receive a payment, and we expect raw goods to be bought to replace the sold inventory. These process dependencies create dependencies between the monetary flows in the financial statements.

An interesting question is how these dependencies affect the evidence-gathering process. Could auditors optimize their strategy because components depend on each other? The problem of a system of interacting or dependent components is so common that it gave rise to a research field of its own: complexity science, and more specifically the field of network theory, which deals with exactly these kinds of systems.

The recent rise of complexity research and network theory has boosted the understanding of many other complex systems from a data-driven perspective, e.g. social networks ([Borg09]), interbank networks ([Batt12], [Batt16]), product networks between countries ([Tach12]) and metabolic networks in biology ([Estr06], [Rava02]). In this article, we briefly discuss how networks are the common factor in many domain problems, and then we focus on the question: "Can we view the financial statements as a network, and how will this impact the audit?"

Capture complexity

Complexity science and network theory are areas that focus on the bigger, interconnected picture. Many systems and problems have a natural representation as a network. More specifically, these systems can be described as a set of nodes and a set of edges that connect the nodes: for example, our social network ([Borg09]), where the nodes are persons and the edges represent friendships, or the banking network ([Batt12], [Batt16]), where the nodes are banks and the edges represent lines of credit. One of the more interesting facts is that many of these network algorithms are general, yet work on any specific system that can be represented as a network. It is therefore interesting to see whether other problems can also be represented as networks, so that we can apply all these general algorithms in new domains.

Consider, for example, understanding the outbreak and spreading of the SARS virus ([Bara16]). SARS spreads through a social network in which people can be Susceptible, Infectious or Recovered (SIR). The same SIR model can also be used to understand how computer viruses spread, where a computer can likewise be Susceptible, Infectious or Recovered ([Bara16]). Different domain, different phenomena; the same analysis techniques.
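The SIR mechanics can be sketched in a few lines. The toy simulation below is deterministic (every infectious node infects all of its susceptible neighbours in one step, then recovers), which is a deliberate simplification of the probabilistic spreading models described in [Bara16]; the network and initial state are invented for illustration.

```python
def sir_step(adjacency, state):
    """One synchronous SIR step: every infectious node infects all of its
    susceptible neighbours, then recovers."""
    new_state = dict(state)
    for node, status in state.items():
        if status == "I":
            for neighbour in adjacency[node]:
                if state[neighbour] == "S":
                    new_state[neighbour] = "I"
            new_state[node] = "R"
    return new_state

# A small line network: 0 - 1 - 2 - 3, with node 0 initially infectious.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
state = {0: "I", 1: "S", 2: "S", 3: "S"}

for _ in range(3):
    state = sir_step(adjacency, state)
print(state)  # {0: 'R', 1: 'R', 2: 'R', 3: 'I'}
```

The same step function works unchanged on any adjacency structure, which is exactly the point made above: the algorithm is generic, only the network is domain-specific.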

With these types of analyses, we can answer domain-specific questions that previously remained unanswered or were difficult to answer. In the last couple of decades, research activity in network theory surged, resulting in new types of analyses on networks. Scientists developed algorithms to answer generic questions such as "What is the most important node?" ([Bara16]) or "How are the connections organized?" ([Bara16]). The answers to these questions provide various insights when applied to a specific domain. For example, the question "What is the most important node?", applied to the internet, helps us understand which nodes we need to protect in order to keep the internet online, or how to change its design to make it more robust. So, what if we can turn data that is readily available in the audit into a network? Can we translate these generic questions to solve specific audit problems?

Financial statements as networks

The balance sheet and the profit and loss statement form a network of interacting components that can be studied and understood ([Boer18]). Think of it: financial accounts are the nodes in a network, and business processes move the money around in this financial system. Selling goods, buying inventory and receiving payments are the business processes that move value from one set of financial accounts to another, similar to blood flowing through a biological system, where the heart is the engine that makes everything flow, or to electricity moving through the electricity grid.

Our research ([Boer18]) proposes to extract a financial statement network from the journal entry data, such as displayed in Table 1.


Table 1. Journal entry data. [Click on the image for a larger image]

These entries tell us how much monetary value flows into and out of certain financial accounts. A company has thousands, maybe even millions, of these journal entries describing the money flows in detail. We can construct a network from these journal entry data by defining two sets of nodes: financial account nodes and business process nodes. The financial accounts tell us what moved; the business process tells us why it moved. Here, the business process nodes are all unique journal entry patterns, where each pattern represents a process or a variation of a process. When we construct a network based on transactional data, we obtain a large network that represents all the interactions between the components of the financial statements. Figure 1 shows an example of such a network, based on the journal entry data of an existing company.


Figure 1. The green nodes represent the financial accounts and the red nodes represent the business processes that move monetary value from one set of financial accounts to another set of financial accounts. [Click on the image for a larger image]

The green nodes represent the financial accounts and the red nodes represent the unique business process patterns. The three clusters that emerge can be thought of as the three beating hearts that pump the money around. This network captures the complexity of the financial statements in a way that can be easily analyzed by computers.
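A minimal sketch of this construction is given below, using a few hypothetical journal entries (the account names and amounts are invented for illustration). Each unique debit/credit pattern becomes a business process node, and the edges to the financial account nodes are weighted by the monetary value that flowed:

```python
from collections import defaultdict

# Hypothetical journal entries: lists of (account, amount) lines, where
# positive amounts are debits and negative amounts are credits.
entries = [
    [("Accounts receivable", 100.0), ("Revenue", -100.0)],   # sale
    [("Accounts receivable", 250.0), ("Revenue", -250.0)],   # sale
    [("Cash", 100.0), ("Accounts receivable", -100.0)],      # payment received
]

def build_network(entries):
    """Build the bipartite financial statement network: financial accounts on
    one side, unique journal entry patterns (business processes) on the other,
    with edge weights carrying the total monetary value that flowed."""
    edges = defaultdict(float)
    for entry in entries:
        # The debit/credit pattern identifies the business process (the why);
        # the accounts identify what moved.
        pattern = frozenset((account, amount > 0) for account, amount in entry)
        for account, amount in entry:
            edges[(pattern, account)] += abs(amount)
    return edges

network = build_network(entries)
print(len({pattern for pattern, _ in network}))  # 2 distinct process nodes
```

On real journal entry data the same grouping step yields the process nodes shown in red in Figure 1; the actual network construction in [Boer18] is richer than this sketch.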

How can these networks be used in the audit? To answer this question, we must look at the audit standards and procedures. Among the many audit procedures, one category stands out: the substantive analytical procedures. These enable auditors to construct predictive models to detect whether recorded values are inconsistent with other available information. The network tells us how the financial accounts relate to other monetary flows in the financial system of a company ([Boer18]). The auditor can read two important aspects from the network: first, what flows, and second, why it flows. When the what and the why are combined, a predictive model can be constructed to check for inconsistencies. For example, a revenue outflow (the what) due to a sale (the why) lets us predict a corresponding movement of goods. In the absence of inconsistencies, the auditor can turn this into audit evidence ([Boer19]). This means that other audit procedures might no longer be required, because the auditor can leverage the knowledge of interacting components in the financial statements.
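A toy version of such a consistency check might look as follows; the expected margin, the tolerance and the flow values are purely illustrative assumptions, not figures taken from [Boer18] or [Boer19].

```python
def check_consistency(flows, expected_ratio, tolerance=0.05):
    """Illustrative substantive analytical check: predict cost of goods sold
    from the observed revenue flow via an expected margin, then flag a
    deviation larger than the tolerance as an inconsistency."""
    predicted = flows["revenue"] * expected_ratio
    deviation = abs(flows["cogs"] - predicted) / predicted
    return {"predicted": predicted, "deviation": deviation,
            "consistent": deviation <= tolerance}

# Hypothetical flows read off the financial statement network.
flows = {"revenue": 1_000_000.0, "cogs": 700_000.0}
result = check_consistency(flows, expected_ratio=0.62)
print(result["consistent"])  # False: COGS deviates ~13% from the prediction
```

An outcome within tolerance could contribute to audit evidence, while a flagged deviation would direct the auditor to further procedures on that flow.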

The above description is an example of a case where networks are useful for an auditor; the research field of network theory, however, is much bigger. Other domains, such as biology and chemistry, have similar objectives: to understand and predict the behavior of systems. A natural question to ask is therefore whether we can borrow techniques from these domains to better understand financial account balances and monetary flows. The financial statement network representation enables us to develop generic audit algorithms that can be applied to the specific financial statement networks of the clients being audited.


The creation of financial statement networks is a new and different way to analyze the financial statements. We can use readily available transactional data to create this network. This network can be used to create predictive models for substantive analytical procedures which, in case of no inconsistencies, yield audit evidence.

However, to really understand the possible impact of these networks on the audit, more research is needed, although these early results are promising. We don’t know what the future will bring, but we will definitely follow this path to analyze the balance sheet in a data-driven way, and we can only hope that it will bring the same progress it has brought to many other research fields. To be continued …


[Bara16] Barabási, A.-L. (2016). Network Science. Cambridge University Press.

[Batt12] Battiston, S. et al. (2012). DebtRank: Too Central to Fail? Financial Networks, the FED and Systemic Risk. Scientific Reports, 2(541).

[Batt16] Battiston, S. et al. (2016). Complexity Theory and Financial Regulation. Science, 351(6275), 818-819.

[Boer18] Boersma, M. et al. (2018). Financial Statement Networks: An Application of Network Theory in Audit. The Journal of Network Theory in Finance, 4(4).

[Boer19] Boersma, M. et al. (2019). Audit Evidence from Substantive Analytical Procedures. Oral presentation at the American Accounting Association Annual Conference 2019.

[Borg09] Borgatti, S.P. et al. (2009). Network Analysis in the Social Sciences. Science, 323(5916), 892-895.

[Estr06] Estrada, E. (2006). Virtual Identification of Essential Proteins within the Protein Interaction Network of Yeast. Proteomics, 6(1), 35-40.

[Pole16] Poledna, S. & Thurner, S. (2016). Elimination of Systemic Risk in Financial Networks by Means of a Systemic Risk Transaction Tax. Quantitative Finance, 16(10), 1599-1613.

[Rava02] Ravasz, E. et al. (2002). Hierarchical Organization of Modularity in Metabolic Networks. Science, 297(5586), 1551-1555.

[Tach12] Tacchella, A. et al. (2012). A New Metrics for Countries’ Fitness and Products’ Complexity. Scientific Reports, 2(723).