
Audit Analytics

The internal audit function plays an important role within the organization: it monitors the effectiveness of internal control across a wide range of topics. In the current data-driven era, it is hard to argue that internal control testing can still be done effectively and efficiently through manual control testing and process reviews alone. Data analytics appears to be the right answer for gaining an integral insight into the effectiveness of internal controls and for spotting anomalies in the data that may need to be addressed. Implementing data analytics (audit analytics) within the internal audit function is, however, not easily done. International organizations with decentralized IT in particular face challenges in successfully implementing audit analytics. At Randstad we have experienced this rollercoaster ride, and we would like to share our insights into the drivers for success when implementing audit analytics. In the market we see a strong focus on technological aspects, but in practice we have found that technology may be the least of our concerns.

Data Analytics within the internal audit

The key benefits of performing data analytics within the internal audit function are predominantly:

  1. greater efficiency in audit execution;
  2. broader coverage in audit execution;
  3. more transparency and a stronger basis for audit results and findings.

Figure 1. Classical internal audit control testing vs. control testing and data analytics.
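To make the contrast in Figure 1 concrete, here is a minimal sketch (in Python, using pandas) of a full-population control test next to the classical sample-based approach; the dataset, column names and the 10,000 approval threshold are illustrative assumptions, not an actual Randstad analytic.

```python
import pandas as pd

# Hypothetical payments ledger; column names and values are illustrative only.
payments = pd.DataFrame({
    "payment_id": [1, 2, 3, 4, 5],
    "amount": [950, 12000, 430, 87000, 5600],
    "approved_by": ["clerk", "manager", "clerk", "clerk", "manager"],
})

# Classical approach: inspect a (random or judgmental) sample of items manually.
sample = payments.sample(n=2, random_state=1)

# Analytics approach: test the full population against the control rule
# "payments above 10,000 require manager approval" and report every exception.
exceptions = payments[(payments["amount"] > 10_000) &
                      (payments["approved_by"] != "manager")]
print(exceptions)
```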

Introduction: Audit Analytics, obvious benefits

As an example, we would like to ask you this question: what is the main topic at seminars and conferences on data analytics? What is most of the literature about? How is the education system (e.g. universities) responding?

Chances are your answer will involve technology: great tooling, data lakes, the need to recruit data analysts and, very importantly, the need to do it all very quickly.

However, the biggest challenge is not technology. The IT component is obviously important, but it is ill-advised to consider the implementation of audit analytics to be just about technology.

Of course, we are not here to argue with the fact that technology is a very important driver to enhance the capabilities of the auditor. The benefits of being able to run effective data analytics within the internal audit function are obvious. The same seminars will usually underline these benefits as much as they can: data analytics in the audit will make the auditor's work more efficient and enable broader coverage in audit execution. Moreover, the auditor will be able to create more transparency and a stronger basis for audit results towards the business.

The benefits are indeed obvious, they are well known within the audit community, and implementation seems easy enough. They are, however, usually presented on the premise that a good technological basis is the key to success: hire a good data analyst, build an advanced analytics platform, and go live. In some cases this could work; reality, however, is more challenging.

In most cases more complex factors play a pivotal role in the success of audit analytics, which are not easily bypassed by just having a good data analyst or a suitable analytics platform. Factors that most corporate enterprises must deal with are, for example:

  • a complex corporate organization (internationally oriented and therefore different laws, regulations and frameworks, decentralized IT environment and different core processes);
  • the human side of the digital transformation that comes with implementing audit analytics;
  • integrating the audit analytics within the internal audit process and way of working.

The real success lies in transforming the internal audit function into the internal audit of tomorrow, in which data analytics is embedded in the audit approach and is a prerequisite for gaining insight into the level of control, and its effectiveness, within the auditee's organization.

This success is currently perceived as a 'technical' black box. In our opinion the black box is not only technological in nature, but contains many more complexities. In this article we intend to open this black box and offer some transparency, based on the experience we have gained in setting up the internal audit analytics function.

To do this, we will first describe the experience we have gained in setting up audit analytics. Based on that story, we will outline our key drivers for success in implementing audit analytics. We will then go into detail on how these drivers influence each other and on the importance of balancing them.

Case study: how the Randstad journey on Audit Analytics evolved

In 2015, Randstad management articulated the ambition to further strengthen and mature the risk and audit function. The risk management, internal controls and audit capabilities were set up years before. Taking the function to the next level meant extending existing capabilities and perspectives, with the intent not only to ensure continuity, but also to increase impact and added value to the organization. Integrating data analytics into the way of working for the global risk and audit function was identified as one of the drivers for this.

To set up the audit analytics capability, several starting points were defined:

  • build analytics once, use often;
  • in line with the organization culture, the approach has to be pragmatic: it will be a ‘lean start-up’;
  • the analytics must be ‘IIA-proof’;
  • security and privacy have to be ensured;
  • a sustainable top-down approach, rather than data mining and ad-hoc analytics.

This meant that a structured approach had to be applied, combined with the willingness to fail fast and often. In the initial steps of the process, technical experts were insourced to support the initiative in combination with process experts. With a focus on enabling technical capabilities, projects were started to run analytics projects in several countries, and in parallel develop technical platforms, data models and analytics methodologies.

With successes and failures, lots of iterations, frustrations and, as a result, lessons learned, this evolved into a comprehensive vision and approach to audit analytics covering technology, audit methodology, learning & development and supporting tooling & templates. At the same time, it resulted in an approach that is in line with the organizational context, fit for its purpose and compliant with general requirements such as the GDPR.

Meanwhile, the results of the analytics projects and their contribution to audits were interesting and promising: by the end of 2017 a framework of 120 analytics had been defined, of which 65 were made available as standard, validated scripts.
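To give a feel for what such a standardized, validated script could look like, the sketch below flags potential duplicate invoices with pandas. The function and the column names (vendor_id, amount, invoice_date, invoice_number) are hypothetical and only illustrate the idea; they are not Randstad's actual analytics.

```python
import pandas as pd

def duplicate_invoice_check(invoices: pd.DataFrame) -> pd.DataFrame:
    """Flag potential duplicates: same vendor, amount and date, but more than
    one distinct invoice number. Column names are illustrative assumptions."""
    key = ["vendor_id", "amount", "invoice_date"]
    candidates = invoices[invoices.duplicated(subset=key, keep=False)]
    return (candidates.groupby(key)
                      .filter(lambda g: g["invoice_number"].nunique() > 1)
                      .sort_values(key))

# Example usage (hypothetical file):
# exceptions = duplicate_invoice_check(pd.read_csv("invoices.csv"))
```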

Drivers for successful Audit Analytics

Looking back at the journey and summarizing the central perspective, a model emerged identifying drivers for successful audit analytics. The journey the Randstad risk & audit function has gone through addressed challenges in five categories: organizational fit; internal organization of the audit analytics; supporting technical tooling and structure; audit methodology alignment and integration; and skills, capabilities and awareness.

At different times in the journey, different types of challenges emerged. For example, when Randstad started expanding methodologies and technologies, the next challenge became the fit and application within the organizational context. This in turn translated into developments in the audit analytics organization. Then the human component became a point of attention, translating into addressing audit methodology alignment and related skills and capabilities. In turn, this was further supported by updating the technical structures. During our journey it became clear to us that these challenges were not individual in nature, but interact with each other and form an interplay of success drivers.

Next, we will go through the five identified challenges in more detail and illustrate what they mean in the Randstad context. In the following chapter we transform these challenges into an early model that will be the starting point for understanding which drivers have an impact on the success of audit analytics.

Implementing Data Analytics: key drivers for success

In the previous chapter, we discussed how Randstad experienced the implementation of audit analytics and the challenges and learnings that have been gained over the years. In this chapter we will further dissect the identified challenges, and translate them into drivers for success in implementing audit analytics.

Organizational fit

The first question one might want to ask before setting up an analytics program is: 'how does this fit within the organizational structure?'. The organizational structure is something that cannot be changed overnight, and yet it will greatly determine how your data analytics program must be set up. If this is not thought through carefully in the setup phase of audit analytics, choices made concerning technology or analytics governance may turn out to be a poor fit for the organization. We will outline the key factors that have an impact on the way audit analytics has to be set up:

  • centrally organized versus decentralized organization;
  • diverse IT landscape regarding key systems versus a standardized IT infrastructure;
  • many audit topics versus a high focus on a few specific audit topics;
  • uniform processes among local entities versus diverse processes among local entities;
  • similar laws, regulations and culture versus a diverse landscape of laws, regulations and culture;
  • aligned risk appetite versus locally defined risk appetite.

We do not perceive the organizational structure as a constant that can never change, but it is not something that can be greatly influenced just to set up the analytics organization. Therefore, one must consider how the analytics organization can best fit the overall organizational structure, rather than the other way around. The lesson learned regarding this challenge is that it is vital to assess how the organizational structure may impact your methodology and setup for audit analytics. This can have significant technical as well as procedural implications for setting up your audit analytics organization.

Audit analytics organization

The way the audit analytics organization is set up is also a key driver for success. There are many choices to make when setting up the organizational structure and defining the roles and responsibilities within the analytics or audit team. The key decision is perhaps whether you want to carve out the analytics execution and delegate it to a specialized analytics team, or assign the role of data analyst to the auditor. This decision will have a great impact on many other things that have to be considered when setting up a data analytics program. Other key factors that we have identified in setting up the analytics organization are:

  • central data analyst team versus analytic capability for the local auditor;
  • a central data analytics platform or a limited set of local tooling;
  • separate development, execution and presentation versus generic skillset.

Audit analytics technical structure

The third key driver we mention is the technical structure that enables the auditors to execute the data analytics. As noted in the introduction, this topic usually plays a central role at seminars and conferences and in the literature on data analytics. In our opinion it is one of many drivers for success, and not necessarily the most important one. Decisions made for the previous two drivers will have a big impact on the choices to be made in this area. Areas to consider are:

  • implementing a central platform or work with local tools;
  • implementing an advanced data analytics platform versus choosing low entry data analytics tooling;
  • using advanced data visualization tools or using low entry visualization tools.

Audit methodology alignment and integration

The fourth key driver is, in our experience, the one that is overlooked the most. People tend to see it as a given that auditors will embrace data analytics once it is widely available to them. On the contrary, we have seen that auditors are somewhat reluctant to change their classical way of working into a more data-driven audit approach. The way data analytics is integrated within the audit program or audit methodology is a pivotal factor for success. The right decisions regarding the following topics need to be made:

  • integrate the analytics in the audit methodology through a technology-centered approach (technology push) versus an audit-centered approach (technology pull);
  • data analytics as a serviced add-on to the audit approach versus a fully integrated audit analytics approach;
  • data analytics as pre-audit work to determine focus and scope versus data analytics as integral control testing to substantiate audit findings (or both!).

Skills, capabilities and awareness

The last driver for success is the skillset the auditor needs to fully leverage data analytics. This skillset covers not only the execution of the analytics, but also how to present the analytic results and explain their implications to the business and the auditee. It is vital that the message comes across clearly and that it is clear how the analytics support the audit findings. This requires both technical knowledge of the analytics process and generic audit know-how, but most importantly it requires in-depth knowledge of the data on which the analytics have been executed. The following key choices need to be made:

  • relying on well-equipped data analysts to execute and deliver audit analytics versus training the auditor in order to equip him or her to execute audit analytics;
  • relying on data analysts to interpret and understand analytics results and its impact on the audit versus relying on the auditor to understand the data and the executed analytics and its impact on the audit.

The crucial choice is whether you will rely on the auditor to handle the audit analytics, or whether the auditor will be assisted by a data analyst who performs the technical part. If you choose the latter, you will face the challenge of creating an effective synergy and collaboration between the two, where the analyst understands the data and the executed analytics (and visualizations), and the auditor can translate the implications of the analytic results into audit findings. Creating such a synergy might not seem a major challenge at first. We have, however, experienced that data analytics is not part of the DNA of all auditors, and therefore it might be a bigger step to take than initially presumed.

Model: drivers for success

When we summarize our experiences in a visual overview, we arrive at the model presented in Figure 2 below. The model shows the interconnection between the five drivers that have an impact on the success of your data analytics implementation. A decision regarding one of the key drivers will ultimately influence the decisions regarding one or more of the other drivers. Each organization that is about to implement data analytics must find the right balance between the identified drivers for success.

Figure 2. Key drivers for success.

Implementing data analytics – a balancing act

Does having this model make audit analytics successful? For Randstad, it is not this model that brought audit analytics to where it is today. The model emerged from assembling all the lessons learned and the root cause analyses of all the 'failures' the organization experienced. As such, it is a reflection of what management has learned on the journey so far. At the same time, it is a means to an end: it facilitates discussions in evaluating where an organization stands today regarding analytics, and which key challenges need to be addressed next. It thereby supports management conversations and decision-making going forward.

Ultimately, it is all about bringing the configuration of the different drivers into balance. In implementing audit analytics, Randstad has, so far, failed repeatedly. In most cases the lessons learned confirmed that where there is pain, there is growth, and identified an imbalance in the configuration of the drivers as the root cause.

There are multiple factors to be addressed during the implementation. These factors can sometimes be opposites of each other, creating a 'devilish elastic'. In the Randstad journey a big push was made to set up a central solution, facilitate a technical platform and integrate analytics into the audit methodology. In the course of 2017 a lesson was learned from this push: the projects that were run centrally yielded positive results, but this did not translate into the global risk and audit community running audit analytics throughout their internal audit projects.

Randstad has put the technology and methodology in place; the key is now to also bring the professionals to the same page in the journey. As audit analytics, in its full potential, fundamentally changes the way you run audits, it also means changing the auditor's definition of how to do the work. Overwriting a line of thinking that has been there for a very long time is maybe one of the biggest challenges there is. Therefore, when asked what Randstad is currently doing to implement audit analytics, and what the current status is, the answer is as follows: 'To get audit analytics to the next level, we are currently going through a soft change process with our risk & audit professionals, to get them to not only embrace the technology, but also the change in the way we perform our audits as a result. Overall, we are at the end of the beginning of implementing audit analytics. Ambitious to further develop and grow, we are looking forward to failing very often, very fast.'

Concluding remarks

In this article we summarized the importance of data analytics in internal audit, the road that Randstad has taken to implement audit analytics, the challenges and lessons learned along the way, and how these lessons can be translated into a model of key drivers for success. The goal of this article is to codify our journey and the corresponding lessons learned, and to share the experience with the audit community so that it can benefit from it.

The important message we want to convey is that implementing audit analytics is far from just a technological challenge. Implementing data analytics brings a large change to the internal audit organization. The auditors need to change their way of working and make a paradigm shift towards a data-driven audit. One should not underestimate the implications of such a change by focusing mainly on the technological side of data analytics.

Reflecting on these developments and audit analytics initiatives, one should really take this paradigm shift into account. What does audit analytics mean in the old world, but also in the new world?

Ultimately, a key challenge is how to go through the paradigm change, and how to support (or not support) auditors in making it. Understanding and embracing audit analytics might not be their biggest challenge! This is something Randstad Global Business Risk & Audit is working on, daily!

(Data) science in financial audits

A data-driven audit where advanced modeling will be the core of the audit: we briefly discuss how cheaper computing power, academic research and open-source software development are important in realizing this future and what we should consider when we build models for the purpose of a financial audit.


Auditor: “Hi Siri, I am looking at the financial report of Shell. What do you think?” – Siri: “I’ve used 60,000 public data sources and run 1.5 million simulations. The financial numbers of Shell are very likely to be a fair representation given the current economic circumstances. However, it seems that their margins have increased, which cannot be explained by any other available data. I would recommend asking how they’ve achieved that.”

A financial audit invoked through Siri: is this a crazy idea or the audit of the future? You might be wondering how we can build this future. In this article we will discuss three aspects that could help us realize it:

  1. Economics of computer models: why would modeling make sense from an economic perspective?
  2. Academia and open-source development: how can audit innovation leverage advances made in other research areas? More specifically, how could we utilize the faster moving academic and open-source community better?
  3. Data and modeling: how can we start building models that can be used in audits?

Economics

Siri used 60,000 data sources and performed 1.5 million simulations to assess whether the financial numbers made sense. Storing that amount of data and designing and running those simulations is costly. Therefore, using simple economics, we must assess whether it is profitable to use computer simulations to assess the financial numbers.

In 1966-67, Benston [Bens66] and Jensen [Jens67] discussed the application of a regression model that could estimate financial positions. The cost of using this model was around 30 USD, and it was considered an efficient tool ([Bens66]). If we compare these costs with the computer models we use nowadays, we see an astonishing drop in cost. Take, for example, Tesla's Autopilot. To drive your car, it continuously collects sensor data and performs all kinds of computations to keep the Tesla on the road. The cost of the Tesla Autopilot would have been astronomical if prices had remained at their 1967 level. An economically viable Tesla Autopilot at that time would have had fewer sensors and less computational power, making it challenging to keep your car on the road. We require a certain minimum amount of data and computational power for an algorithm to be useful.
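As a present-day counterpart to such a cost regression, the sketch below (a minimal Python example) builds an expectation for a cost account from operational drivers and compares it with the booked amount; all drivers, figures and the booked amount are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative monthly history: drivers (headcount, machine hours) and booked cost (kEUR).
X = np.array([[50, 1200], [52, 1250], [55, 1300], [60, 1400],
              [58, 1380], [61, 1450], [63, 1500], [65, 1550]])
y = np.array([410, 420, 445, 480, 470, 495, 505, 520])

model = LinearRegression().fit(X, y)

# Expectation for the month under audit, given that month's drivers.
expected = model.predict([[70, 1700]])[0]
recorded = 610  # hypothetical booked amount
print(f"expected {expected:.0f} kEUR vs recorded {recorded} kEUR")
# A large, unexplained gap between the expectation and the recorded amount
# would prompt follow-up work by the auditor.
```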

Similarly, for audit models we need to know whether it is possible to reach a satisfactory level of accuracy, and even then we need to consider the cost-benefit perspective. Given the recent pricing of computing power and storage, it is likely that running data-hungry models is viable from an economic perspective. We could enhance the auditor by enriching the audit workflow with machine-learning models wherever a simple question can be answered: "Is the machine prediction cheaper than human prediction?". In scenarios where the answer is affirmative, we will start building models, often replacing mind-numbing tasks for the auditor, so that the auditor can focus on the more challenging tasks at hand (e.g. account valuation). Even when the answer is affirmative, we need to consider the cost of developing such a solution. In the next section we explain how audit innovation could benefit from academic research and open-source development to significantly shorten the development timeline.

Audits and technology

Most technical developments in audit research coincide with research developments in other fields. A good example is the application of time series models in audits. Researchers started to develop more advanced econometric analyses such as time-series analysis ([Box70]), and this research was applied to the audit by Kinney in 1978 ([Kinn78]). This is not unique: similarly, in the field of computer science and database technologies, we see applied research in audits ([McCa82], [Groo89]), which can now be related to the field of IT auditing. Researchers in the audit field cleverly try to incorporate technological advances from other fields into audits. This raises the question of which research wave we can catch in order to take audits to the next level.
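The sketch below illustrates this kind of time-series analytical review in Python: a simple ARIMA model is fitted on a synthetic revenue history, and recorded months are checked against the model's 95% expectation interval. The data, the model order and the interval are illustrative assumptions, not a prescribed audit procedure.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly revenue history (36 months, in kEUR) with trend and noise.
rng = np.random.default_rng(0)
history = 1000 + 5 * np.arange(36) + rng.normal(0, 20, 36)

# Fit a simple ARIMA model and forecast the three months under review.
result = ARIMA(history, order=(1, 1, 1)).fit()
forecast = result.get_forecast(steps=3)
lower, upper = forecast.conf_int(alpha=0.05).T

recorded = np.array([1195, 1210, 1400])  # hypothetical recorded revenue
outside = (recorded < lower) | (recorded > upper)
print("months outside the 95% expectation interval:", np.where(outside)[0])
```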

In the statement made by Siri, we see that numerous public datasets are cross-checked with advanced simulations in order to determine if the numbers make sense or not. Data and modeling are the key ingredients that enabled Siri to do this analysis. Before we discuss the models, we first discuss how scientific advances can be used in practice. Therefore, we must understand how an idea transfers from a paper to the open-source world. Furthermore, an interesting aspect is that these open-source communities are moving faster and are now more impactful than ever before. To show the connection between the academic world and the open-source world, we discuss a couple of open-source projects below.

In 2003, Ghemawat et al. [Ghem03] published the Google File System paper with the Association for Computing Machinery (ACM), describing a way to store big data on commodity hardware. Doug Cutting implemented this in 2006 ([JIRA06]) as an open-source project which is now known as Hadoop. Hadoop is currently widely used by many large organizations to store big data. Storing a lot of data naturally created the urgency to start performing calculations on these large data sets. UC Berkeley's AMPLab created Spark, a distributed processing framework, in 2009. Spark was open-sourced in 2010 and had nearly a thousand contributors in 2015 ([Zaha10], [APAC14]). Another open-source project initiated by Google is Kubernetes, a way to automate the deployment and scaling of applications. Kubernetes started as an open-source project in 2014 and is based on Google's fifteen years of experience in scaling applications. In 2018, Kubernetes won the 'Most Impact' award with nearly 20,000 contributors and almost a million comments in the code management platform ([KUBE18]).
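To illustrate how such an open-source framework could be applied to audit data, here is a minimal sketch using Spark's Python API: it scans a full year of journal entries for postings made outside office hours. The file path, column names and the notion of 'office hours' are hypothetical assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("journal-entry-analytics").getOrCreate()

# Hypothetical dataset: one year of journal entry lines stored as Parquet.
entries = spark.read.parquet("/data/journal_entries/2018/")

# Full-population test: postings per user outside assumed office hours (06:00-22:00).
night_postings = (entries
                  .withColumn("hour", F.hour("posting_timestamp"))
                  .filter((F.col("hour") < 6) | (F.col("hour") > 22))
                  .groupBy("posted_by")
                  .agg(F.count("*").alias("n_postings"),
                       F.sum("amount").alias("total_amount"))
                  .orderBy(F.desc("n_postings")))

night_postings.show(20)
```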

From the above-mentioned examples, it becomes clear that research finds its way into open-source implementations, and that a rapidly growing community supports and improves these products. By using these open-source products, we can leverage decades of research progress and implementations provided by some of the best engineers the world has to offer, something that is nearly impossible for any one company to reproduce. This is the big wave we can ride to take audits to the next level and benefit from the collaboration of thousands of people around the globe. Although this mainly tells us that we should use open-source software to speed up development, it does not tell us how we should develop the models. In the next section we explain why modeling can be used to audit an organization, but more importantly why the designers of these models should be aware of potential pitfalls.

Why are models useful?

A model, which can be anything from a machine-learning model or an econometric model up to a physics model, is a simplified description of reality in mathematical terms. Modeling and simulation are extremely powerful tools for understanding a system and answering questions about that system ([STAN18]). Recent research by MIT [MIT14] provides an excellent illustration of how powerful these concepts are in doing something that seems to be impossible.

Imagine a situation with two rooms separated by soundproof glass. In one room there is a radio and a plant; in the other room, a person. The person wants to listen to the radio through the soundproof glass. Sounds, pun intended, impossible, right? With some simple tools and some mathematics, however, we can solve this problem. Recent research [MIT14] shows that a camera and an object in the other room, e.g. a glass, a bag of potato chips or a plant, are enough to reconstruct the sound in that room. The researchers used the knowledge that sound travels through air and impacts the objects in the room. They measured this vibrational impact on the plant with a high-speed camera and used the video to reconstruct the sound signal. Various mathematical concepts are used in this scenario to approximate the information of interest. This means that the music sounds similar, but is not exactly the same. In some models we can increase the approximation quality by doing more computations, and this nicely connects to the economics of modeling: as a general rule of thumb, the more accurate the models are, the more costly they become.

Simulation and modeling are used in many areas, ranging from physics to biology and chemistry, but also in the banking industry to calculate the risk exposure of an investment portfolio. Modeling techniques from agent-based models up to deep neural networks are developed for this purpose. These models are used to make accurate predictions about the system, or to simulate the system under various conditions and study the output. In a similar fashion we can consider a company as a system that we want to understand better, and having a model would enable us to ask questions about it so that we achieve this result. In an audit, we can use cleverly constructed models to recover information of interest in a creative way. So, let's be creative, very creative! In this way we can think of truly new ways of obtaining the information of interest for audits. Before we continue with creative implementations of models in audits, we briefly discuss how these models relate to the current way an auditor operates. This is helpful in assessing which models are helpful and which are not.

Auditors study information obtained from the client, which helps them create an understanding of how the organization operates. This understanding is then used to audit the client and assess whether the financial statements are a fair representation. Next, we look at the objective of models. One of the purposes of computer simulations of a model is to gain a better understanding of the data (information from the client) we already have. The models can be used to retrodict, that is, to infer past events using present information. This creates a deeper understanding of how these events occurred ([STAN18]). The application of models to retrodict is not very different from the objective of the auditor; only the means by which we explain the past year's events are completely different. The understanding of the auditor is closely related to the mathematical model: both are used to create a deeper understanding of the subject, namely the financial statements.
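A minimal sketch of such retrodiction is shown below: a Monte Carlo simulation generates the gross-margin distribution implied by assumed drivers and compares it with a reported figure, echoing Siri's remark about the unexplained margin increase. All distributions, parameters and the reported margin are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 1_000_000

# Hypothetical driver assumptions, e.g. elicited from prior years and sector data.
volume = rng.normal(1_000_000, 50_000, n_sim)   # units sold
price = rng.normal(12.0, 0.4, n_sim)            # sales price per unit
unit_cost = rng.normal(9.5, 0.5, n_sim)         # cost per unit

revenue = price * volume
margin = (revenue - unit_cost * volume) / revenue  # simulated gross margin

reported_margin = 0.29  # hypothetical figure from the financial statements
share_at_or_above = np.mean(margin >= reported_margin)
print(f"share of simulations at or above the reported margin: {share_at_or_above:.2%}")
# A very small share suggests the reported margin cannot be explained by the
# modeled drivers alone and warrants a question to management.
```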

However, there is one important element to keep in mind: the objective is to understand. Most companies are sitting on a big pile of data, and it is tempting to mine this data to uncover patterns and use these patterns to make predictions. Nevertheless, the prediction itself is not the objective; understanding is. Professor Peter Sloot refers to this as 'Big Nonsense', and illustrates it with an excellent example ([Sloo16]):

“Astronomers of the Maya civilization and astronomers of the Babylonian civilization were brilliant in predicting astronomical events. For instance, from meticulous observations of the Sun, Moon, Venus and Jupiter they were able to predict the 584-day cycle of Venus or the details of the celestial track of Jupiter with astonishing accuracy. Yet they had no clue about our heliocentric solar system, they believed that the earth was flat, and they were completely ignorant of the real movement of stars and planets while being convinced that the sky was supported by four jaguars, each holding up a corner of the sky.”

This example illustrates one of the potential dangers of uncovering patterns from historical data, which we now know as Data Science or Big Data ([Sloo16]). The patterns uncovered might trick you into believing something that is not true. Therefore, the analyses need to be enriched with computational predictive models so that we can falsify or confirm our interpretations ([Sloo16]).

Building a model for the purpose of an audit will be challenging. Nevertheless, if the model is constructed properly, it can be used to understand an organization's activity over the past year. Parts of the audit could be replaced by such models collecting audit evidence. It may even be possible to reuse these models between audits in the same industry and calibrate them to the specific needs of each organization. In any case, these models can tirelessly analyze countless data sources and perform endless simulations, which cannot be achieved with manual labor. A hybrid audit workflow, in which the auditor and the model complement each other, could greatly enrich the work of the auditor. But in order to develop the models, we need to perform research that can falsify or confirm them, and an essential ingredient for this research is the availability of data. Therefore, clients play a role in the development as well, by sharing their data and providing feedback.

Conclusion

We started this article with an imaginary situation where we asked Siri to audit the financial statements. Although this scenario is not yet a reality, we do see that technological developments are moving faster than ever before. Leveraging the advances made by the academic and open-source communities can shorten the development timeline dramatically. Furthermore, modeling is economically viable in many areas. Nevertheless, due to the nature of the audit, we must be cautious about what kind of models we use and how we use them; we do not want to fool ourselves into believing something that is not true. We must also realize that an important step still has to be taken: developing this future requires academia, engineers, auditors and, finally, audit clients to collaborate intensively. We need the data and feedback of the clients to build these models and to falsify or confirm them. The auditee plays a crucial role in developing this future, so we are counting on them as well!

References

[APAC14] APACHE, The Apache Software Foundation Announces Apache™ Spark™ as a Top-Level Project, Apache.org, https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces50, 2014.

[Bens66] G.J. Benston, Multiple regression analysis of cost behavior, The Accounting Review, Vol. 41(4), pp. 657-672, 1966.

[Box70] G.E. Box and G.M. Jenkins, Time Series Analysis Forecasting and Control, Wisconsin University, Madison Department of Statistics, 1970.

[Ghem03] S. Ghemawat, H. Gobioff and S.T. Leung, The Google file system, ACM, Vol. 37, No. 5, pp. 29-43, 2003.

[Groo89] S.M. Groomer and U.S. Murthy, Continuous auditing of database applications: an embedded audit module approach, Journal of Information Systems, Vol. 3(2) pp. 53, 1989.

[Jens67] R.E. Jensen, A Multiple Regression Model for Cost Control – Assumptions and Limitations, The Accounting Review, Vol. 42(2), pp. 265-273, 1967.

[JIRA06] JIRA, Initial import of code from Nutch, Apache.org, https://issues.apache.org/jira/browse/HADOOP-1, 2006. Accessed on: 29-10-2018.

[Kinn78] W.R. Kinney Jr, ARIMA and regression in analytical review: an empirical test, Accounting Review, pp. 48-60, 1978.

[KUBE18] Kubernetes, Kubernetes Wins the 2018 OSCON Most Impact Award, Kubernetes.io, https://kubernetes.io/blog/2018/07/19/kubernetes-wins-2018-oscon-most-impact-award/, 2018.

[McCa82] W.E. McCarthy, The REA accounting model: a generalized framework for accounting systems in a shared data environment, Accounting Review, pp. 554-578, 1982.

[MIT14] Larry Hardesty, Extracting audio from visual information, MIT News, http://news.mit.edu/2014/algorithm-recovers-speech-from-vibrations-0804, 2014.

[Sloo16] Peter M.A. Sloot, Big Nonsense: the end of scientific thinking is near…, Peter-Sloot.com, http://www.peter-sloot.com/blog/big-nonsense-the-end-of-scientific-thinking-is-near, 2016.

[STAN18] Stanford Encyclopedia of Philosophy, Computer Simulations in Science, https://plato.stanford.edu/entries/simulations-science/, 2013. Accessed on: 29-10-2018.

[Zaha10] M. Zaharia, M. Chowdhury, M.J. Franklin, S. Shenker and I. Stoica, Spark: cluster computing with working sets, HotCloud, Vol. 10 (10-10), pp. 95, 2010.

Horizontaal Toezicht as the basis for trust in the healthcare chain

The introduction of Horizontaal Toezicht (Horizontal Monitoring) in hospitals appears to be yet another control regime, causing hospitals to spend even more time on controls instead of on patient care. In this article the authors present a different, workable perspective. It is not the controls that should be leading, but principles such as the will to improve, shared responsibility and a proper balance between hard and soft controls.

Introduction

In 2017, Compact published an article on the introduction of Horizontaal Toezicht in Dutch hospitals [Uter17]. That article argued for giving IT a more prominent role, so that the benefits of Horizontaal Toezicht can truly be realized. One year later, virtually all hospitals have started Horizontaal Toezicht projects, and some will start shortly; the incentive scheme of the health insurers has certainly contributed to this. Institutions in mental healthcare will also start actively working with Horizontaal Toezicht as of next year.

The question addressed in this article is how Horizontaal Toezicht can be introduced successfully. Horizontaal Toezicht is not a stand-alone project, nor is it aimed at yet more controls. This article looks at the chain from which Horizontaal Toezicht should be viewed, as well as the associated risks. Based on this chain and these risks, we show how risks can be controlled without implementing 'even more' controls in order to introduce Horizontaal Toezicht. Finally, we look ahead and present three principles that can be applied when implementing Horizontaal Toezicht.

Horizontaal Toezicht in the chain

An important success factor for such a project is that a hospital is not solely responsible for the successful implementation of Horizontaal Toezicht: the entire chain in which Horizontaal Toezicht takes place must be involved. In every part of the chain, processes that contribute to Horizontaal Toezicht may be outsourced. In this article we show what this chain looks like, which processes are performed where in the chain, which risks exist, and above all how the accompanying risks can be controlled.

Outsourcing processes, and thereby creating a chain, has a direct impact on how an organization's risk management is set up. Management or the board cannot directly control the outsourced processes; another form of control must therefore be put in place to obtain assurance about the reliable operation of those processes.

Looking at the chain within Horizontaal Toezicht, it often starts with the health insurer. This party usually performs a large number of checks on the invoiced care. The development in the Dutch healthcare landscape is that these checks on the care delivered and invoiced by hospitals are increasingly carried out by the care provider itself, after which the health insurer must rely on them. This makes the health insurer the first link in the chain.

Once the health insurers can better rely on the reliability of the healthcare institution's registration and invoicing, it can be agreed that the institution accounts for this reliability by means of a 'service organization control' report (hereinafter: assurance report). Such a report is tailored to the risks of the health insurer and focuses on the control measures the institution has put in place.

Because healthcare institutions have increasingly outsourced their (IT) processes, part of the checks performed by the institutions is outsourced as well. As a result, the institution no longer manages a number of risks itself, and often only limited service-level agreements have been made with suppliers. When the risks that lie outside the walls of the institution are included in its overall risk management, we see that the maturity and control of the internal organization grow, which makes it easier to achieve the goals of Horizontaal Toezicht. Horizontaal Toezicht is therefore not only the responsibility of the healthcare institution and the health insurer, but of the entire chain. The links in the chain must, just like the institutions, account for themselves. This too can be done by means of an assurance report, creating a chain of assurance.

Horizontaal Toezicht Zorg

Health insurers check healthcare claims from institutions in great detail. Many of these claims are sent back to the institutions for correction. This leads to a lot of rework at the institutions and thus to a high administrative burden. As a result, institutions currently need more control at the front end, and trust between the health insurer and the institutions needs to be built.

Horizontaal Toezicht Zorg was set up to improve that relationship. The concept, analogous to Horizontaal Toezicht at the Dutch tax authority (Belastingdienst), was first introduced in the healthcare sector in 2016, primarily in medical specialist care (MSZ). In 2018, Horizontaal Toezicht was also launched in mental healthcare (GGZ). It is an initiative of three parties: the Dutch Hospital Association (NVZ), Health Insurers Netherlands (ZN) and the Netherlands Federation of University Medical Centres (NFU).

Horizontaal Toezicht Zorg challenges healthcare institutions to register and declare correctly 'first time right'. The intended benefit is that the health insurer needs to check less, and the institutions therefore need to correct less.

Risk management in the chain

A chain arises when a healthcare institution procures services from a service organization (a supplier of services) for the registration and invoicing of the care delivered and the related checks. A comparable example is the hospital caterer that has the hot meals prepared by another organization. The chain then consists of the hospital, which has formally outsourced the provision of hot meals for its patients to a caterer, and the caterer itself, which in turn has outsourced the preparation of those meals.

The process of registering and invoicing delivered care works the same way. As outlined in the introduction, a healthcare institution can be regarded by the health insurer as a service organization for delivering the care and for the checks on its correct registration and invoicing. The institution, in turn, has often outsourced IT processes to suppliers: think of hosting and maintaining the ZIS (hospital information system) and the EPD (electronic patient record), or of tools used to check for errors in the registration. The IT supplier may in turn have outsourced servers and data storage to an external provider. The Horizontaal Toezicht chain then looks as depicted in Figure 1.

Figure 1. The Horizontaal Toezicht chain.

Where processes are outsourced, risk sharing increases. As an organization, however, the healthcare institution remains ultimately responsible. The institution's management or board must determine how the risks in the chain relate to each other, and to what extent these risks are controlled. Besides identifying and assessing these risks, another important step is determining the risk appetite: the trade-off between the risks the institution wants to cover and the (residual) risks it is prepared to accept. In doing so, the institution must take the ultimate user/customer into account.

The risks that the institution recognizes and wants to (have) covered must be communicated within the chain. The service organizations then put measures in place for these risks, and subsequently communicate to what extent each risk is covered within the chain. Risk appetite is not only relevant for the depth of internal control, but also plays an important role in evaluating the residual risk, or the risk that arises when internal control is insufficiently effective.

Concretely, when a risk and the associated risk appetite are insufficiently aligned between chain parties, this means that:

  • the level of control by the outsourcing parties does not match the expectations of the user, so that risks are not covered or are insufficiently covered (ineffectiveness), or are covered in too much depth (inefficiency);
  • the residual risk, or the risk arising from findings of ineffective control, is evaluated incorrectly. As a result, the expected risk further down the chain increases, and the control measures may no longer match the risk to be expected.

The chain contains so-called hand-over moments, at which agreements and alignment are particularly important. One party may assume that the other covers a risk, and vice versa. Who takes responsibility for what, and who covers which risk? It is essential to oversee the risks and the control framework across the entire chain. Concrete agreements on this can be laid down in service level agreements (SLAs).

Definitions

  • Service organization: the organization that delivers the services.
  • User: the organization that procures the services and therefore benefits from the assurance report.
  • Assurance provider: the external auditor.
  • Assurance report: the end product of the assurance provider after performing an assurance engagement; it contains the assurance provider's independent opinion on the control measures and control objectives of the service organization.

Figure 2. Risk control across the entire Horizontaal Toezicht chain.

In addition, certain risks may not be fully covered by any one organization in the chain; instead, the service organization describes which measures the user organization must take to cover the risk. These measures are known as 'complementary user entity controls' (CUECs) and form part of the assurance report. In Dutch they are better known as the 'aanvullende beheersingsmaatregelen' (additional control measures) for the user organization: control measures the end user must implement in order to achieve the intended control objectives included in the assurance report.

As a user, it is important to study an assurance report thoroughly and to pay attention to the following aspects: the type of assurance report, the level of assurance, the reporting period, the scope of the report, the intended audience, the service auditor, any findings on the control objectives, the opinion, any deviations regarding the control measures, and any CUECs.

In the example of the caterer and the meal preparer, the preparer's control objective is that the meals are prepared hygienically, in accordance with the hygiene standards applicable in the hospitality sector. However, once the preparer hands the meals over to the caterer (the hand-over moment), the preparer can no longer be responsible for hygiene. The caterer will then have to implement a CUEC to ensure that all meals are heated and served hygienically. After all, the patient in the hospital who ultimately eats the meal trusts that it contributes to strengthening and recovery. Here too, one hundred percent certainty cannot be guaranteed, but the patient does have reasonable assurance that the hot meal meets the hygiene requirements. [Beek17] describes how the user organization can interpret and follow up the CUECs in an assurance report.

With assurance reports, parties in the chain can give each other insight into the extent to which they have designed and operated internal control over the agreed risks. Independent assurance creates trust based on demonstrability. An assurance report on internal control, tested by an independent auditor or accountant, provides an independent evaluation of the extent to which the control measures are performed and the control objectives (risks formulated in reverse) are achieved.

Internal control and its external testing cannot provide absolute assurance. Tolerances are applied both in performing internal control and in testing it, and sampling is used to balance reliability against the costs of setting up internal control and its external testing. An assurance report by an independent auditor therefore provides reasonable assurance. Trust between the health insurer and the healthcare institution (and its outsourcing partners) therefore requires more.

Trust prevails

To be 'in control' as a healthcare institution and to manage the risks in the chain, it is necessary to implement 'hard' control measures and to account for them through assurance reports. Ultimately, however, more rules and guidelines reduce individuals' sense of responsibility, which makes control through these 'hard controls' less effective ([KPMG09]).

These 'hard' measures only become truly effective in an environment that also pays attention to 'soft' elements, such as mutual relationships, agreements, trust and the credo that making mistakes is allowed. This is why soft controls are receiving more and more attention. This soft side can be supported by applying the nine 'Trust Rules' ([KPMG09]).

These 'rules' are visualized in Figure 3. A successful implementation of Horizontaal Toezicht within a healthcare institution therefore does not consist solely of implementing 'hard' control measures; explicit attention to the 'soft' side is indispensable.

Figure 3. The nine Trust Rules ([KPMG09]).

Hospitals also consider attention to relationships, agreements and mutual trust important and, for example, increasingly have a so-called 'soft control scan' performed. This provides insight into the 'soft' side of risk management. It is important that not only the administrative department of a hospital is involved, but also the medical specialists, the insurer and the software supplier.

To implement the nine Trust Rules, we advise healthcare institutions to enter into dialogue with the chain parties and jointly define concrete soft controls. Communicating openly about the soft controls creates understanding and awareness at each chain party, and thereby a greater chance that the Trust Rules will be observed throughout the chain.

Concrete examples that come up when implementing Horizontaal Toezicht are:

  • There is room to make mistakes, and these are discussed with the health insurer (trust rule: dare to experiment and learn from experience).
  • Agreements are made with the EPD suppliers about the control objectives, and these are made concrete in a Service Level Agreement (trust rule: define joint objectives and build trust with good rules).
  • Service level reports include information about the services procured and the quality of their delivery (trust rule: rely on informed trust, not blind trust). This is a joint initiative of the healthcare institution and the software supplier.
  • The health insurer and the healthcare institution meet periodically (trust rule: make contact personal), so that identified errors and shortcomings are discussed (trust rule: give each other responsibility and trust), with transparency and improvement as the goal.

The future of assurance in the Horizontaal Toezicht chain?

In summary, a chain of parties is jointly responsible for achieving reliable registration and invoicing (legitimacy, or 'rechtmatigheid'). Healthcare institutions are at the start of an era in which, on the one hand, it is 'normal' that only legitimate care is invoiced and, on the other hand, this is a shared responsibility of all chain parties. Mapping the chain, recognizing risks and making sound agreements about who covers which risk are all part of this. In combination with the 'Trust Rules', this leads to the confidence that legitimate care is indeed realized, without triggering an arms race of risks, measures and controls.

In our view, successfully introducing Horizontaal Toezicht within a healthcare institution is not a stand-alone project aimed at introducing more hard controls. Horizontaal Toezicht is a way of running an organization, founded on three key principles:

  1. the organization's will to improve its administrative processes;
  2. the awareness that this is only possible by collaborating within the chain: healthcare institution, health insurer and, for example, the software supplier;
  3. a good balance between the hard and soft sides of control measures, based on the 'Trust Rules'.

In the ideal situation, in which Horizontaal Toezicht is fully realized, the administrative burden and control pressure that healthcare institutions currently face are genuinely reduced.

It makes sense to set up and anchor Horizontaal Toezicht on the basis of these three principles, with all parties in the chain taking responsibility, so that it becomes clear who is responsible for what and overall trust in the chain grows. We consider it important to choose a balance of hard and soft measures right at the start of the Horizontaal Toezicht implementation process, in the context of legitimacy. Meanwhile, thought is being given to phases 2 and 3 of Horizontaal Toezicht, namely the 'appropriate use' and 'efficiency' of care. Once a foundation of trust is in place for phase 1, Horizontaal Toezicht will ultimately also contribute to the extension to 'appropriate use' and 'efficiency'.

References

[Beek17] Drs. J.J. van Beek RE RA and ir. R.P.A.C. van Vught, User control considerations voor ISAE3402-assurancerapportages. Ruimte en noodzaak voor verbetering, Compact 2017/3, https://www.compact.nl/articles/user-control-considerations-voor-isae3402-assurancerapportages/.

[KPMG09] KPMG, Trust Rules. Nine principles for a better balance between rules and trust, KPMG Audit Tax Advisory, http://www.ethicsmanagement.info/content/Whitepaper%20Trust%20Rules%20ENG.pdf, 2009.

[Uter17] Ing. D.R. Utermark RE and E. Tsjapanova MSc, Horizontal Monitoring. The future of medical invoicing, we are getting there, Compact 2017/3, https://www.compact.nl/articles/horizontal-monitoring/.

Dynamic Risk Assessment

Traditional risk assessment models, which assess risks based on their individual impact or likelihood, have been widely applied by many organizations. These models, however, fail to recognize the interconnections among risks, which may reveal additional assessment dimensions and more pertinent risk-mitigating actions. In response, Dynamic Risk Assessment (DRA) has been developed, based on proven scientific modeling, expert elicitation and advanced data analytics. DRA enables organizations to gain a deeper understanding of how risks impact the different parts of the firm and, subsequently, to design more effective and efficient risk-mitigating measures.

Introduction

One of the main lessons learned in risk management since the new millennium concerns previously unobserved levels of correlation. We have since learned that volatility itself is volatile, an attribute for which most financial mathematical models make insufficient allowance. This re-introduced the question of whether structural breaks exist in the system and how to allow for their presence in modeling, impairment assessments and other financial valuation techniques.

These developments pose ominous warnings for risk and financial managers alike, as they imply that risk assessments and asset valuations can drift to levels that, in certain cases, grossly underestimate risks and can cause valuations to gyrate violently.

Unless business, academic institutions and regulators get better at managing these cycles and corrections, businesses will be subjected to ever increasing public scrutiny, more intrusive regulation and regulators, new-found antagonistic behavior from the public, reduced market capitalization and greater friction costs in doing business.

Whilst this plays to a populist agenda, it does little to improve economic growth, which, as we have now seen, constitutes the tide that lifts most personal wealth boats. Risk management has a crucial role to fulfill: not only within the business, but also for its immediate and wider stakeholders.

The Dynamic Risk Assessment Approach

In response, KPMG has developed an innovative approach referred to as ‘Dynamic Risk Assessment’ (DRA). DRA is based on the science of expert elicitation, invented by the US military in the 1950s. At the time, the military faced the challenge of remaining ahead of Soviet military developments that were taking place behind an impenetrable iron curtain. The military, similar to risk managers today, faced a future that could not be credibly modeled by traditional means. It quickly learned that expert elicitation is a helpful alternative, to the point where it enabled the US military not only to match covert Soviet developments, but to stay ahead throughout the Cold War and thereafter.

More specifically, the US military discovered that, by (1) identifying experts scientifically and (2) conducting scientifically structured individual and group interviews, a credible future threat/risk landscape could be generated. DRA capitalizes on these insights and extends them into a third and fourth dimension: the experts are requested to provide their individual perspectives on how the risks can be expected to trigger or exacerbate each other, and the velocity with which they can affect the organization. With these risk perspectives an organization-specific risk network can be constructed to obtain key insights into the organization’s systemic risk landscape.
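To make the idea of an organization-specific risk network concrete, the sketch below shows one simple way such a network could be represented: each expert provides pairwise estimates of how strongly one risk triggers another, and these are averaged into a weighted, directed adjacency matrix. The risk names, weights and averaging scheme are illustrative assumptions only; the article does not prescribe a specific data structure or aggregation method.

```python
import numpy as np

# Illustrative risk register (assumed names, not from the article)
risks = ["cyber attack", "talent shortage", "conduct risk", "profitability"]

# Each expert scores, on a 0-1 scale, how strongly risk i triggers risk j.
# Two hypothetical expert matrices (rows = triggering risk, columns = triggered risk).
expert_1 = np.array([
    [0.0, 0.2, 0.6, 0.7],
    [0.1, 0.0, 0.4, 0.5],
    [0.0, 0.1, 0.0, 0.6],
    [0.1, 0.3, 0.2, 0.0],
])
expert_2 = np.array([
    [0.0, 0.3, 0.5, 0.8],
    [0.2, 0.0, 0.3, 0.4],
    [0.1, 0.1, 0.0, 0.5],
    [0.0, 0.4, 0.3, 0.0],
])

# A simple aggregation: average the elicited matrices into one risk network.
network = np.mean([expert_1, expert_2], axis=0)

# Report the stronger links in the resulting network.
for i, src in enumerate(risks):
    for j, dst in enumerate(risks):
        if network[i, j] >= 0.5:  # arbitrary threshold for a 'strong' link
            print(f"{src} -> {dst}: {network[i, j]:.2f}")
```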

These insights are presented back to the experts to obtain their views on the consequences and the opportunities available to the organization, whereupon the combined output is circulated to Those Charged With Governance to enrich their risk mitigation decision-making.

C-2018-2-Bolt-01-klein

Figure 1. The four steps of Dynamic Risk Assessment. [Click on the image for a larger image]

Dynamic Risk Assessment explained

Clustering of risks

Traditional risk models prioritize risks based on their individual impact or likelihood. Although these assessments are useful, the traditional model (Figure 2) fails to recognize the interconnections between risks (Figure 3) or the effect of clustering risks. As illustrated in Figure 4, a seemingly low ‘risk of failure to attract talent’ could potentially form part of a high-severity risk cluster, that of operational risks.

C-2018-2-Bolt-02-klein

Figure 2. Traditional depiction of risks (illustrative). [Click on the image for a larger image]

C-2018-2-Bolt-03-klein

Figure 3. Individual risks in a risk neural network, with clusters (illustrative). [Click on the image for a larger image]

C-2018-2-Bolt-04-klein

Figure 4. Aggregated likelihood and severity of a cluster of risks (illustrative). [Click on the image for a larger image]
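The article does not specify how the aggregate likelihood and severity of a cluster (Figure 4) are calculated. A minimal sketch, under the simplifying assumptions that the risks in a cluster occur independently and that expected severities can simply be summed, could look as follows; an actual DRA engagement would use a richer model, and all names and numbers below are invented.

```python
# Minimal, assumption-laden sketch of cluster aggregation.
# Assumes independence between risks and additive expected severities;
# the actual DRA methodology is not disclosed in this article.

cluster = {
    # risk name: (annual likelihood, severity in EUR millions) - illustrative values
    "data security": (0.10, 40.0),
    "conduct risk": (0.05, 25.0),
    "failure to attract talent": (0.20, 10.0),
}

# Likelihood that at least one risk in the cluster materialises.
p_none = 1.0
for likelihood, _ in cluster.values():
    p_none *= (1.0 - likelihood)
cluster_likelihood = 1.0 - p_none

# Expected aggregate severity, weighting each severity by its likelihood.
expected_severity = sum(p * sev for p, sev in cluster.values())

print(f"Cluster likelihood: {cluster_likelihood:.2%}")
print(f"Expected aggregate severity: EUR {expected_severity:.1f}m")
```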

Risk Influences

DRA also calculates the influences between risks, i.e. the extent to which the occurrence of one risk will trigger the occurrence of other risks, and vice versa. In this manner the three most influential risks can be identified (Figure 5). These are the risks that, when they occur, trigger most of the other risks across the network.

C-2018-2-Bolt-05-klein

Figure 5. The systemically most influential risks in the network (illustrative). [Click on the image for a larger image]

Similarly, the three most vulnerable risks can be identified (Figure 6). These are the risks most likely to occur following the occurrence of other risks in the network.

C-2018-2-Bolt-06-klein

Figure 6. The systemically most vulnerable risks in the network (illustrative). [Click on the image for a larger image]
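Continuing the kind of network sketched earlier (values again invented), one crude way to approximate the ‘most influential’ and ‘most vulnerable’ risks of Figures 5 and 6 is to rank risks by their total outgoing and incoming influence weights. This is an assumption made for illustration; the actual DRA calculation is not disclosed in this article.

```python
import numpy as np

risks = ["cyber attack", "talent shortage", "conduct risk", "profitability"]

# Weighted adjacency matrix: entry [i, j] = strength with which risk i triggers risk j.
network = np.array([
    [0.00, 0.25, 0.55, 0.75],
    [0.15, 0.00, 0.35, 0.45],
    [0.05, 0.10, 0.00, 0.55],
    [0.05, 0.35, 0.25, 0.00],
])

influence = network.sum(axis=1)       # total outgoing weight per risk
vulnerability = network.sum(axis=0)   # total incoming weight per risk

most_influential = sorted(zip(risks, influence), key=lambda x: -x[1])[:3]
most_vulnerable = sorted(zip(risks, vulnerability), key=lambda x: -x[1])[:3]

print("Most influential risks:", most_influential)
print("Most vulnerable risks:", most_vulnerable)
```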

Knowing the contagion forces between risks is important in a) selecting the key risks to focus on, and b) selecting the appropriate controls (in type and strength) to mitigate them. For example, since the most influential risks can trigger other risks, mitigation of the organization’s systemic risks should commence with these risks. For the most vulnerable risks, preventive controls should be preferred over detective controls.

Figure 7 depicts a consolidated view of individual risks, risk clusters, most influential and most vulnerable risks. Based on this view, we can design risk mitigating activities and assign related governance responsibilities.

C-2018-2-Bolt-07-klein

Figure 7. Putting it together (illustrative). [Click on the image for a larger image]

Risk mitigation

Risk mitigation addresses how a particular risk should best be responded to. Within DRA, mitigation is accomplished through the application of bow ties. Risk mitigation starts with determining the clusters and calculating their aggregate severities (Figure 4). This is followed by the identification of the most influential and most vulnerable risks, and thereafter the black swans: risks that display weak links to other risks yet, in aggregate, have catastrophic outcomes. DRA’s mitigation phase identifies the most vulnerable risks, as well as risks that could form part of a black swan chain, and assigns these to the CRO, as these risks have the gravest systemic consequences.

Responsibility for monitoring the most influential risks is allocated to the CEO, since these risks have the widest systemic reach. The CEO can then be challenged to convert them into competitive advantages. Risks that are individually insignificant and not connected to any significant outcomes are delegated to subordinates.

Data security, for instance, is classified in Figure 7 as a risk that carries a high severity individually and that, together with profitability, conduct risk and failure to attract talent, forms an Operational Risk Cluster with significant aggregate outcomes. The risk mitigating measures for data security can subsequently be designed as shown in Figure 8. The diagram shows the various related (external and internal) threats, the key controls (preventive, detective and recovery, with their current status of effectiveness) and the risk consequences. For each key control, responsibilities can be allocated across the three lines of defense: ownership lies with the first line, supervisory roles with the second line, and evaluations with the third line. The frequency of reporting back to Those Charged With Governance is determined by the criticality of the control and the significance of the risk.

C-2018-2-Bolt-08-klein

Figure 8. Data security bow tie (illustrative). [Click on the image for a larger image]
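As an illustration of how a bow tie such as the one in Figure 8 could be captured in a structured form, the sketch below models threats, controls (with type, line-of-defense ownership and effectiveness status) and consequences. All field names and example values are assumptions made for this sketch, not elements prescribed by DRA.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Control:
    name: str
    kind: str            # "preventive", "detective" or "recovery"
    owner_line: int      # 1, 2 or 3 (three lines of defense)
    effective: bool      # current status of operating effectiveness

@dataclass
class BowTie:
    risk: str
    threats: List[str]
    controls: List[Control]
    consequences: List[str]

    def gaps(self) -> List[str]:
        """Controls currently reported as ineffective."""
        return [c.name for c in self.controls if not c.effective]

# Illustrative data security bow tie (values are invented for the example).
data_security = BowTie(
    risk="data security",
    threats=["phishing of staff", "misconfigured cloud storage"],
    controls=[
        Control("security awareness training", "preventive", owner_line=1, effective=True),
        Control("security event monitoring", "detective", owner_line=2, effective=False),
        Control("incident response plan", "recovery", owner_line=1, effective=True),
    ],
    consequences=["regulatory fines", "loss of customer trust"],
)

print("Ineffective controls to report:", data_security.gaps())
```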

Conclusion

Traditional models assess risks based on their individual impact or likelihood, but fail to recognize the interconnections among risks. In response, KPMG designed the Dynamic Risk Assessment based on proven scientific modeling, expert elicitation and advanced data analytics. The Dynamic Risk Assessment enables users to gain a deeper understanding of how risks impact the different parts of the organization and, subsequently, to design more effective and efficient risk mitigating measures.

Autonomous driving cars are key for mobility in 2030

Due to mind-blowing technological developments in the automotive industry and changes in customer behaviour, mobility in 2030 will be dramatically different. The autonomous driving car enables highly optimised transportation and asset utilisation. Autonomous driving depends on decision making without human intervention, which is made possible by algorithms and Artificial Intelligence (AI). These algorithms and AI enable direct and automated access to mobility platforms for logistic planning and real-time interaction with objects and vehicles. As a result, new business models will arise and fulfil new and sometimes untapped customer needs. How we move is more and more facilitated by algorithms, based on our previous or expected behaviour, external triggers and/or events. The actual working of these algorithms should not be a black box, and assurance on algorithms will therefore be an item of interest in the coming years.

Introduction

The automotive industry is at the forefront of a radical digital transformation. The introduction of digital technologies impacts the industry in many different ways. Three forces will fundamentally transform how we move people and goods in the future, as can be seen in Figure 1 below.

  • Firstly, electrification: basically all conventional power sources will be replaced by electric powertrain technologies that will enable sustainable transport with zero emissions. Battery capacity will rapidly become more affordable and various alternatives for the conventional lithium-ion batteries are being introduced.
  • Secondly, the rise of ‘mobility-as-a-solution’ services, which includes the mind shift from ‘ownership’ to ‘usage’ of the car. Platform-based business models are reducing underutilisation amongst car owners, connecting the various forms of transport and offering a one-stop-shopping concept.
  • Thirdly, and most importantly, the technological breakthrough of autonomous driving cars. Many new car models are currently being introduced with these new driving features, initially incorporated into Advanced Driver Assistance Systems that provide ‘autopilot’ functionalities, but eventually with an increasing level of autonomy.

C-2017-4-Groen-01-klein

Figure 1. Overview of mobility ecosystem changes ([KPMGUK17]).

The future mobility ecosystem is very different

The mobility ecosystem will look very different in the future, but why? The current one-car-one-user model is very inefficient and costly, as most of the time a car is not used. The total fleet of passenger cars in the Netherlands consists of approximately 8.3 million vehicles ([BOVA17]). Most of them (approximately 95 percent, according to [Morr16]) sit idle and parked on any given day. People spend a lot of their time in traffic jams ([ANWB17]), and increasing urbanisation will further increase the time spent in traffic. In addition, the current TCO (Total Cost of Ownership) of a car is an important component of the total household budget. In summary, the current one-car-one-user model results in underutilised assets and makes the car an expensive form of transportation.
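To make the underutilisation argument tangible, the sketch below works through a simplified cost-per-kilometre comparison between a privately owned car and a shared autonomous vehicle with higher utilisation. All figures are hypothetical assumptions for illustration only, not data from the sources cited above.

```python
# Hypothetical annual figures for illustration only (not from the cited sources).
OWNED_ANNUAL_COST = 6000.0    # EUR: depreciation, insurance, fuel, parking, maintenance
OWNED_ANNUAL_KM = 12000.0     # km driven per year by a single owner

SHARED_ANNUAL_COST = 9000.0   # EUR: higher wear and cleaning, but no driver costs
SHARED_ANNUAL_KM = 60000.0    # km per year, assuming roughly five times the utilisation

owned_cost_per_km = OWNED_ANNUAL_COST / OWNED_ANNUAL_KM
shared_cost_per_km = SHARED_ANNUAL_COST / SHARED_ANNUAL_KM

print(f"Privately owned car: EUR {owned_cost_per_km:.2f} per km")
print(f"Shared autonomous vehicle: EUR {shared_cost_per_km:.2f} per km")
```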

The autonomous vehicle (AV) might be the main reason that the ownership model will shift from one-car-one-user to one-car-multiple-users. These new user communities do not care who owns the car, but see transportation from A to B as a service. The current car ownership model will therefore evolve into a ‘mobility-as-a-service’ model. Mobility subscriptions to car-sharing schemes provide consumers with more convenience, clear costs, fixed rates and kilometre packages. In addition, they eliminate the search for a parking place in front of your house in a fully congested city! As a driving licence might no longer be needed for these new vehicles, the number of potential users is almost unlimited and new user groups are being tapped. People who were previously unable to use a car for transportation can now travel in autonomous cars as passengers, as they no longer need to be able to drive the car themselves. The growth of travelled kilometres per person will therefore primarily take place in the 0-30 year old segment and the 50 and older segment ([KPMGUK17]); the AV will provide more independence for these segments. However, as these new solutions require high volumes of users and a large infrastructure (assets, technology, connectivity and data analytics), a consolidation of mobility service providers is likely. Service aggregators become a single point of contact for consumers, integrating the components required for autonomous driving. In summary, the autonomous car will enable:

  • lower costs per kilometre (no driver costs, new energy sources, benefits from economies of scale and longer vehicle lifetime);
  • more efficiently used vehicles (up to five times the number of kilometres compared to human operated cars);
  • tapping new markets with potential users who previously did not own a car, or were not able to use a car for transportation;
  • major ecosystems to consolidate (and therefore reduce in number), while the type of ecosystem player changes as new industries enter the market.

Autonomous driving will reinvent the value chain and new platform-based business models will be introduced

Besides being a means of transportation, the autonomous car is increasingly a platform that enables additional digital revenues, which will become a very important source of income for the automotive industry. In the Global Automotive Executive Survey of 2017 ([KPMG17]), 85 percent of the nearly one thousand car executives from 42 countries surveyed were convinced that their company will realize higher turnover through a digital ecosystem (up to ten times higher). In this ecosystem, payments might become a very important new activity to enable automated interactions between cars and objects, for example to pay for charging the battery at a public charging point. To be successful, companies in the mobility industry will have to change. Platform organisations are the ones that will actually change the automotive industry. These organisations are mainly driven by data and algorithms. As can be seen in Figure 2 below, the rules of the game for platform business models are significantly different.

C-2017-4-Groen-02-klein

Figure 2. How to be successful with platform business models ([KPMGNL17-1], [Alst16], [Chou15], [Evan16], [Tiwa13]).

The autonomous driving car is coming, developments are accelerating

Although we are still talking in the future tense, the technology is already available. Start-ups, for instance, increasingly compete to mass-produce next-gen radar and LiDAR systems (3D road images) to enable autonomous driving. Original Equipment Manufacturers (OEMs) are introducing these new technologies in their products, as can be seen in Figure 3 below. Many services have been launched and/or are gradually becoming available. Some of the ‘flagship events’ we have listed indicate a continuous and accelerating stream of developments with respect to autonomous driving. The autonomous driving car is rapidly maturing, and the industry is very active. In short, in recent years:

  • Ford acquired dynamic-shuttle firm Chariot (a mobility integrator);
  • General Motors acquired Sidecar (sharing a ride as a new transportation service);
  • Volkswagen invested in cab-hailing start-up Gett (on demand mobility as counterpart to taxi app Uber);
  • Europcar acquired car-sharing start-up Bluemove (car sharing and hourly rental service);
  • GM bought Cruise Automation (autonomous driving tech firm);
  • Ford invested in Pivotal, a data analytics company, to gain valuable customer insights;
  • Waymo, formerly the Google self-driving car project started in 2009, is ready for the next phase: its driverless transport services will soon be introduced to the public;
  • Audi introduced its first production car (the new A8) with level-3 autonomous driving functionality in September 2017.

C-2017-4-Groen-03-klein

Figure 3. KPMG UK Mobility 2030 Scenario Analysis – Stretch Case ([KPMGUK17]).

Fully accelerating, or do we anticipate short circuits and power outages?

As depicted above, technology developments are accelerating, and car manufacturers design new computers and algorithms at an increasing pace. But are we pushing this, or do we truly understand what customers value? Do we have the right skill set to integrate and aggregate the other ecosystem elements? If all the technology is readily available, what is limiting the speed of development?

  • change of regulations (approval at European and national level to drive autonomously);
  • new insurance models (shift of accountability from driver to car);
  • privacy regulations (which may limit the transfer and usage of data by platforms to offer personalised services);
  • service integrators and providers that take full responsibility for the whole value chain;
  • the roll-out of 5G mobile connectivity (specifically the required low latency) that enables communication between vehicles;
  • smart infrastructure with digital traffic beacons and road signs (to avoid misinterpretations by the algorithms);
  • a local area network standard (cross-brand communication);
  • customer adoption.

The autonomous car must be a ‘good robot’

In autonomous cars, the importance of algorithms and the way of checking these ‘digital brains’ should not be underestimated. As is often the case, the possibilities are endless, but do we still understand the choices that are made by technology?

We have seen an increasing number of applications where humans depend on algorithms. The autonomous car has to make ethical decisions, based on algorithms that have been developed by IT specialists. How do we know the car makes the right decision?

As we already explained in our report The connected car is here to stay [KPMGNL17-2], these topics will be increasingly important for safeguarding consumer trust. An example is the working of a navigation system. To guide the driver from A to B, it is essential that the system meets the following conditions:

  • the quality of the data needs to be good;
  • the algorithm, basically the instructions given, must be correct;
  • the route advice should serve the interests of the driver; it should be independent advice, and the algorithm should not, for example, prefer a route along a particular brand of fuel stations or shopping centres (a minimal illustrative check of this condition follows below).
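A minimal sketch of the kind of check hinted at in the last condition: given candidate routes, verify that the advice is driven only by the driver’s interest (here simplified to travel time) and not by whether a route passes a sponsored point of interest. The route data, scoring function and test are entirely hypothetical and only illustrate the idea of an independence check.

```python
# Hypothetical routes: (name, travel time in minutes, passes a sponsored fuel station?)
routes = [
    ("route A", 32, True),
    ("route B", 28, False),
    ("route C", 35, True),
]

def advise(candidates):
    """Illustrative advice function: pick the fastest route, ignoring sponsorship."""
    return min(candidates, key=lambda r: r[1])

# A simple independence check an assessor might run: the advice must equal the
# fastest route regardless of which routes pass sponsored locations.
advice = advise(routes)
fastest = min(routes, key=lambda r: r[1])
assert advice == fastest, "Advice deviates from the driver's interest"
print("Advised:", advice[0])
```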

Conclusion

Autonomous driving is reinventing the value chain, and new platform-based business models will increasingly be introduced. Besides being a means of transportation, the autonomous car is increasingly a platform that enables additional digital revenues, which will become a very important source of income for the automotive industry. However, the importance of the algorithms and the way of checking these ‘digital brains’ should not be underestimated.

Although accountants primarily evaluate financial statements and annual reports, the accountancy sector can certainly play a key role in this game. Assurance by accountants has been a proven method in business to provide confidence and trust to users. Accountants have the capabilities to provide ‘assurance of systems’ about the accuracy of the processes and algorithms. This requires a considerable investment in a rapidly changing market. However, public trust in this modern digital society should be handled with care, as this trust could very well be the bottleneck for a quick adoption of the autonomous driving vehicle. Might this public trust be key for a fast, digital transformation to the new mobility ecosystem?

The Netherlands is ready for the autonomous vehicle

The Dutch ecosystem for the autonomous vehicle is ready. The intensively used Dutch roads are very well developed and maintained. Many different large road construction projects have been finished during recent years ([RIJK17-1]). Other indicators, like the telecom infrastructure, are also very strong, as can be seen in the yearly OpenSignal top 10 for 4G coverage footprint [OPSI17], in which the Netherlands holds a high position.

In addition, the Dutch Ministry of Infrastructure has opened the public roads to large-scale tests with self-driving passenger cars and lorries. Since 2015, the Dutch rules and regulations have been amended to allow large-scale road tests ([RTL17], [TNO17]).

In collaboration with the national road authority (Rijkswaterstaat) and the Ministry of Infrastructure, the national vehicle authority (RDW) can issue exemptions for self-driving vehicles ([RIJK17-2]). Companies that wish to test self-driving vehicles must first demonstrate that the tests will be conducted in a safe manner. To that end, they need to submit an application for admission.

Public/private partnerships further accelerate the development of automotive expertise and innovation capacity. Strong examples are the Automotive High Tech Campus in the Eindhoven area and the affiliated TU Eindhoven university ([TUEI17]), which has a dedicated Smart Mobility strategic area. Start-ups such as Amber Mobility ([TECR17]) are challenging existing parties and broadening existing beliefs and behaviours.

References

[Alst16] M.W. Alstyne, S.P. Choudary, G.G. Parker, Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You, W.W. Norton & Co., 2016.

[ANWB17] ANWB, Filegroei zet door in 2016, ANWB.nl, https://www.anwb.nl/verkeer/nieuws/nederland/2016/december/filezwaarte-december-2016, 2017.

[BOVA17] BOVAG, Mobiliteit in cijfers, BOVAGrai.info, http://bovagrai.info/auto/2016/bezit/1-2-ontwikkeling-van-het-personenautopark-en-het-personenautobezit/, 2017.

[Chou15] S.P. Choudary, Platform Scale: How an emerging business model helps start-ups build large empires with minimum investment, Platform Thinking Labs, 2015.

[Evan16] R.S. Evans, R. Schmalensee, Matchmakers: The New Economics of Multisided Platforms, Harvard Business Review Press, 2016.

[KPMG17] KPMG, Global Automotive Executive Survey, KPMG, 2017.

[KPMGNL17-1] KPMG NL, Platform Advisory – Talkbook, Amstelveen: KPMG Advisory N.V., 2017.

[KPMGNL17-2] KPMG NL, The Connected Car is here to stay, Amstelveen: KPMG Advisory N.V., 2017.

[KPMGUK17] KPMG UK, Mobility 2030, KPMG UK, 2017.

[Morr16] D.Z. Morris, Today’s cars are parked 95% of the time, Fortune.com, http://fortune.com/2016/03/13/cars-parked-95-percent-of-time/, 2016.

[OPSI17] OpenSignal, State of Mobile Networks: Netherlands (September 2017), Opensignal.com, https://opensignal.com/reports/2017/09/netherlands/state-of-the-mobile-network, 2017.

[RIJK17-1] Rijksoverheid, Infrastructure PPP projects, Government.nl, https://www.government.nl/topics/public-private-partnership-ppp-in-central-government/ppp-infrastructure-projects, 2017.

[RIJK17-2] Rijksoverheid, The Netherlands as a proving ground for mobility, Government.nl, https://www.government.nl/topics/mobility-public-transport-and-road-safety/truck-platooning/the-netherlands-as-a-proving-ground/, 2017.

[RTL17] RTL Nederland, Brabant laat zelfrijdende auto’s testen op de openbare weg, RTL Nieuws, https://www.rtlnieuws.nl/geld-en-werk/brabant-laat-zelfrijdende-autos-testen-op-de-openbare-weg, 2017.

[TECR17] TechCrunch, Amber Mobility to launch self-driving service in the Netherlands by 2018, TechCrunch.com, https://techcrunch.com/2017/04/25/amber-mobility-to-launch-self-driving-service-in-the-netherlands-by-2018/, 2017.

[Tiwa13] A. Tiwana, Platform Ecosystems, Elsevier Science & Technology, 2013.

[TNO17] TNO, Truck platoon matching paves the way for greener freight transport, TNO.nl, https://www.tno.nl/en/focus-areas/urbanisation/mobility-logistics/reliable-mobility/truck-platoon-matching-paves-the-way-for-greener-freight-transport/, 2017.

[TUEI17] TU Eindhoven, Strategic Area Smart Mobility, TU Eindhoven University of Technology, https://www.tue.nl/en/research/strategic-area-smart-mobility/, 2017.

The new US assurance standard SSAE 18

Organizations continue to outsource parts of their business to realize potential cost benefits, to alleviate the need for hiring or retaining internal specialists and/or to create more flexibility to realize their business strategy. Outsourcing impacts user and service organizations, the audit of the financial statements and the auditors involved in financial and IT audits. It is important to remain in control of the outsourced part of the business and IT processes, and assurance reports play an important role as a management control. In the USA, the new SSAE 18 standard was introduced in 2016 and implemented in 2017. Although its impact outside the USA is indirect, because local or international standards such as ISAE 3000 and ISAE 3402 apply there, it is still an important transition, as the USA usually leads the global development of these assurance standards. In this article we look at the various assurance standards and analyze the impact on international, Dutch and American standards.

Introduction

In April 2016, the American Institute of Certified Public Accountants (AICPA) Auditing Standards Board (ASB) issued the Statement on Standards for Attestation Engagements (SSAE) No. 18, Attestation Standards: Clarification and Recodification. The standard is effective for service auditor’s reports dated on or after 1 May 2017. If your organization is routinely involved in issuing a SOC1 (Service Organization Control) report, or is considering issuing such an assurance report, you may wonder what changes the new SSAE 18 standard will bring. This is also relevant to the user organizations receiving these reports. In this article, we start by introducing the main assurance standards currently in use. We discuss the reasons why a new standard was deemed necessary and then consider the main changes and their potential impact on auditors, user organizations and service organizations. Next, the formal differences between SSAE 18 and ISAE 3402 are highlighted, the impact on dual purpose reporting is discussed, and a brief conclusion follows.

What are the applicable assurance standards and how did they develop over time?

For as long as organizations have relied on third-party providers for the delivery of outsourced services, those organizations have had a need for information about the internal controls at these third parties. Whether the information was specific to how the third party ensured the quality of the services provided, the accuracy and completeness of transactions processed, the security of the information, or the overall health of its control environment, many of the basic drivers for reliable and timely reporting have not changed over the last decades. For many years, assurance reports used in the Netherlands were mainly based on publications 26 (1982) and 53 (1989) of the Dutch Institute for Public Auditors (NBA, previously NIVRA), on the reliability and continuity of Electronic Data Processing, which formed the standard for providing assurance reports. These reports were mostly called Third Party Statement Reports (abbreviated to TPM in the Netherlands).

C-2017-4-Beek-01-klein

Figure 1. Timeline of assurance standards. [Click on the image for a larger image]

In 2002, the United States issued the Sarbanes-Oxley (SOx) Act and Regulation. This regulation required publicly listed companies to issue an internal control statement along with their financial statements. The internal control statement had to be based on a structured and substantiated system of internal control, often based on the COSO framework. The financial auditor had to examine and report on management’s internal control statement. Under SOx, the SAS 70 assurance standard was used for outsourced processes and controls, and it therefore also became popular in Europe and the Netherlands. The primary reasons were that many US-listed companies used service organizations located outside the USA, or that companies headquartered in Europe became SEC listed. Since obtaining an SAS 70 was mandatory, many companies considered this to be the standard. Later, in 2009, the international ISAE 3402 standard was developed, which each country could translate into local regulations; it became ‘Standaard 3402’ or ‘Richtlijn 3402’ in the Netherlands and SSAE 16 in the USA. Due to all these different standard numbers, the American Institute of Certified Public Accountants introduced the term Service Organization Control (SOC) and used SOC1 as the equivalent of SSAE 16. Outside the USA, the term SOC1 is now also often used as the equivalent of ISAE 3402.

Since the scope of ISAE 3402 is limited to internal controls that are relevant to the financial statements, there was also a need for assurance reports with a broader scope. The ISAE 3402 standard states that its reporting and assurance model can also be used with a broader scope, but that in that situation the ISAE 3000 standard has to be used. In the USA, the SOC2 and SOC3 standards were developed in 2011 for Security, Availability, Processing Integrity, Confidentiality and/or Privacy.

The most common assurance standards in the Netherlands nowadays are the Standaard/Richtlijn 3402 and Standaard/Richtlijn 3000, from both NBA and NOREA. Both standards are based on the international standards ISAE 3402 and ISAE 3000. Recently, both have been changed as a result of the Dutch clarification project, although with limited impact on either the service organization or the user organization. The main changes are the use of the term ‘management statement’ rather than ‘management assertion’ and a clearer reference to the quality standards applicable to assurance engagements. Are these the only assurance standards in the Netherlands? No, there are others, such as Standard 3410 for Sustainability and NOREA’s Standard 3600 for Privacy. However, as these standards are not affected by the changes in SSAE 18, we will not elaborate on them further in this article.

Why a new standard in the USA?

As described above, since the early nineties, service organizations have been using public accountants’ service auditor reports to communicate the basic drivers for reliable and timely information to interested parties. The American Institute of CPAs (AICPA) has been and remains the industry leader in providing organizations with a reporting framework for obtaining information on the services of service organizations, as well as providing service providers and auditors with guidance for producing and interpreting this assurance information.

The AICPA’s Auditing Standards Board (ASB) is the entity that maintains and develops the standards upon which service auditors’ reports are produced and relied on, and the underlying auditing procedures that produce these reports. In the summer of 2004, the AICPA’s ASB embarked on a plan to improve the readability and understandability of its standards (including the service auditor reporting standards), as well as to increase the alignment of the US standards with similar standards managed by its international counterpart, the International Auditing and Assurance Standards Board (IAASB). This clarity and convergence project has already resulted in significant changes in the service auditor reporting world, including the elimination of the SAS 70 reports that were used for almost 20 years and the establishment of the Service Organization Control or ‘SOC’-branded reports SOC1, SOC2 and SOC3.

Perhaps more noteworthy is the identification, separation and codification of the various auditing and attestation standards themselves. Simply put, SSAE 18 is the standard that recodifies all the previous attestation standards. It is the culmination of the efforts to clarify the various standards for performing attestation engagements, which include combining, among many others, SOC1 (commonly referred to as SSAE 16) and SOC2 and SOC3 (sometimes referred to as AT section 101, which was actually a part of SSAE 16) into a single set of standards for auditors. Undoubtedly there will be report users, service providers and auditors who will refer to SSAE 18 in much the same way that SAS 70, SSAE 16 and AT 101 are currently being used. However, for many parties these acronyms alone may be nothing more than general references to concepts or ideas about audits or assurance reports.

The scope of SSAE 18 will impact all attestation engagements in the USA, except for a limited number of exceptions. It will impact all Service Organization Control (SOC) reports like SOC1, SOC2 and SOC3. SSAE 18 is effective for Service Auditor’s reports dated on or after 1 May 2017. Earlier adoption is permitted. Since the required implementation is based on the date of the Service Auditor’s Report, the new standards have the potential to impact a wide range of reporting periods.

For SOC reporting in the USA, the recodification of attestation standards (SSAE 18) is largely a simplified version of the existing standards. The net effect is that an SSAE 16 SOC1 will look nearly identical to an SSAE 18 SOC1. Practitioners performing the attestation engagements will not notice many material changes in the standards; however, there are a few key aspects worth noting for SOC1 reports, which we will discuss in the next section.

The changes can clearly be seen as a response to findings by the PCAOB in its reports on the quality of financial audits. Re-emphasizing and clarifying standards can indeed help in the implementation of good practices in using SOC reports as part of the financial statement audit. In that sense, the implemented changes could also help European practitioners to improve the quality of the assurance work performed.

Of course, the new SSAE 18 is still in line with the international (global) ISAE standards. In fact, it is the Auditing Standards Board’s strategy to converge its standards with those of the IAASB. SSAE is the container for all attestation engagements, including the US equivalents of the global standards 3000, 3402 and 4400. So, SSAE 18 is more than solely the US equivalent of ISAE 3402.

C-2017-4-Beek-t01-klein

Table 1. Organization of the clarified attestation standards. [Click on the image for a larger image]

The equivalent of the ISAE 3000 is the AT-C 205 (plus the AT-C 105 which is always applicable). The equivalent of the ISAE 3402 is the AT-C 320 (plus the AT-C 205 and 105). This is the same structure as the ISAE framework, where in the ISAE 3402 it is stated that to comply with the ISAE 3402 the auditor must also comply with the ISAE 3000.

For Europe (and other countries) the impact remains to be seen. The differences between the old SSAE 16 and ISAE 3402 are very limited. It was therefore relatively easy in some countries to provide so-called dual reports, which provide assurance according to both national and international standards. With the introduction of SSAE 18, these differences appear to widen, which has to be considered in the case of dual reporting. Both the old and the new standard contain an appendix (Exhibit B in SSAE 16 and Appendix F in SSAE 18) that highlights the substantive differences between SSAE 16 and SSAE 18 respectively and ISAE 3402, and explains the rationale for these differences. The same nine substantive differences are highlighted, with minor textual changes.

One can also argue that SSAE 18 re-emphasizes certain aspects that are already part of ISAE 3402, but some formal new rules must be taken into consideration and can probably also be followed when issuing an ISAE 3402 report. In this way ISAE 3402 and SSAE 18 remain close and dual reporting is possible. We will discuss this in more detail at the end of this article.

Changes or a formalized good practice?

As SSAE 18 is more recent than ISAE 3402, some new rules were introduced, although they should not really be new to service organizations and service auditors that already adhere to common good practices in audit standards. The changes and revisions are mainly driven by convergence with ISAE 3000 (revised), with certain changes made to reflect professional standards. Table 2 outlines the most important ‘new’ elements in SSAE 18.

C-2017-4-Beek-t02-klein

Table 2. Most important ‘new’ elements in the SSAE. [Click on the image for a larger image]

1. Information produced by the service organization (IPE)

In financial audits, especially for controls testing, the auditor is required to evaluate evidence concerning the completeness and accuracy of so-called Information Produced by the Entity (IPE), e.g. lists of data with specific characteristics, exception documents, transaction reconciliations, and documentation that provides evidence of the operating effectiveness of controls, such as user access lists and system-generated reports, and to be able to include the procedures performed in the description of the tests of controls. The audit technique for controls testing for financial audit and for SOC1 purposes should, of course, be similar. SSAE 18 therefore clearly states that the service auditor is required to evaluate whether the information is sufficiently reliable and precise, and to obtain evidence about its completeness and accuracy.

The practical impact on the service organization could be high, as the service organization has to provide the service auditor with a thorough understanding of how information is produced and provide additional documentation as needed. The SSAE 18 standard does not impose a requirement on management to include the procedures in the description, but recommends their inclusion in the description of the service auditor’s tests of controls. Although not explicitly stated in ISAE 3402, the service auditor should also review IPE for completeness and accuracy. The difference is therefore not very big; it only makes requirements that were already there more explicit. Outside the context of internal control over financial reporting, we see that not all parties involved were aware of these requirements. In that case it is good to have this clarification, which will lead to more robust internal control frameworks.
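As an illustration of what evaluating the completeness and accuracy of IPE can look like in practice, the sketch below reconciles a system-generated user access list against an extract taken directly from the source system: comparing record counts and identifying entries present in one set but not the other. The file names and fields are assumptions made for this example; SSAE 18 does not prescribe a specific technique.

```python
import csv

def load_user_ids(path, id_field="user_id"):
    """Read a CSV export and return the set of user identifiers it contains."""
    with open(path, newline="") as f:
        return {row[id_field] for row in csv.DictReader(f)}

# Hypothetical files: the report provided by the service organization and an
# independent extract pulled directly from the source system.
report_ids = load_user_ids("user_access_report.csv")
source_ids = load_user_ids("source_system_extract.csv")

missing_from_report = source_ids - report_ids   # completeness gap
not_in_source = report_ids - source_ids         # accuracy concern

print(f"Report rows: {len(report_ids)}, source rows: {len(source_ids)}")
print(f"Missing from report: {sorted(missing_from_report)}")
print(f"In report but not in source: {sorted(not_in_source)}")
```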

2. Complementary User Entity Controls

From the beginning, the section on the relevant ‘Complementary User Entity Controls’ (CUECs) has been challenging, and many service organizations had difficulty writing this section of the system description. As a consequence, user responsibilities or controls that were not necessary to achieve the control objectives were often listed as well. The new standard now reinforces that CUECs should only include controls that are necessary to achieve the control objectives stated in management’s description. The service auditor should evaluate the CUECs to determine whether they are necessary to achieve the control objectives. If they are not, they should be removed.

As with the previous aspect, this is not explicitly stated in ISAE 3402, but it is certainly intended in the way it is now stated in SSAE 18, and it should be common practice. For a more detailed analysis and practical solutions we refer to [Beek17].

3. Complementary subservice organization controls

In the USA, this revision is seen as the biggest change from the old standard. SSAE 18 requires that the service organization implements controls that monitor the effectiveness of the controls at the subservice organization. The following monitoring activities are given as examples:

  • reviewing and reconciling output reports;
  • holding periodic discussions with the subservice organization;
  • making regular site visits to the subservice organization;
  • testing controls at the subservice organization by members of the service organization’s internal audit function;
  • reviewing Type 1 or Type 2 reports on the subservice organization’s system;
  • monitoring external communications, such as customer complaints relevant to the services provided by the subservice organization.

When using the carve-out method to report on subservice organizations, management will be required to identify the controls that it assumes will be implemented by these subservice organizations and that are necessary to achieve the control objectives stated in management’s description. This mirrors the way user organizations and their auditors who obtain a SOC1 report are expected to evaluate their own controls against the Complementary User Entity Controls stated in the system description. Therefore, if a service organization obtains a SOC1 report from a subservice organization, it is important that management and the service auditor evaluate the Complementary User Entity Controls in the SOC1 report of that subservice organization.

The impact on the service organization could be substantial, as the existing set of subservice organizations presented in the report needs to be evaluated and the affected control objectives identified. The controls assumed at each carved-out subservice organization need to be linked to the control objectives they help achieve and included in the description. A common example is when a service organization outsources data center operations to a co-location facility or its platform hosting services to a cloud services provider. In both instances, the service organization normally assumes that the co-location provider or cloud service provider has implemented controls regarding the physical and/or logical safeguarding of its operating environment. As a result, those safeguards and controls complement the controls performed by the service organization itself. In these instances, a description of such assumed complementary controls should be included in the service organization’s system description. As a result of this revision, the service auditor’s report and the management assertion also need to be amended to reflect these changes.

Box 1. Example of changes in the management assertion due to control over subservice organizations (bold text).

Illustrative example of changes to management’s assertion (excerpt)

  1. The controls related to the control objectives stated in the description were suitably designed and operated effectively throughout the period [date] to [date] to achieve those control objectives if subservice organizations and user entities applied the complementary controls assumed in the design of XYZ service organization’s controls throughout the period [date] to [date]. The criteria we used in making this assertion were that:
    1. The risks that threaten the achievement of the control objectives stated in the description have been identified by the management of the service organization.
    2. The controls identified in the description would, if operating effectively, provide reasonable assurance that those risks would not prevent the control objectives stated in the description from being achieved.
    3. The controls were consistently applied as designed, including whether manual controls were applied by individuals who have the appropriate competence and authority.

4. Obtaining evidence regarding design of controls

This SSAE 18 revision changes the requirements for the service auditor with respect to obtaining evidence regarding the design of the control. The service auditor is to assess whether the controls identified by the management were suitably designed to achieve the control objectives by:

  • obtaining an understanding of management’s process for identifying and evaluating the risks that threaten the achievement of the control objectives, and assessing the completeness and accuracy of management’s identification of those risks;
  • evaluating the linkage of the controls to those risks;
  • determining that the controls have been implemented.

Management was already responsible for the design of controls under SSAE 16 and ISAE 3402; however, under SSAE 18, the auditor will focus on management’s assessment rather than performing a fully independent assessment. As a result, management should ensure that it is prepared to provide and discuss its assessment with the service auditor.

5. Review of internal audit reports and regulatory examinations

Whether or not the service auditor makes use of the work of internal audit, under the new standard the service auditor is required to review internal audit reports and regulatory examinations relating to the services provided to user entities and to the scope of the report. The findings must be taken into consideration as part of the risk assessment and when determining the nature, timing and extent of the testing. The service organization needs to provide these reports to the service auditor completely and on time.

6. Management assertion

Several aspects are reinforced and explained more clearly in the SSAE 18:

  • Evaluation of criteria. In the management assertion, management is required to assess the suitability of the criteria in the system description. The standard defines the information that must be included as a minimum. In practice, information was sometimes left out on the assumption that it was irrelevant. An example is: ‘The types of services provided, including, as appropriate, the classes of transactions processed.’, whereas a common hosting provider may argue that no transactions are processed and will therefore want to leave it out. Now, management and the service auditor should ensure that the assertion covers all the criteria and that management has explained in its description why particular criteria are not relevant.
  • Materiality language. SSAE 18 deletes the phrase ‘in all material respect’, as it relates to the management assertion. A service organization will not be permitted to limit its assertion using the phrase ‘in all material respects’.
  • Management assertion versus description. SSAE 18 reinforces that the management assertion is not part of the management’s description. If the assertion and description are included in the same section, they need to be clearly distinguished.

What are the more formal differences between SSAE 18 and ISAE 3402?

Many of the paragraphs in SSAE 18 have been converged with the related paragraphs in ISAE 3000 (revised). An important exception is that in the USA, a practitioner is not permitted to issue an assurance report if there is no written management assertion about the subject matter in scope, in relation to the criteria. In the ISAE 3000, this is not required, but seen as a good practice.

Next, we will outline the most important differences between the SSAE 18 and ISAE 3402 standards. This excludes terminology differences, such as the management’s statement under ISAE 3402 that is called the management’s assertion under US standards.

Fraud by service organization personnel

If the service auditor identifies a deficiency, the auditor has to investigate its nature and cause. That is the same in both standards. The new US standard adds the requirement that if the service auditor becomes aware of fraud by service organization personnel, the service auditor has to evaluate the risk that the system description is not fairly presented, that controls are not suitably designed and, in a Type 2 engagement, that controls are not operating effectively. Although this is not explicitly mentioned in ISAE 3402, the discovery of fraud would normally have the same effect. Information about fraud affects the nature, timing, and extent of the service auditor’s procedures.

Anomalies

ISAE 3402 contains a provision that enables a service auditor to conclude that a deviation identified during controls testing based on sampling is not representative of the entire population. SSAE 18 has not replicated this provision, as it uses terms such as ‘in extremely rare circumstances’ and ‘a high degree of certainty’ that are not used in US professional standards.

Direct assistance

SSAE 18 allows the use of the service provider’s internal audit staff. ISAE 3402 does not address it, but does not prohibit it either. However, we note that in the Netherlands direct assistance is not allowed as part of the financial statement audit.

Documentation completion

The requirement for completing the administrative process of assembling the final engagement file on a timely basis is worded slightly differently in the two standards. SSAE 18 indicates that a timely basis is no later than sixty days following the service auditor’s report release date.

Subsequent events and subsequently discovered facts

These refer to events that occur between the end of the audit period and the issuing of the report, and that could have a significant effect on the subject matter or assertion. ISAE 3402 does not require the service auditor to apply procedures other than making an inquiry. SSAE 18 requires auditors to also apply other appropriate procedures, such as reviewing documentation, to obtain evidence regarding such events.

Reading reports of the internal audit function and regulatory examinations

In SSAE 18, the service auditor is explicitly required to read the relevant reports of the internal audit function and regulatory examinations. These findings are considered as part of the risk assessment and when determining the nature, timing and extent of the tests. Although this is not explicit in ISAE 3402, it should be common practice for the auditor, regardless of which standard is used.

Required written representations

SSAE 18 requires the service auditor to request specific written representations from the management. Some are not required by ISAE 3000 or ISAE 3402. In addition, there are differences with respect to the actions the service auditor is required to take when the management refuses to provide written representations.

Elements of the service auditor’s report required by SSAE 18, but not by ISAE 3402

SSAE 18 has certain additional requirements regarding the content of the service auditor’s report. These requirements are very similar to those of the ‘old’ SSAE 16 standard. For example, the report must include a statement:

  • that controls and control objectives included in the description are those that the management believes are likely to be relevant to user entities internal control over financial reporting, and that the description does not include those aspects that are not likely to be relevant to user entities internal control over financial reporting;
  • that if the work of the internal audit function has been used to obtain evidence, then the report must include a description of the internal auditor’s work and of the service auditor’s procedures with respect to that work;
  • that if the application of a complementary subservice organization’s controls is necessary to achieve the related control objectives stated in the management’s description of the service organization’s system, then a statement to that effect is required;
  • that the report is not intended to be and should not be used by anyone other than the specified parties.

We have listed some of the new text elements to give an impression; for the complete set we refer to the standard.

Elements of the report required by ISAE 3402, but not by SSAE 18

ISAE 3000 requires the inclusion of a statement that the firm of which the practitioner is a member applies International Standard on Quality Control (ISQC) 1 or requirements that are at least as stringent. Moreover, ISAE 3000 requires that the practitioner complies with the independence and other ethical requirements of the Code of Ethics for Professional Accountants or requirements that are at least as demanding. SSAE 18 does not contain these requirements.

Overall, when comparing SSAE 18 with SSAE 16, we note that only slight changes have taken place, and that most of the differences with ISAE 3402 have remained the same.

What about dual reporting?

User organizations sometimes lack understanding of the specific content of the standards behind the acronyms. When a US user organization requests an SSAE 16 report from a service organization located in the Netherlands, the service auditor has to explain that an assurance report will be provided under the Dutch ‘Standaard 3402’, which is in fact the Dutch translation of the ISAE 3402 standard. The service auditor could consider dual reporting, providing an assurance report based on both the Dutch and the international standards. This is allowed for auditors in most countries. As SSAE 16 was the most commonly used acronym for the US translation of the ISAE 3402 standard, this should be sufficient in most cases. As the differences between ISAE 3402 and SSAE 16 are very limited, the auditor could, in case of any issues, highlight them in a mapping table in (an appendix to) the report, so that the US-based user auditor can evaluate whether this is sufficient for his assurance needs.

In some instances, we have noted that a requirement for an SSAE 16 (or 18) report is included in the contract. When the user organization does not understand the assurance context, the local service auditor is sometimes asked to provide a dual report based on the SSAE 18 standard. We do not favor this solution. Based on this article, it should be clear that SSAE 18 is more than only the US equivalent of ISAE 3402. We are therefore very hesitant to refer to foreign auditing standards such as SSAE 18. Most European auditors are not trained as US CPAs and have no further guidance on how to overcome the differences between the standards (and the references in SSAE 18 to other US standards and guidelines). Furthermore, an ISAE 3402 report should be sufficient even for US customers, so there is no need to refer to foreign standards. With the introduction of the new SSAE 18 standard, the situation is much the same, although, as mentioned above, more differences may be present. If dual purpose reporting does take place as an exception, it would be wise to use a checklist (as discussed in the previous section) based on the changes that SSAE 18 brings compared to SSAE 16 and the formal differences between SSAE 18 and ISAE 3402. With such a checklist, the service auditor can show the user auditor how he has considered the relevant issues, and the user auditor can evaluate whether additional work is needed or whether he can rely on the work performed in the context of SSAE 18. In our opinion, that solution would make dual reporting unnecessary altogether.

Conclusion

Although the changes resulting from the new assurance standard SSAE 18 could be considered limited, the impact in an ISAE 3402 or local standard context could be significant. If the service organization and the practitioner have not followed the development of international professional audit standards, the impact will be bigger. We expect this especially in situations where assurance reports are used outside the context of internal control over financial reporting. That could be a good moment to evaluate whether another assurance standard, such as ISAE 3000, would be more appropriate. Most changes relate to improving the quality of assurance reports in the context of financial statement audits, which is beneficial for all parties involved. We consider it common sense to also implement the changes resulting from SSAE 18 in assurance engagements performed under ISAE 3402 or local standards.

References

[Beek12] J. van Beek, Praktijkgids 4; Service Organisatie Control-rapport, ISAE3402, KPMG, 2012.

[Beek17] J. van Beek and R. van Vught, User control considerations voor ISAE3402-assurancerapportages, Compact 2017/3.

[Boer13] H. Boer and J. van Beek, Nieuwe ontwikkelingen IT-gerelateerde SOC-rapportages, Compact 2013/2.

[Bruc16] R. Bruckner, Recodifying SOC reports: What SSAE 18 means for SOC1, www.accountingtoday.com, December 2016.

[Gils13] H. van Gils and J. van Beek, Assurance in the Cloud, Compact 2013/2.

[Mell16] M. Mellor and N. Drury, Moving from SSAE 16 to SSAE 18, Compass IT Compliance blog, August 2016.

[Palm16] D. Palmer, Clarified attestation standards-SSAE 18, KPMG, September 2016.

User control considerations for ISAE 3402 assurance reports

More and more organizations and their auditors are confronted with outsourcing. Not just the outsourcing of IT applications: entire processes are placed with so-called service organizations. How do user organizations and their auditors keep a grip on these outsourced services? The answer is often by receiving an assurance report on the outsourced services. The assurance landscape is changing, however, and its focus will shift in the coming years. User control considerations will play a more important role in assurance reports, and given the current developments in the market there is a need to make organizations more aware of user control considerations. This article discusses user control considerations and their practical implications for the service organization, the service auditor, the user organization and its auditor.

Introduction

In 2017, organizations can no longer avoid it: parts of business processes, or even entire processes, are outsourced to third parties. The strategic choice to have part of the business processes carried out by a specialized service provider has increasingly become a normal part of doing business. This may concern the outsourcing of an application or an IT helpdesk, but entire business processes (for example payroll processing) are outsourced as well. The management of the outsourcing organization, however, remains responsible for the whole of its financial reporting, including the parts that have been outsourced. For outsourced activities that have an (indirect) bearing on the financial reporting of the outsourcing organization, the ISAE 3402 assurance report was specifically developed, usually referred to as a Service Organization Control report (SOC 1 report). An ISAE 3402 assurance report is intended for the outsourcing organization (the user) and its auditor.

An important part of the assurance report is the section on 'user control considerations', also referred to as 'complementary user entity controls' (in Dutch: 'aanvullende beheersingsmaatregelen binnen de gebruikende entiteit'). This section describes which controls the user organization has to implement in its own processes in order to be able to use the assurance report. Based on the current findings of the Netherlands Authority for the Financial Markets (AFM), the implementation of the SSAE 18 standard in the United States and the guidelines of De Nederlandsche Bank (DNB), this article explains how the assurance reporting landscape is changing. User control considerations take an important place in this change, because in the current guidelines they are a noncommittal and rather ambiguous element. In practice we also see organizations struggling with these user control considerations. We discuss this on the basis of a number of practical examples, followed by practical guidance and recommendations that both service organizations and user organizations can apply immediately.

The added value of an ISAE 3402 assurance report

The assurance report stems from the rules for the exchange of information between auditors. An ISAE 3402 assurance report fits seamlessly with the requirements of Dutch auditing standard COS 402: 'considerations relating to audits of entities using a service organization'. This auditing standard tells the auditor how to act in situations where his audit client has outsourced processes relevant to financial reporting to a service organization ([NBA17]).

COS 402 distinguishes a number of steps for auditing service organizations. First, the auditor has to gain an understanding of the nature and significance of the outsourced services. Next, their effect on the internal control of the user organization is determined. Finally, the risks have to be identified and the auditor has to assess where material misstatements could occur. An ISAE 3402 assurance report can be helpful here. On the basis of the assurance report it can be determined which audit evidence is needed to respond to the assessed risks and possible material misstatements. If the internal controls of the user organization appear to be effective, the auditor has three options with regard to the outsourced services:

  1. obtaining an ISAE 3402 type 2 assurance report;
  2. performing appropriate tests at the service organization himself;
  3. using another auditor or an Internal Audit department to perform tests on his behalf.

Regarding the first option (see figure 1), the auditor receives an ISAE 3402 type 2 report covering a specific period, usually six months to a year. This report describes the processes and the suitability of the controls as applied by the service organization during the defined period to achieve the control objectives. The service auditor tests the suitability of the described controls for achieving the control objectives and establishes that their implementation during the reporting period is in accordance with the description. In addition, the service auditor tests the operating effectiveness of the controls. The result is the ISAE 3402 assurance report, which is provided via the service organization to the user organization, which in turn can share the report with the user auditor. The user auditor can then use this assurance report in his financial statement audit, to determine whether risks of material misstatement have arisen at the service organization.

C-2017-3-Vught-01-klein

Figure 1. Four-corner model of ISAE 3402 assurance reports. [Click on the image for a larger image]

The other two options are less attractive for the service organization. They ultimately lead to every user organization having its own tests performed at the service organization, which is far from efficient and is time-consuming for the service organization. The user organization's auditor then still has to evaluate whether the audit evidence obtained is sufficient.

The ISAE 3402 assurance report addresses the user organization's lack of insight into the control framework of the service organization, which arises because the user organization is not able to verify the completeness, accuracy and timeliness of the outsourced processes itself. Such verification can only take place indirectly, on the basis of information provided, such as service level and incident reports. An ISAE 3402 assurance report is an excellent capstone for the management of the user organization and its auditor to obtain assurance about the overall picture.

Within the user organization, the first, second and third lines can all use the information from the assurance report. The service level manager (first line) can verify whether the services are delivered in accordance with the agreed contract. The risk manager (second line) can make a better risk assessment of the outsourced services. If the user organization has a third line, the internal auditor can assess the bigger picture for the organization as a whole and its outsourcing policy. The user organization's auditor uses the assurance report instead of performing his own tests on the operating effectiveness of the internal controls that are relevant for the accuracy and completeness of the financial reporting of the outsourcing organization. Based on the auditor's report issued by a fellow auditor and the test results included in the assurance report, the auditor can determine which additional procedures are needed for the audit of the financial statements of his client: the outsourcing organization.

An important part of the assurance report is the section on 'user control considerations' or 'complementary user entity controls' (in Dutch: 'aanvullende beheersingsmaatregelen binnen de gebruikende entiteit'). This section contains the controls that the user organization itself has to embed in its own processes in order to be able to rely on the ISAE 3402 assurance report. The user organization's auditor can verify whether the organization complies with all user control considerations. Examples are controls within the user organization to ensure that data is delivered to the service organization accurately, timely and completely, or that transactions are authorized by the user organization before they are submitted to the service organization. Starting from the idea that there is a process chain of which part has been outsourced, the user controls of the user organization and the controls in the assurance report of the service organization must fit together seamlessly. In practice, many users and auditors turn out not to be fully aware of the interdependence between these two elements and their own responsibility in it. Because of the current findings of the Netherlands Authority for the Financial Markets (AFM), the implementation of the SSAE 18 standard in the United States and the guidelines of De Nederlandsche Bank (DNB), user control considerations will take a more prominent place in assurance reports in the coming years.

The AFM as regulator

In the Netherlands, the Netherlands Authority for the Financial Markets (AFM) supervises audit firms, with the aim of raising the quality of statutory audits and safeguarding it in a sustainable way. In its report of 25 September 2014, the AFM reports on its regular review of the Big 4 audit firms ([AFM14]). Since 1 January 2014 the AFM has been allowed to publish these results. For the purposes of this article, we looked at the extent to which ISAE 3402 assurance reports appear in this report. The findings on the statutory audits mention the use of ISAE 3402 assurance reports as part of the auditor's procedures a number of times. There are few specific findings, but in one case it is established that the operating effectiveness of the relevant controls was insufficient.

In the summary of the report, the AFM also lists a number of generic shortcomings: for systems-based procedures, the most common shortcoming is that the auditor's work did not adequately test the operating effectiveness of the internal controls. Also striking is the finding that in several cases the AFM judged that audit procedures which the external auditor had labeled as 'systems-based' were in fact 'substantive'. In those cases the external auditor had not tested the internal controls, but had performed a form of test of details himself. One could also say that the AFM has made visible that the principle of 'trust me, show me, prove me' applies to the auditor's file as well.

On 28 June 2017 the AFM published a new report on its regular review of the Big 4 audit firms ([AFM17]). This report again contains few specific findings regarding ISAE 3402 assurance reports, and thereby confirms the picture from 2014. Of course, the audit firms have since done a lot to make improvements and address the negative findings from the review. In this article we specifically consider what this means in practice for the use of ISAE 3402 assurance reports and the user control considerations. The role of the financial statement audit is central here, but it is also relevant for the user organizations themselves: they use the reports in the context of their internal control as well, and therefore also have to establish the operating effectiveness of the controls themselves. By no means every user organization is aware of this!

Implementation of the SSAE 18 standard in the United States

Recently, the new SSAE 18 standard was introduced in the United States to replace SSAE 16. This standard applies to all assurance reports (Service Organization Control reports: SOC 1, SOC 2 and SOC 3) dated on or after 1 May 2017. The KPMG publication of September 2016 covers all aspects of the revised standard and its impact on assurance reports ([Palm16]). For this article, the part about complementary user entity controls is particularly relevant. The change made there emphasizes that the complementary user entity controls should only contain those controls that are necessary to achieve the control objectives as described in management's description of the system. For the service organization this means reviewing the current list of complementary user entity controls and removing the controls that are not necessary for achieving the control objectives. The new standard stresses that the report is not meant to contain a generic list of user responsibilities.

This development clearly emphasizes the responsibility for the process chain. Controls of the service organization must be viewed in conjunction with controls of the user organization, so that ultimately the system of internal control fits together seamlessly. It is even possible to include the necessary user controls in the control matrix in the appendix of the report, which emphasizes the coherence and the process chain even more. The user of the report has to determine which user control considerations are included in the report and whether they are relevant, and indicate whether the execution of these controls has been followed up.

Discussions with colleagues in the United States have made clear that many of the changes in SSAE 18 also stem from findings of the PCAOB, the US audit regulator. It is therefore useful for the Dutch practice to take note of the SSAE 18 standard and to determine what impact these changes may have on the ISAE 3402 standard.

Guidelines of De Nederlandsche Bank

Thirdly, it is important in the Netherlands to consider the role of De Nederlandsche Bank (DNB), which supervises the financial sector. Many financial institutions have outsourced part of their activities to service organizations, for example their IT infrastructure. DNB considers control over this outsourcing important and has therefore drawn up guidelines that outsourcing should meet, covering among other things the following points of attention:

  • selecting the outsourcing partner;
  • drawing up an outsourcing contract;
  • organizing the internal organization and the outsourcing relationship;
  • monitoring and evaluating the outsourcing.

User organizations must look critically at the performance delivered by their service organizations and at the assurance reports they provide. Most users therefore choose to obtain an ISAE 3402 assurance report from the service organization. However, an ISAE 3000 report can also be useful for the user organization and its auditor, provided it covers enough of the processes relevant to financial reporting. Obviously, merely obtaining the assurance report is not enough. The assurance report also has to be reviewed to determine whether additional internal controls are necessary. The demonstrability of the work performed by the user organization also plays a role here. This is increasingly important to DNB as well, as is apparent from the required maturity level in the area of information security. DNB recently announced a review of the control over outsourcing in the pension sector, in which user control considerations are also expected to be addressed more explicitly.

Experiences from practice

In recent years we have issued various ISAE 3402 assurance reports to service organizations, and as auditors of user organizations we have also reviewed assurance reports in the context of financial statement audits. These issued and received assurance reports differ in size, were prepared by various auditors and each have a unique scope. The current findings of the AFM, the implementation of the SSAE 18 standard in the United States and the DNB guidelines have raised awareness of the user control considerations in ISAE 3402 assurance reports. This prompted us to analyze the user control considerations of both the issued and the received assurance reports. The conclusion of this analysis is that various situations can occur with regard to user control considerations. The four most notable situations are the following:

  1. The assurance report contains no user control considerations;
  2. The wording of the user control considerations is insufficiently concrete;
  3. The report contains a proliferation of user control considerations;
  4. The user organization has insufficient knowledge and expertise of (interpreting and using) assurance reports.

These four situations have one thing in common: the user organization's auditor always has more work in reviewing the assurance report, and thereby the outsourcing relationship. The purpose of an assurance report is to limit the work for the receiving party, but in the situations above this is not the case. As a consequence, the cost of the financial statement audit may rise. To help prevent these situations, they are briefly explained below.

1. No user control considerations

In the first situation, the user organization's auditor received, in the context of the financial statement audit, an ISAE 3402 assurance report in which no user control considerations were identified. An assurance report without user control considerations is exceptional and therefore requires extra attention from both the user organization and the receiving auditor. When payroll administration is outsourced, for example, the user organization will always have to approve the correctness of the salaries to be paid out by the service organization. This is a typical user control consideration in such an outsourcing arrangement. If it is not identified in the assurance report, it is questionable whether the salaries paid out are correct. If there are no user control considerations, there must therefore be a seamless fit between the process of the service organization and that of the user organization, with no dependencies expected between the parties.

The user organization was not aware of this exceptional situation, so the receiving auditor had to perform additional procedures. After all, the receiving auditor's objective is to establish the accuracy and completeness of the financial reporting of the user organization, and part of that is verifying, for example, that the service organization does not process unauthorized transactions and cannot make modifications to the client's data.

By studying the outsourcing contract, the receiving auditor investigated the dependencies between the two parties. In addition, the user organization's control framework was reconciled with the control activities in the assurance report, to determine whether any 'gaps' had arisen in the process. Based on these additional procedures it may be concluded that the work performed is sufficient, and thereby that the accuracy and completeness of the financial reporting is safeguarded. It may, however, also be concluded that further procedures have to be performed at the user or service organization because there are 'gaps' in the process chain. In that case the user auditor will, for example, visit the service organization and assess the access rights to the specific application there.

From this situation it can be concluded that not including user control considerations creates uncertainty about the responsibilities within the outsourcing relationship and therefore also leads to more work at both the user and the service organization.

2. The user control considerations are insufficiently concrete

Situation two shows that both the service organization and the service auditor find it difficult to formulate a good user control consideration. In this situation, user control considerations have been formulated in the assurance report, but they have not been made sufficiently concrete. The user control considerations are vaguely worded, usually open to multiple interpretations, and it is left open who is responsible.

An example is the following user control consideration: 'The scope of the internal controls in this report has been designed taking into account the fact that users have internal controls of their own, among other things in the area of the accurate, complete and timely transfer of information between service organization ABC and receiving party XYZ.' Read carefully, this user control consideration is really a kind of disclaimer, in which the service organization indicates that any internal control not included in the assurance report must be part of the internal control of the user organization. The phrase 'among other things' also indicates that the user organization must have arranged more controls in its internal processes, but this user control consideration does not make clear which controls that concerns.

In this situation it is difficult for both the users and the receiving auditor to obtain a closed loop between the internal framework and the outsourced services. After all, it is left open who is responsible for the 'gaps' in the process. The user control consideration as stated contains no norm, which also makes it difficult for the receiving auditor to test it. In such cases the receiving auditor and the user organization will have to ask the service organization critical questions, and possibly also perform additional procedures to ensure that the entire process chain is covered.

3. A proliferation of user control considerations

In addition to insufficiently concrete and ambiguous user control considerations, situation three shows that there are also assurance reports in which the service organization and service auditor have formulated more than twenty-five user control considerations. Neither the SSAE 18 standard nor the ISAE 3402 standard contains guidelines on the number of controls to be included as user control considerations. For the user organization, however, it can be a lot of work to embed all user control considerations in its processes, after which it is also labor-intensive for the user auditor to establish whether all user control considerations have been covered by the user organization.

In one specific case of such a proliferation, a large number of the controls were clearly formulated and embedded in the day-to-day processes of the user organization. However, several controls were described so vaguely and ambiguously that neither the user organization nor its auditor understood what exactly the service organization was aiming at. Again, a comparison was made between the outsourcing contract and the user control considerations, which showed that no one-to-one connection could be made between the two documents. An intensive exercise between the user organization and the receiving auditor was needed in this case to make sure that all user control considerations were covered by the internal processes of the user organization.

As described earlier, such exercises are labor-intensive, which is not the purpose of an assurance report. In situations one, two and three alike, the additional work could have been limited if the service organization and the user organization had been in contact at an early stage about the scope of the assurance report.

4. Knowledge and expertise of the user organization

The previous three situations mainly concerned the content and wording of the user control considerations. They also touch, however, on the fourth situation that emerged from our analysis: the knowledge of user organizations is not always up to date when it comes to ISAE 3402 assurance reports.

  1. In situation one, the user organization only took action after a finding by the user auditor. The user organization did not know that user control considerations should have been identified in the assurance report.
  2. In situation two, the user organization assumed on the basis of the user control considerations that it did not have to perform any additional work for the assurance report, other than having its own internal control in order. That is of course a precondition for using an assurance report, but the vague and ambiguous wording creates uncertainty about exactly which internal controls and responsibilities this involves.
  3. In the third situation, the user organization did not know what to do with the proliferation of user control considerations. This again caused uncertainty, as well as a lot of additional work for the user organization and its auditor.

There can be various reasons for the limited knowledge and expertise regarding assurance reports at the user organization. One is that contract management and monitoring of the outsourcing relationship is part of a broader role of a single employee. The IT manager is responsible for it, for example, but also has to manage the IT department and keep the internal control in order. As a result, this person's available time is spread over different areas of work. Consequently, there is limited attention for the outsourced services, so that a thorough assessment of the outsourcing relationship and the ISAE 3402 assurance report leaves something to be desired.

In addition, the service organization may itself have outsourced parts of its operations, so that sub-service organizations are also included in the assurance reports. These sub-service organizations make the process chain more complex, which can make it hard for the user organization to grasp. The user organization then has to determine whether these sub-service organizations also affect the internal controls, which complicates the overall picture.

It is therefore important that the user organization is aware that it has to review the ISAE 3402 assurance report, draw conclusions from it and discuss these with its service organization in good time. This includes checking whether the user control considerations are part of the internal control of the user organization. In practice we see that user organizations often read the assurance report, but take little or no action as a result. For that reason, this article includes a step-by-step plan for reviewing an ISAE 3402 assurance report.

Step-by-step plan for reviewing an ISAE 3402 report

As described above, many user organizations struggle with a thorough assessment of the assurance report they receive. On the one hand this is because the necessary knowledge and expertise are not always present in the organization; on the other hand, the employee of the user organization who is responsible for it is also charged with other work. Figure 2 therefore presents a step-by-step plan in outline, aimed at a more efficient financial statement audit in the future, in which user organizations are prepared for reviewing an assurance report.

C-2017-3-Vught-02-klein

Figure 2. Step-by-step plan for reviewing an ISAE 3402 assurance report. [Click on the image for a larger image]

Step 1: Read the assurance report received

Take the time to go through the assurance report properly. The report does not have to be studied in detail; the text, in particular Section III containing the description of the service organization, can be read at a high level. As a reader you should get a feel for the content of the assurance report.

Step 2: Determine the report type, the period and the scope of the assurance report

In this step the assurance report is studied more thoroughly. The following questions should be answered:

What type of assurance report has the user organization received? For the financial statement audit, ISAE 3402 reports are the most suitable, since they are specifically intended for financial reporting. In some cases, however, an ISAE 3000 report can also be used for the audit, once it has been established that the relevant processes are included in the report. In addition, a user organization should prefer a type 2 report, because it also tests the operating effectiveness of the controls, and not just their design and existence, as a type 1 report does.

Which auditor prepared the assurance report? And can the user organization establish that this auditor has sufficient professional knowledge, reputation, objectivity and independence with respect to the service organization?

Which period does the assurance report cover? For the user organization's auditor to be able to rely on the controls of the service organization, the period covered will have to correspond to the period of the financial statement audit. In the case of a shorter or shifted period (for example an assurance report for the period 1 January to 30 September, while the financial statement audit runs to 31 December), a so-called bridge letter will have to be issued by the service organization, or additional procedures will have to be performed by the user auditor.
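As a minimal illustration of this period check, the sketch below compares the end date of the report period with the end date of the audit period and flags the gap that a bridge letter or additional procedures would have to cover. The dates are made-up examples.

from datetime import date, timedelta

# End of the period covered by the ISAE 3402 report and end of the financial
# statement audit period; both dates are fictitious examples.
report_end = date(2017, 9, 30)
audit_end = date(2017, 12, 31)

if report_end < audit_end:
    gap_start = report_end + timedelta(days=1)
    print(f"Coverage gap from {gap_start} to {audit_end}: "
          "request a bridge letter or perform additional procedures.")
else:
    print("The report period fully covers the audit period.")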

Are there sub-service organizations? The service organization may in turn have outsourced services; in that case it must be determined whether the so-called 'carve-out' or 'inclusive' method has been used. In the case of a carve-out, the services of the sub-service organization are not included in the assurance report, and the user organization has to consider whether this outsourcing is relevant for the services it purchases. It must also be determined whether user control considerations relating to this sub-service organization have been included. If the user organization considers the sub-service organization sufficiently relevant for its internal control, an assurance report should be requested from the sub-service organization in question. With the inclusive method, the controls are included in the assurance report, and the user organization only has to determine whether they are relevant for the services it purchases.

Step 3: Determine whether all outsourced services are part of the assurance report

In this step the user organization has to establish whether all outsourced services are covered by the control framework of the service organization. The user organization can perform this check by comparing the outsourcing contract, and the outsourced services agreed in it, with the user control considerations and the services covered in the assurance report of the service organization. If these match, there are no 'gaps' in the process and the chain is complete.
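A simple way to picture this comparison is as a gap analysis between two lists of services. The sketch below does exactly that; the service names are hypothetical, and in practice the comparison also covers the user control considerations and the contractual responsibilities.

# Illustrative gap analysis for step 3: compare the services agreed in the
# outsourcing contract with the services covered by the assurance report.
# The service names are hypothetical.
contracted_services = {"payroll processing", "payslip distribution", "year-end statements"}
services_in_report = {"payroll processing", "payslip distribution"}

not_covered = contracted_services - services_in_report
out_of_scope = services_in_report - contracted_services

if not_covered:
    print("Outsourced services not covered by the report:", ", ".join(sorted(not_covered)))
if out_of_scope:
    print("Report covers services that were not contracted:", ", ".join(sorted(out_of_scope)))
if not not_covered and not out_of_scope:
    print("Contract and report scope match; no gaps in the process chain.")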

If there are findings on the outsourced services, the user organization should discuss with its service organization what the impact of these findings is on the user organization.

Step 4: Determine whether all user control considerations are part of the internal control processes

In this step the user organization has to determine which user control considerations are included in the report and demonstrate to its auditor that these are part of its internal control processes. In this way the user organization shows that the process chain is closed.
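In practice this often comes down to maintaining a simple mapping from each user control consideration to an internal control of the user organization, so that unmapped items stand out. The sketch below shows such a mapping; the considerations and control references are invented for the example.

# Hypothetical mapping of user control considerations (UCCs) from the report to
# internal controls of the user organization; unmapped UCCs need follow-up.
uccs = {
    "UCC-01": "Input data is approved before it is sent to the service organization",
    "UCC-02": "Output reports are reconciled with the submitted input",
    "UCC-03": "User access requests are authorized by management",
}

internal_controls = {
    "UCC-01": "AP-03 Four-eyes approval of payroll mutations",
    "UCC-02": None,  # not yet covered by an internal control
    "UCC-03": "LM-07 Periodic user access review",
}

for ucc_id, description in uccs.items():
    control = internal_controls.get(ucc_id)
    status = f"covered by {control}" if control else "NOT covered, follow up with the process owner"
    print(f"{ucc_id}: {description} -> {status}")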

Dealing with user control considerations in a practical way

In addition to the step-by-step plan, we conclude with a number of practical tips for dealing with user control considerations in a proper way. The starting point for the user control considerations is that there is a process chain of which part has been outsourced by the user organization to a service organization. Some controls in the chain are performed by the service organization, but to close the chain the user organization still has to check a number of things itself. As indicated in the step-by-step plan, the user organization must therefore always determine which user control considerations are included in the assurance report. Based on our analysis of practice, we can distinguish three situations.

1. The user control considerations are clear and tailored to the situation

This situation is the most straightforward. The user organization's auditor has to verify that the user organization has demonstrably performed the requested test work. If this test work did not reveal any particularities, this can be documented together with the processing of the assurance report.

2. The user control considerations are missing from the assurance report

In this case the user organization will have to evaluate for itself whether it needs to perform additional controls to close the chain. If so, these will still have to be performed demonstrably, so that no gaps arise in the internal control. It goes without saying that in this situation there should be proper consultation with both the auditor of the user organization and the auditor of the service organization, so that clarity is created about the relevant user control considerations in the chain and about who performs them. In this way both the risk of gaps in the chain and the risk of duplicate work are prevented as much as possible, and improvements can be made to the overview of the chain controls in the future.

3. The user control considerations are present in the assurance report, but are insufficiently concrete or too numerous

In this situation the user organization will have to draw up its own risk analysis. This risk analysis will show which risks the user organization runs in the process chain. If the user control considerations are insufficiently concrete, it is wise to consider performing its own controls instead, controls that are actually relevant for covering the risks. If too many user control considerations have been included in the assurance report, the organization's own risk analysis can be used to document that the controls in the report are too broadly formulated or not relevant. Here too, reference can be made to the organization's own controls that have demonstrably been performed to cover the risks after all. In this situation it is also wise to coordinate closely with the auditor of both the user organization and the service organization, in order to reach consensus and make the situation transparent for the future.

Conclusion

Because of the current findings of the AFM, the implementation of the SSAE 18 standard in the United States and the DNB guidelines, user control considerations will play an important role in assurance reports in the coming years, for user organizations as well as service organizations. Based on our practical experience we see that there is room for improvement, and given the current developments there is also a need to make organizations more aware of user control considerations. Current practice causes a lot of ambiguity, uncertainty and extra work, for user and service organizations as well as their auditors. The parties involved would therefore do well to take the following recommendations into account in their future work on user control considerations.

To begin with, more clarity can be created in the wording. It is important that the service organization and its auditor look carefully at the formulation of the user control considerations. A user control consideration should contain no ambiguities and should be clear to both the service organization and the user organization. It should describe a norm that the receiving auditor can also test against. The service auditor can verify this by looking at the user control considerations from the perspective of the user organization.

In addition, the service organization and the service auditor have to strike a balance in the number of user control considerations they include in the assurance report. Although there are no guidelines on the number of controls to be included, it is a labor-intensive process for the user organization and its auditor to determine whether all user control considerations are relevant for the user organization and whether they are covered in the internal control process.

It is also important that the user organization is aware of the user control considerations at an early stage. Given the outsourcing contract, these should not come as a surprise, but the outsourcing relationship benefits from the responsibilities being clearly agreed. The outsourcing contract should therefore describe which party performs which activities and who is responsible in the event of incidents. In this way the user organization can properly organize its internal control and no 'gaps' arise in the process chain. If this is not specified sufficiently, the user organization may be in for surprises, such as additional procedures and extra costs from its auditor, if the user control considerations have not been embedded in the internal processes. After all, the user auditor has to verify that the entire process chain is covered, for the accuracy and completeness of the financial reporting.

Finally, it is important that the user organization keeps its knowledge of assurance reports up to date and is able to apply it in practice. It must know which elements are important for a healthy outsourcing relationship and what it should ask of its service organizations. The user organization's auditor can always, up to a point, advise and guide his client in this respect.

References

[AFM14] Autoriteit Financiële Markten, Uitkomsten onderzoek kwaliteit wettelijke controles Big 4-accountantsorganisaties, AFM, 25 September 2014.

[AFM17] Autoriteit Financiële Markten, Uitkomsten van onderzoeken naar de implementatie en borging van verandertrajecten bij de OOB-accountantsorganisaties en de kwaliteit van wettelijke controles bij de Big 4-accountantsorganisaties, AFM, 28 June 2017.

[Beek12] J.J. van Beek, Praktijkgids 4; Service Organisatie Control-rapport, ISAE3402, KPMG, 2012.

[NBA17] Koninklijke Nederlandse Beroepsorganisatie van Accountants, 402 Overwegingen met betrekking tot controles van entiteiten die gebruikmaken van een serviceorganisatie, Handleiding Regelgeving Accountancy, 2017.

[Palm16] D. Palmer, Clarified attestation standards – SSAE 18, KPMG, September 2016.

Horizontal Monitoring

KPMG is involved in the introduction of Horizontal Monitoring in hospitals and also issues the related assurance reports. This means that serious work is being done on the internal control of hospitals in order to meet the requirements of Horizontal Monitoring. This involves not only process improvement, but also improvement in the management and use of IT. In this article we explain what Horizontal Monitoring is, in particular what role IT plays in it and what challenges hospitals face.

Introduction

In the recent past, the relationship between Dutch healthcare insurance providers and Dutch hospitals was not based on trust, and it still isn't. Health insurance companies checked and reviewed hospital care invoices, resulting in a lot of corrections. To make matters worse, a large number of these corrections had to be processed manually, adding to the administrative workload of the hospitals. The insurers, in turn, were strictly supervised by the Dutch Healthcare Authority (Nederlandse Zorgautoriteit). The accountability that the insurers had to provide to this authority formed the basis for the strict controls they imposed on the hospitals.

This situation did not benefit the relationship between the hospitals and health insurance companies, causing a lot of frustration and distrust. The Horizontal Monitoring project was set up in order to improve this relationship. The concept of Horizontal Monitoring was introduced in the healthcare sector1 in 2016, primarily for medical specialist care (MSC). It is an initiative of three parties: Nederlandse Vereniging van Ziekenhuizen (NVZ, Dutch Association of Hospitals), Zorgverzekeraars Nederland (ZN, Association of Dutch Health Insurers), and Nederlandse Federatie van Universitair medische centra (NFU, Dutch Federation for University Medical Centers).

The slogan on the official website of Horizontal Monitoring reads: 'Horizontal Monitoring focuses on the legitimacy of care expenses within medical specialist care. This concerns registering and declaring correctly on the one hand, and the appropriate use of care on the other.'2 The text emphasizes the two most important issues that health insurance companies are facing: the legitimacy of the invoices and the appropriateness of care. Legitimacy is complicated by the many very specific and detailed rules with which hospitals have to comply. Therefore, within the framework of Horizontal Monitoring, hospitals are challenged to invoice correctly, which in turn should lead to fewer controls by the health insurance provider. In addition, a trusted third party issues an ISAE 3000 assurance report that confirms the measures (all included in a so-called control framework) taken by the hospital to mitigate the risks and achieve correct invoicing. This gives the healthcare insurance provider the comfort needed to reduce the usual checks and reviews.

In this article we would like to shed some light on the concept of Horizontal Monitoring in healthcare, what it has to offer in day-to-day practice and, within this concept, what challenges the participating hospitals will face in the area of IT.

Horizontal Monitoring practice

Horizontal Monitoring is not a new concept in the Netherlands: it has already been used in the tax domain. The basis for Horizontal Monitoring is mutual trust, and cooperation is the key word. By making arrangements between the tax authority and the company involved in the reporting process, the quality of the tax returns can be maintained and improved, preventing unnecessary additional work. Naturally, transparency plays an important role within Horizontal Monitoring. Horizontal Monitoring in healthcare is different, because a hospital makes arrangements with multiple healthcare insurance providers instead of with a single authority.

The scope of Horizontal Monitoring is legitimacy. The legitimacy of medical expenses consists of correct registration and billing, and effective care and medical necessity.

Legitimacy is based on three principles:

  1. ensure proper spending of current and future healthcare expenditures;
  2. give account of the social responsibility of these expenses;
  3. provide certainty about these expenses to all chain parties in an efficient, effective and timely manner.

Correct registration and invoicing is a straightforward concept: it is covered by controls put in place by hospitals to prevent incorrect invoices being sent to the healthcare insurance provider. The use of a specific set of controls makes it clear and testable.

Effective care and appropriate use, on the other hand, have more ambiguous characteristics and are therefore more challenging to test. For instance, a Diagnosis Treatment Combination (DBC) may contain a set of painkillers. Sometimes the patient does not need them (e.g. there is no pain), but receives them anyway, since they are covered by the health insurance provider. The prescription for painkillers is then unnecessary and hence not appropriate. But detecting and ruling on such cases takes time and effort.

DBC

DBC stands for Diagnosis Treatment Combination. A DBC is a care package with information about the diagnosis and the treatment that the patient receives. In hospital care and geriatric rehabilitation care, the DBC is also called a DBC care product.

Another example of proper use of healthcare is an early discharge of patients after a clinical operation with the use of proper nursing support at home. This form of care is much cheaper than care in hospital.

Because of this complexity, a growth model will apply to appropriate use in the coming years: elements of appropriate use will gradually be added to the control framework, so that from 2020 onwards the 'appropriate use' part of the Horizontal Monitoring control framework will be in place.

Contractual agreements, such as agreements about the quality of the delivered medical care, fall outside the scope of Horizontal Monitoring, at least for now.

C-2017-3-Tsjapanova-01-klein

Figure 1. The scope of Horizontal Monitoring. [Click on the image for a larger image]

In 2016 a pilot was started with a few hospitals and healthcare insurance providers to implement Horizontal Monitoring. In 2016 and early 2017, two of the pioneering hospitals worked closely with KPMG on the evaluation of their control framework. Together, the healthcare insurance providers, KPMG and these pioneering hospitals carried out extensive work on the control framework evaluation. A dedicated national working group for Horizontal Monitoring was set up, in which obstacles and nuances were discussed and agreements were made where appropriate. The pilot resulted in a type I assurance report (design of the control framework) for these two hospitals. These two hospitals are now in phase two, working towards a type II assurance report on the operating effectiveness of the controls. The other pilot hospitals will follow later.

Benefits of Horizontal Monitoring

Once Horizontal Monitoring has been properly implemented, it results in major benefits for the involved parties. Firstly and most importantly, the relationship between the hospitals and healthcare insurance providers will be improved, which should be the basis for a higher level of trust. These two parties will be able to work together towards the solutions, instead of finger pointing (principle one of Horizontal Monitoring is after all well-founded trust).

C-2017-3-Tsjapanova-02-klein

Figure 2. Ten principles of Horizontal Monitoring. [Click on the image for a larger image]

Secondly, the healthcare insurance providers will scale back their controls on the invoices submitted by the hospitals, resulting in savings in resources due to a lower control load.

Thirdly, for the hospitals, the discussion with the healthcare insurance providers about the correctness of the invoices will become easier because fewer errors will be made. This way they can focus on improving the registration and invoicing process, further reducing the possibility of incorrect registration. This means achieving first-time-right registration.

Fourthly, another benefit of Horizontal Monitoring is that the internal processes, and in particular process inefficiencies, become more visible to the hospitals. That creates room for improvement and for the use of more automated controls instead of manual ones.

Many of the manual controls can be automated and even executed in real time, slowly but surely shifting the focus from detecting registration and billing errors to improving the registration process. Hospitals can implement projects that contribute to improving the registration process. This further contributes to first-time-right registration, correct registration at the source, and well-managed IT systems and solutions.
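As a very simple illustration of such an automated, near real-time control, the sketch below validates a registration record before it is released for billing. The field names and rules are hypothetical and much simpler than the checks a real ZIS-EPD would need.

# Hypothetical automated control: a registration may only be released for
# billing when a few basic conditions are met. Field names and rules are
# illustrative only.
def billing_issues(registration):
    """Return a list of issues; an empty list means the record can be billed."""
    issues = []
    if not registration.get("diagnosis_code"):
        issues.append("missing diagnosis code")
    if not registration.get("activities"):
        issues.append("no activities registered for this DBC")
    if registration.get("end_date") is None:
        issues.append("DBC has not been closed yet")
    if not registration.get("authorized_by"):
        issues.append("registration not authorized by the responsible specialist")
    return issues

example = {"diagnosis_code": "G47.3", "activities": ["190021"],
           "end_date": None, "authorized_by": "specialist A"}
print(billing_issues(example))  # ['DBC has not been closed yet']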

Finally, when Horizontal Monitoring is completely settled, less monitoring will be required. The healthcare insurance providers might settle for a looser form of assurance, depending on the policies and agreements they made with the national supervisory authority.

The starting point for hospitals to achieve these benefits is admitting and facing the fact that real cultural and organizational changes are necessary to ensure correct and complete registration (first time right).

Horizontal Monitoring challenges

An assurance report from an independent auditor involves more than merely testing the risks and control measures that the hospitals and insurance companies have agreed upon; reliance on IT becomes crucial, which is why IT deserves specific attention.

In order to address the different topics in Horizontal Monitoring, a so-called entry model was introduced: before a hospital can participate in Horizontal Monitoring, a certain level of organizational maturity and internal control has to be reached. Based on COBIT and COSO principles, the Horizontal Monitoring entry model provides a five-level3 scoring system on topics such as strategy, enterprise stakeholders, soft controls, and the General IT Controls (GITCs). The hospital should score at least level '3' on average, with no score of '1' on any of the topics, in order to be able to enter the Horizontal Monitoring process with the healthcare insurance provider.
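The entry rule itself is easy to express. The sketch below applies it to a fictitious set of scores; the topic names follow the examples above, but the scores and the exact list of topics are assumptions.

# Sketch of the entry model rule: at least level 3 on average and no topic
# scored 1. The scores below are fictitious.
scores = {
    "strategy": 3,
    "enterprise stakeholders": 4,
    "soft controls": 3,
    "general IT controls": 2,
}

average = sum(scores.values()) / len(scores)
eligible = average >= 3 and min(scores.values()) > 1

print(f"Average maturity score: {average:.1f}")
print("Eligible for Horizontal Monitoring" if eligible else "Not yet eligible")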

Many of the control measures within the registration and invoicing process can be either manual or automated. For the automated controls, the role of the IT components becomes very important. On the one hand, automated controls save time and manpower; on the other hand, the GITCs have to provide a certain level of control in order to be able to rely on these automated control measures.

Challenges

One of the challenges that hospitals face is being in control of IT, while the current state of IT maturity in hospitals is rather low. As a result, many processes and tasks are executed manually or rely on numerous manual actions.

In the next section we will focus on the parties that can play an important role in helping the hospitals to achieve a higher level of IT maturity.

The cast of Horizontal Monitoring

In this section we name the actors in Horizontal Monitoring and their respective roles. So far we have introduced the insurance companies and the hospitals. In addition to these parties there are the auditors and the software providers. We will elaborate on each of them separately.

Auditors

As described above, the management of a hospital is responsible for the correct, complete and timely registration and billing of the delivered care. The relevant risks must be mitigated by adequate control measures (the first and second lines of defence). Management will then be able to declare to the health insurance companies that it has set up the system of control measures adequately, that it operates as intended, and that the risks are thereby under control. The first role of an external auditor is therefore to carry out the assurance engagement mentioned earlier on this system of internal control measures, so that additional assurance is provided to the health insurance company.

Furthermore, a hospital may have a so-called 'third line of defence', usually performed by an Internal Audit Department (IAD) or a dedicated Internal Control Department (ICD). An external auditor can then still carry out an assurance engagement, but the character of his activities will be different, because they will focus more on the work performed by this third line of defence.

A third role the auditor can take on is closer to that of an advisor: the auditor can support hospitals in effectively structuring the system of internal control measures for Horizontal Monitoring. In particular, the IT auditor can add value by identifying and structuring more automated control measures (application controls).

Software providers

Currently we see that the suppliers of ZIS-EPD software (hospital information systems with electronic patient records) do not equip their solutions in an explicit and structured manner with specific automated control measures aimed at covering the risks within Horizontal Monitoring. This terrain is still undeveloped: there is no common national framework in which the various automated control measures are included. It is therefore understandable that the software developers do not know exactly which application controls they should develop.

If we want to help hospitals reduce the burden of internal controls, particularly with the implementation of Horizontal Monitoring, there will have to be a broadly supported framework containing the largest possible set of automated control measures. Implementing these in the registration and billing processes of the hospitals automatically contributes to the 'first time right' principle, because incorrect registrations are then hardly possible anymore.

There is an important role here for software developers to build the automated controls that support the control framework, but there is also a clear task for the IT auditors (NOREA) to develop a broadly supported control framework that serves as a foundation for the software developers. This yet-to-be-developed framework can then be the basis for a quality mark that software developers can obtain if they have incorporated automated control measures for Horizontal Monitoring.

The improvements in IT

When there is a more efficient relationship between automated and manual controls at hospitals, the question concerning the quality of IT General Controls becomes even more important. The correct functioning of the IT general controls is of course a condition for the reliable operation of these automated measures. As we have already mentioned above, most hospitals will have to make improvements in this area.

So, has Horizontal Monitoring already brought about these improvements in IT? In our opinion the answer is 'no'. The next step certainly lies in improving the system of internal control measures, making use of more automated controls. In addition, developments in IT will influence Horizontal Monitoring.

Future developments

We also expect that new IT technologies (robotics, eHealth, etc.) will play a role. There are currently proof of concepts in which intelligent, learning software (machine learning) scans the EPDs via text mining and, on the basis of all this data, independently derives the diagnosis and the care provided. The first tests are promising, and the moment when the quality of this exceeds that of the manual registrations by the care professional is close at hand. This is good news, because the care professional should be providing 'care'; the administration can then be left to the robots. If this improves the quality of registration and billing and reduces the time that professionals spend on administrative tasks, nobody can object. Horizontal Monitoring can then be seen in a new light with these future perspectives.
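To give an idea of what such a text-mining proof of concept looks like in its simplest form, the sketch below trains a tiny classifier that maps free-text notes to a specialty label. The notes and labels are invented toy data; deriving actual diagnoses and delivered care from EPDs is of course far more involved.

# Toy sketch of deriving a label from free-text notes with text mining.
# The notes and labels are invented; real EPD mining is far more complex.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports chest pain and shortness of breath",
    "fracture of the left wrist after a fall, cast applied",
    "persistent chest pain radiating to the left arm",
    "wrist swollen and painful, X-ray shows a fracture",
]
labels = ["cardiology", "orthopedics", "cardiology", "orthopedics"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

print(model.predict(["patient complains of chest pain during exercise"]))
# likely ['cardiology']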

Conclusion

Horizontal Monitoring is and will remain a challenge. At this moment the parties involved are still struggling to say whether Horizontal Monitoring is a blessing or a curse. However, once this stage is over, they will benefit greatly from it.

A few quotations from Henry Ford

‘Don’t find fault, find a remedy.’

Right now the main goal is to mitigate the risk of errors, but eventually the goal should be to improve the process itself so that errors are prevented at the source.

‘Quality means doing it right when no one is looking.’

Eventually, when the hospitals and the healthcare insurance providers have stabilized the Horizontal Monitoring approach, the hospitals will be able to ensure correct registration and billing on their own, reducing the need for constant monitoring and opening the door for further improvement.

Notes

  1. In the remainder of this text we only discuss Horizontal Monitoring in healthcare.
  2. The quote is translated from www.horizontaaltoezichtzorg.nl.
  3. The categories are from one to five: 1) initial; 2) informal; 3) standardized; 4) controlled; 5) optimized.

RegTech: closing the circle

RegTech is the term used for technology that facilitates the delivery of regulatory requirements more efficiently and effectively than existing capabilities ([FCA15]). It is thought that regulatory oversight and compliance will benefit tremendously from current technological progress, by letting technological breakthroughs tackle the rise in the volume and complexity of legislation. RegTech is seen as a new industry with the potential to lower costs for existing stakeholders, break down the barrier of entry for new FinTech players, eliminate existing barriers within the regulatory lifecycle and possibly enable a (near) real-time regulatory regime.

Introduction

The New York Times Magazine article ‘The great A.I. awakening’ ([Lewi16]) tells the story of how cognitive technology revolutionized the Google Translate service, significantly improving the quality of its output. Translation has traditionally been a highly skilled profession, requiring among other things language skills, domain know-how and creativity. Until Google embraced Artificial Intelligence (A.I.), the Google Translate service, introduced in 2006, had improved steadily, but no spectacular advances had been made in recent years. As a result, the service was used to support the translation of text, but had so far not replaced ‘professional’ translators. However, the improvements in translation introduced in November 2016 have made such an impact that users of the service have started to challenge each other to distinguish a Google translation from a ‘professional’ translation. Google’s chief executive Sundar Pichai explained during the opening of a new Google building in London in November 2016 that the progress was due to an ‘A.I. driven’ focus and that the development responsible for the gain in performance took only 9 months.

One year earlier, in November 2015, the FCA, the UK Financial Conduct Authority, openly stated its support for the use of new technology to ‘facilitate the delivery of regulatory requirements more efficiently and effectively than existing capabilities’ ([FCA15]). The set of technologies that the FCA expects to revolutionize the regulatory domain is called RegTech. Similar to Google, new technologies such as A.I., machine learning and semantic models are seen as the driver to ‘predict, learn and simplify’ regulatory requirements ([FCA16]). Will RegTech have the same impact on regulatory compliance as A.I. had on Google Translate, or is RegTech a media hype?

This article provides a concise introduction to RegTech and in particular the Semantic Web. It describes the current role of the Semantic Web within the domain of regulatory compliance by focusing on two important requirements: 1) accessing legislative documents and 2) referencing legislative documents. The article uses examples from European regulation for the financial sector, but its observations apply to legislation in general.

Regulatory Technology (RegTech) overview

The UK Financial Conduct Authority (FCA) was the first supervisor to use RegTech in an official document in 2015. It made a ‘call for input’ from all regulatory stakeholders in November of that year that resulted in a feedback statement published in July 2016 ([FCA16]). In this report, the responses of over 350 companies were clustered using the following four main RegTech topics, describing the technology/concepts applicable to tackling regulatory challenges:

  1. Efficiency and collaboration: technology that allows more efficient methods of sharing information. Examples are shared utilities, cloud platforms, online platforms.
  2. Integration, standards and understanding: technology that drives efficiencies by closing the gap between intention and interpretation. Examples are semantic models, data point models, shared ontologies, Application Programme Interfaces (API).
  3. Predict, learn and simplify: technology that simplifies data, allows better decision making and the creation of adaptive automation. Examples are cognitive technology, big data, machine learning, visualization.
  4. New directions: technology that allows regulation and compliance processes to be looked at differently. Examples are blockchain/distributed ledger, inbuilt compliance, biometrics.

The topics identified by the respondents to the call for input provide an overview of concepts that can be used to achieve efficient regulatory compliance. Specifically regarding access to and identification of regulation, the respondents made additional suggestions:

  • define new (and existing) regulations and case law in a machine readable format;
  • create consistency and compatibility of regulations internationally;
  • establish a common global regulatory taxonomy.

Similar RegTech studies by other organizations, such as the Institute of International Finance ([IIF16]), have identified more or less the same technologies to support regulatory compliance. The question remains: what is the current status of these technologies? Have they already been applied within the domain of regulation? Is RegTech a case of ‘old wine in new bottles’? To examine this, it is useful to look at the regulatory lifecycle itself, i.e. the steps that every company has to follow to comply with rules and legislation.

The Regulatory lifecycle

Much has been said about the nature, size and complexity of financial sector regulation before and after the crisis of 2008. Many of the newly imposed rules address issues identified since then. In addition, the regulators have introduced or are in the process of introducing legislation, rules and standards specifically to cover technological developments, such as: legislation and/or guidelines for High Frequency Trading (HFT), automated/robo advice, digital payments, the distributed ledger technology and digitization in general. Thus, although the crisis itself is almost ten years old, introduction of new legislation or recasts of older rules continues and has become a way of life. Legislation itself can be discussed using four phases: 1) initiation, 2) discussion/consultation, 3) implementation/enforcement and 4) in effect (see Figure 1).

C-2017-2-Voster-01-klein

Figure 1. Regulatory Horizon Financial Sector ([KPMG17]). [Click on the image for a larger image]

However, it is generally agreed that legislation by itself is not the key to lower risks and a healthy economic environment. The ability of the stakeholders (industry and supervisors alike) to identify, assess, implement, comply with and monitor regulatory obligations is also important (see Figure 2). Early identification of and access to legislation is vital to enable those responsible to meet their responsibilities. It allows the stakeholders to provide feedback to legislators and authorities involved in writing and amending legislation. The next step in the regulatory lifecycle starts once a regulation has been officially accepted by the legislative bodies and consists of an impact and gap analysis by the relevant organizations and supervisors. Following this assessment, stakeholders have to implement the identified requirements and address the resulting gaps. Once the implementation has taken place, industry and supervisors have to monitor regulatory compliance and report to the supervisors where required, thus closing the circle.

C-2017-2-Voster-02-2

Figure 2. The Regulatory Lifecycle.

A regulatory lifecycle is determined by economic and political events and by the formal regulatory process as defined by the legislator (see the box ‘The Lamfalussy process’).

The Lamfalussy process

Development of EU financial service industry regulations is determined by the Lamfalussy process, an approach introduced in 2001 (see Figure 3). The Lamfalussy process recognizes four levels (1 to 4). It may take up to ten years before all acts, standards and guidance required by the four levels are drafted, discussed, approved and enforced.

C-2017-2-Voster-03-2-klein

Figure 3. The Lamfalussy process. [Click on the image for a larger image]

The initiation of numerous applicable regulations in combination with an elaborate regulatory process means that regulation requires management attention, creativity, technological and human resources plus significant capital expenditure by all stakeholders. Not complying is not an option, as non-compliance has legal and public consequences for financial companies, such as reputational damage, loss of trust and penalties. Therefore, stakeholders have to get it right. First and foremost, it is important to identify, access and capture the correct requirements from the regulations: ‘show me the boy and I will show you the man’. However, there are barriers to overcome; the most significant ones are discussed in the next section.

Barriers to access, interpretation and linkage

What are the barriers to accessing legislation? First of all, there is the question of copyright. Recent research has shown that ‘the current uncertainties with respect to the copyright status of primary legal materials (legislation, court decisions) and secondary legal materials such as parliamentary records and other official texts relevant to the interpretation of law, constitute a barrier to access and use’ ([Eech16]). The same paper states ‘that legal information should be ‘open’. It seems that the strong focus on licenses as an instrument to ensure openness has buried the more fundamental question of why legal information emanating from public authorities is not public domain to start with, doing away with the need for licenses’ ([Eech16]). Europe has addressed this issue in the Public Sector Information (PSI) directive. This directive provides a common legal framework for a European market for government-held data. The consistent application of the directive in national legislation remains an issue.

Apart from the copyright issue, there is the barrier due to the differences in legislation between individual EU member states and the differences between the European Union and member states. The language of the legislation is an obvious difference. Legislation at member state level is published in the official language(s) only. There is not a single common language that is supported by all member states.

The development of legislation at member state level differs significantly as well. There is no common legislative process, like the Lamfalussy process, and no compatible legislative model shared between member states. For example, in Germany detailed rules are mostly written directly into the legislation. UK legislation, however, is far less detailed, and companies look at what the UK prudential authority (PRA) and the conduct authority (FCA) publish in their respective rulebooks.

Member states also make use of ‘gold-plating’. Gold-plating is defined by the European Commission as ‘an excess of norms, guidelines and procedures accumulated at national, regional and local levels, which interfere with the expected policy goals to be achieved by such regulation’ ([EC14]). Gold-plating undermines the ‘single market’ principle of the EU and the level playing field within one European market. The practice of gold-plating is a barrier to access to national markets, it restricts scalability and it creates extra costs for cross-border companies.

The question of linkage remains. Current legislation is characterized by numerous direct and indirect references to other international, regional and national legislation, policies and standards (see Figure 4). This is due to different causes.

C-2017-2-Voster-04-klein

Figure 4. Linkage European Legislation. [Click on the image for a larger image]

One of the causes of linkage is the legislative process itself. The European legislative process, including the Lamfalussy process, supports the concepts of a directive and a regulation. A regulation applies directly to all member states and is binding in its entirety, whereas a directive must be transposed into national law and is binding only with respect to its intended result. The consequence of such a legislative model is that national legislative texts and policies refer or link directly to the applicable European regulations.

Another cause of legislative linkage is the initiation of legislation itself. Initiation is a political action and politics occurs at all levels. A large number of international financial sector legislative initiatives are agreed upon at G20 meetings. Other international agreements on environmental, military, human rights or other topics are developed in similar multilateral or bilateral meetings. Consequently, many international organizations play a role in developing legislation or standards resulting in complex, intertwined legislation and rules. This is reflected in the number of organizations that play a role in drafting or monitoring these rules and regulations. The following is a non-exhaustive list of just a few organizations that play a role in the lifecycle of financial sector legislation: the Financial Stability Board (FSB), the International Organization of Securities Commissions (IOSCO), Bank For International Settlements (BIS), the World Bank, the International Monetary Fund (IMF), the Organization for Economic Co-Operation and Development (OECD), the European Central Bank (ECB), the European Commission and the three European supervisory authorities: ESMA (Securities and Markets), EBA (Banking) and EIOPA (Insurance and Pensions).

Access to Regulatory Requirements

In general, despite the above issues, the many international, European and national legal resources are currently accessible via the Internet, albeit via many different channels. The main access point for EU (financial) regulation is EUR-Lex. EUR-Lex is an Internet-based service that can be accessed using a browser, RSS feeds or web services and supports multiple formats such as HTML, PDF and XML. National regulation can be accessed via the EU service N-Lex. Furthermore, EUR-Lex also supports searching for specific national transposition measures.

Access to European Union law within EUR-Lex is simplified by the support of CELEX (Communitatis Europeae LEX) numbers. The CELEX number is the unique identifier of each document in EUR-Lex, regardless of language. An EUR-Lex document is allocated a CELEX number on the basis of a document number or a date. Documents are classified in twelve sectors and CELEX allows for the identification of such a sector. Examples of sectors are: sector 1 – treaties, sector 2 – international agreements, sector 3 – legislation, sector 7 – national implementing measures, etc. The sector also defines the type of document supported.

The example of the water framework directive (see Figure 5) clarifies the format of a CELEX number. The water framework directive is a legislative document, hence sector 3, accepted in the year 2000. The legislation sector supports three types of documents: a regulation (R), a directive (L) and a decision (D). In the case of the water framework directive, the CELEX number assigned is thus: 3 (sector), 2000 (year), L (for directive) and 0060 (a sequential document number).

C-2017-2-Voster-05

Figure 5. CELEX.
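
As a worked illustration of the CELEX structure described above, the short Python sketch below composes the number for the water framework directive and turns it into an EUR-Lex URL. The helper names are invented for this example, and the URL pattern is the one visible on the public EUR-Lex site at the time of writing; treat both as illustrative assumptions rather than an official API.

```python
# Minimal sketch: composing a CELEX number and an EUR-Lex URL.
# Sector and document-type codes follow the article's example (sector 3,
# directive 'L'); the URL pattern is assumed from EUR-Lex's public website.

DOC_TYPES = {"regulation": "R", "directive": "L", "decision": "D"}  # legislation sector

def celex_number(sector: int, year: int, doc_type: str, number: int) -> str:
    """Build a CELEX identifier, e.g. 32000L0060 for the water framework directive."""
    return f"{sector}{year}{DOC_TYPES[doc_type]}{number:04d}"

def eur_lex_url(celex: str, lang: str = "EN") -> str:
    """Assumed EUR-Lex URL pattern for retrieving a document by CELEX number."""
    return f"https://eur-lex.europa.eu/legal-content/{lang}/TXT/?uri=CELEX:{celex}"

if __name__ == "__main__":
    celex = celex_number(sector=3, year=2000, doc_type="directive", number=60)
    print(celex)             # 32000L0060
    print(eur_lex_url(celex))
```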

At the member state level, all European member states have their own access channels and data formats. For example, in the Netherlands access to and search of legislation is provided via wetten.overheid.nl. In addition, related policies and standards for the financial sector, drafted by the Dutch Central Bank (DNB) and the Dutch Authority for the Financial Markets (AFM), are available via www.toezicht.dnb.nl and www.afm.nl/en/professionals/onderwerpen. Unfortunately, support for languages other than Dutch is limited. The actual legislation (wetten.overheid.nl) is available in Dutch only. Some financial topics are published in English by the AFM. The Dutch Central Bank provides a best practice, as English versions of all topics covered in its ‘open book supervision’ are available.

The UK has a similar set-up to the Netherlands, with legislation made available at www.legislation.gov.uk and the prudential and business conduct rulebooks available via www.prarulebook.co.uk and www.handbook.fca.org.uk respectively. No language other than English is supported.

All in all, access to regulation, standards, rules and related policies is possible but highly fragmented. A unique number per document, such as the CELEX number, is only used within EUR-Lex. The exchange of data relating to legislation is severely hindered by local differences between legal systems at national, regional (EU) and international level. The next sections describe the Semantic Web and related initiatives. The Semantic Web, a set of technology standards that together support access and data formats, enables seamless digitization of law. The European and international initiatives address the issue of direct and indirect linkage between the different types and levels of legislation by embracing the Semantic Web.

The Semantic Web

Standardization of access to and referencing of legal documents is underway at both international and European level. The goal of these initiatives is to allow for harmonized and stable referencing between national and international legislation and to enable faster and more efficient data exchange, navigation, search and analysis. The majority of the initiatives build upon the concepts of the Semantic Web (see Figure 6).

C-2017-2-Voster-06-klein

Figure 6. The Semantic Web. [Click on the image for a larger image]

The Semantic Web is a concept developed by Berners-Lee, famous for inventing the world wide web and founder of the World Wide Web Consortium (W3C), the forum for the technical development of the Web. The Semantic Web can be defined as ‘an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation’ ([W3C17]). The world wide web is dominated by HTML, which focuses on the markup of information, allowing for links between documents. Semantic Web standards improve on this by focusing on the meaning of the information, supporting among other things the concepts of metadata, taxonomy and ontology.

The difference between taxonomy and ontology

The terms taxonomy and ontology are often misunderstood and confused. A taxonomy typically classifies groups of objects that may include a hierarchy, such as the parent-child relationship. An ontology is a more complex structure that in addition to a classification describes the group of objects by naming the attributes or properties of the groups and the relationships between the attributes and groups. The ‘Nederlands Taxonomie Project’ (NTP) is an example of a Dutch taxonomy initiative based on XBRL in the accounting, tax and statistics domain.

The Semantic Web is a stack of complementary technologies (see Figure 6) and supports a wide range of concepts. The bottom layer uses two standards: Unicode, which provides the electronic standard for representing characters, and the Uniform Resource Identifier (URI), which allows the naming of entities accessible on the web. The second layer is based on XML, a language that supports the definition of tags to express the structure of a document and the concept of metadata, allowing automated processing. The Resource Description Framework (RDF) standard in the third layer enables the inclusion in documents of machine-readable statements on relevant objects and properties. An RDF schema supports the concept of a taxonomy (see the box ‘The difference between taxonomy and ontology’). Ontologies, the specification of concepts and conceptual relations, and rules can be defined using the Web Ontology Language (OWL) and the Rule Interchange Format (RIF). The top three layers of the Semantic Web support logic, the expression of complex information in formal structures, the use of this logical information, and the concept of trust (confidentiality, integrity, authenticity and reliability) ([Sart11]). On top of the Semantic Web we find the applications and user interfaces that make use of the underlying stack.

All in all, the Semantic Web facilitates access to information, separates content from presentation, supports references to other documents that support the Semantic Web and allows information such as metadata, taxonomy and ontology to be used by other applications and systems.
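
To make the role of RDF and metadata more concrete, the sketch below uses the Python rdflib library to state a few machine-readable facts about the water framework directive and serialize them as Turtle. The ‘ex:’ vocabulary and the document URI are illustrative stand-ins, not the official ELI or Akoma Ntoso ontologies.

```python
# Minimal sketch of RDF metadata for a piece of legislation, using rdflib.
# The 'ex:' vocabulary and URIs below are invented for illustration; real
# initiatives such as ELI define their own ontologies.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/legislation#")              # hypothetical vocabulary
directive = URIRef("http://example.org/eu/directive/2000/60")  # hypothetical identifier

g = Graph()
g.add((directive, RDF.type, EX.Directive))
g.add((directive, DCTERMS.identifier, Literal("32000L0060")))  # CELEX number
g.add((directive, DCTERMS.title, Literal("Water Framework Directive", lang="en")))
g.add((directive, EX.transposedBy, URIRef("http://example.org/nl/act/waterwet")))

print(g.serialize(format="turtle"))
```

Because the statements are triples rather than markup, another system can query them (for example, ‘which national acts transpose this directive?’) without parsing the document text itself.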

Akoma Ntoso and the European Legislation Identifier

There are two major but currently incompatible initiatives to harmonize access and references between legal documents based on the Semantic Web. The international initiative is supported by the United Nations and is called ‘Akoma Ntoso’. The European Commission has thrown its considerable weight behind a European initiative called the European Legislation Identifier (ELI).

Akoma Ntoso (‘linked hearts’ in the Akan language of West Africa) defines a set of technology-neutral electronic representations in XML format of parliamentary, legislative and judiciary documents ([Akom16]). The main purpose of Akoma Ntoso is to develop and define a number of connected standards: a common standard for the document format and document interchange, a schema and ontology for data and metadata (see Figure 7), plus a schema for citation and cross-referencing.

C-2017-2-Voster-07-klein

Figure 7. Legislation and metadata. [Click on the image for a larger image]

There are plenty of examples of how and where the Akoma Ntoso standard is used, such as:

  • the European Parliament: to document amendments to the proposals of the European Commission and the Council of the European Union, and the reports of the parliamentary committees;
  • the Senate of Italy and the Library of Congress: for the publication of legislation.

Based on Akoma Ntoso, the Organization for the Advancement of Structured Information Standards (OASIS) has started the LegalDocML project. One of the main supporters of this project is Microsoft.

The European system to make legislation available online in a standard format started officially in 2012 and is called the European Legislation Identifier (ELI). The purpose of ELI is to provide access to information about the EU and member state legal systems ([RDK16]). The introduction of ELI is based on a voluntary agreement between EU member states, and its goals are similar to those of Akoma Ntoso, but with a European focus. ELI provides for harmonized and stable referencing of European and member state legislation, a set of harmonized metadata and a specific language for exchanging legislation and case law in machine-readable formats.

ELI is supported by the following countries: Denmark, Finland, France, Ireland, Italy, Luxembourg, Norway, Portugal and the United Kingdom. All nine countries have representatives in the ELI taskforce. In addition to these countries, the taskforce is also supported by the EU Publications Office.

Both Akoma Ntoso and ELI offer a number of potential benefits ([RDK16], [Akom16]):

  • improving the quality and reliability of legal information online;
  • enabling easy exchange and aggregation of information by supporting indexing, analyzing and storing documents;
  • improving the understanding of the relationships between national and regional/international legislation;
  • encouraging interoperability among information systems by structuring legislation in a standardized way, while taking account of the specific features of different legal systems;
  • making legislation available in a structured way to develop value-added services while reducing the need for local/national investments in systems and tooling;
  • shortening the time to publish and access legislation;
  • making it easier to follow up work done by governments, and promote greater accountability.

Overall, both standardization initiatives support interoperability and transparency, and allow for the development of more intelligent cognitive applications for legal information ([RDK16]).

Conclusion

The use of RegTech as a label to bundle and identify technologies that can benefit regulatory compliance is a good initiative. It provides support in meeting regulatory obligations and allows all stakeholders to discuss the applicability of these technologies. The use of the Semantic Web to access, navigate and control legislative rules and regulations, and to encourage interoperability among related IT systems, shows that progress has been made within the regulatory domain. The Semantic Web can identify legislation in a consistent manner, describe it using a coherent set of metadata, and publish legislation and case law online in a machine-readable form.

The added value of legislative information made accessible in this way is that simple queries can retrieve legislation from various legislative sources, helping national and cross-border organizations to gather a complete set of relevant regulatory obligations. Metadata can, for example, be used to produce transposition timelines by type of law.

In general, the use of the Semantic Web in combination with legislative documents allows organizations to carry out control and compliance activities more efficiently in complex legal domains, such as the financial sector, where there is a need to consult numerous laws, regulations and standards from different jurisdictions ([ELIW16]).

References

[Akom16] Akoma Ntoso, XML for parliamentary, legislative & judiciary documents, March 2017, www.akomantoso.org.

[EC14] European Commission, Gold-plating in the EAFRD, Directorate General For Internal Policies, 2014.

[Eech16] Mireille van Eechoud and Lucie Guibault, International copyright reform in support of open legal information, working paper draft, September 2016.

[ELIW16] ELI Workshop, 2016.

[FCA15] Financial Conduct Authority, Call for input on supporting the development and adopters of RegTech, November 2015.

[FCA16] Financial Conduct Authority, Feedback Statement (FS16/4): Call for input on supporting the development and adopters of RegTech, July 2016.

[IIF16] The Institute of International Finance, RegTech in Financial Services: Technology Solutions for Compliance and Reporting, March 2016.

[KPMG17] KPMG, The regulatory horizon, March 2017, http://blog.kpmg.nl/regulatory-horizon.

[Lewi16] Gideon Lewis-Kraus, The Great A.I. Awakening, The New York Times Magazine, November 14, 2016.

[RDK16] Retsinformation.dk, Easier access to European legislation with ELI, March 2016, www.retsinformation.dk.

[Sart11] G. Sartor, M. Palmirani, E. Francesconi and M.A. Biasiotti (Eds.), Legislative XML for the Semantic Web, Springer, 2011.

[W3C17] W3C, The Semantic Web Made Easy, W3C.org, 2017, www.w3.org.

Technology-Enabled Internal Audit

Are internal audit departments effectively responding to emerging risks and opportunities? And how can they play a key role in accelerating the firm’s digital revolution? Supported by two use cases, we argue that internal audit can accomplish this by embedding technology at the core of the audit approach, through integrated data analytics and the effective use of cognitive analytics.

Introduction

In early 2016 KPMG and Forbes surveyed ([Loon16-1]) more than 400 chief financial officers and audit committee chairs and found that 90% of the respondents feel their internal audit function is not adequately identifying and responding to emerging risks. The common theme was that internal audit (IA) needs to be more proactive in identifying and mitigating risk, not just assessing the controls already in place. And one of the identified challenges for internal audit is to use more technology, data and analytics in their audit approach and methodology.

It is no longer useful to utter phrases like ‘technology is the future’. If companies are not fully integrating technological advancements in every area of business, no degree of strategic prowess is going to make a measurable impact. How internal audit is conducted is no exception and technology lies at the core of how to manage current and emerging risks ([Loon16-2]).

The question is: how can IA embed technology in its audit approach and methodology and support its organization not only to respond effectively to emerging risks and opportunities, but also to accelerate its digital revolution? Below we discuss two themes that lie at the core of this question.

C-2016-4-Maes-01

Figure 1. Pro-active demands on the organization.

C-2016-4-Maes-02

Figure 2. Key skills required for the internal auditor.

Fully integrated data analytics

In the past few years, data analytics have helped to revolutionize the way in which companies assess and monitor themselves, especially in terms of efficiently expanding the scope of audits and improving the level of detail to which audits can be performed. Data analytics and continuous auditing can help IA departments deal with the increasingly complex environment by improving their audit processes, resulting in higher-quality audits and tangible value to the business. Consider the traditional audit approach, which is based on a cyclical process that involves manually identifying control objectives, assessing and testing controls, performing tests, and sampling only a small population to measure control effectiveness or operational performance. Contrast this with today’s methods, which use repeatable and sustainable data analytics that provide a more thorough and risk-based approach. With data analytics, companies have the ability to review every transaction (not just samples), which enables more efficient analysis on a greater scale. For example, if an internal audit function provides assurance for a set of transactions recorded in a system (e.g. invoices), the audit opinion is no longer based on a sample review (e.g. 30 out of 100,000 invoices), but on every transaction, i.e. the whole population of 100,000 invoices.
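
As an illustration of what testing the whole population can look like in practice, the sketch below uses pandas to screen every invoice rather than a sample. The file name, column names and time window are hypothetical; the point is that each test touches all records.

```python
# Minimal sketch: full-population invoice tests instead of sampling.
# File and column names (invoices.csv, invoice_id, vendor, amount, approver,
# entered_by, posting_date as a timestamp) are hypothetical examples.
import pandas as pd

invoices = pd.read_csv("invoices.csv", parse_dates=["posting_date"])

# 1. Potential duplicate payments: same vendor, amount and posting date.
duplicates = invoices[invoices.duplicated(
    subset=["vendor", "amount", "posting_date"], keep=False)]

# 2. Segregation-of-duties flag: the person who entered the invoice also approved it.
self_approved = invoices[invoices["entered_by"] == invoices["approver"]]

# 3. Postings outside business hours (assumes posting_date carries a time component).
after_hours = invoices[~invoices["posting_date"].dt.hour.between(7, 19)]

for name, hits in [("duplicates", duplicates),
                   ("self-approved", self_approved),
                   ("after hours", after_hours)]:
    print(f"{name}: {len(hits)} of {len(invoices)} invoices flagged")
```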

An integrated approach to using data and analytics throughout the audit process (for example, analytics driven continuous auditing, dynamic audit planning, audit scoping and planning, audit execution and reporting) would provide greater insights and value.

In fact, a fully integrated, automated Internal Audit platform would transform and advance the way that audits are conducted. The desire to move toward such a technology-enabled approach already exists: when asked about the key skills needed in internal audit, the survey showed that technology (62 percent) is second only to communication (67 percent) in importance, while critical thinking and judgment came in third (52 percent). It stands to reason that a solid technology platform with the capacity for advanced, enterprise-wide data and analytics, and a progressive feedback mechanism, would make for a distinctly efficient and effective internal audit function. The added value of Internal Audit increases if it can integrate a higher percentage of data analytics procedures into its audit approach.

Currently, 63 percent of companies use data and analytics technology in isolated or specific instances only, or it exists within discrete functions. It is predicted that this statistic will drop to 50 percent in the next three years, while the use of enterprise-wide risk-focused D&A capabilities will increase from 35 percent to 47 percent.

If IA were to operate through an integrated technology platform, the incorporation of risk assessment, D&A, knowledge and experience would advance IA’s potential to deliver significant added value, particularly in monitoring emerging risk, assessing risk coverage, and facilitating data-driven decisions that provide actionable insights into the strategic drivers of the business, optimizing both business performance and risk mitigation.

This platform would provide dynamic, near-real time reporting that unlocks the intellectual capital of a business, exposes the root cause of problems and enables internal auditors to help deliver not only added value, but measurable value. This would go a long way toward enhancing the status of Internal Audit in businesses, going so far as to create a model that may, in time, become the standard.

We noted that the survey responses not only identify a ‘value gap’, but that they also point to specific actions that would elevate the value of the Internal Audit function and create a new standard of delivery by:

  • providing actionable insights into the risks that matter and increase the focus on emerging risks;
  • embracing technology and the benefits of D&A to increase audit quality, improve the quality of audit evidence and facilitate the discovery of new insights;
  • leveraging an audit management platform that automates significant portions of Internal Audit service delivery and allows consistent and full execution of your Internal Audit methodology.

C-2016-4-Maes-04

Figure 3. Data and analytics deployment now and in the future.

Randstad

KPMG is currently assisting Randstad, a multinational human resources service provider, with implementing a global data analytics methodology and platform for its internal audit & risk department. Randstad’s vision is to set up sustainable analytics that take the requirements from an auditor’s perspective into account. Contrary to common perception, the key challenges for such an endeavor do not lie in which extraction tools, analytics engine and visualization environment to use.

So what makes such an undertaking difficult? Randstad operates local firms in 39 countries, each of which has a large degree of autonomy to enable it to cater to the needs of the local market. However, this has led to a diverse landscape of internal processes and information systems, each with its own data model and architecture. An additional layer of complexity is added by regulations regarding data privacy, payroll, tax and data export, which vary per region and can be quite stringent.

To address these difficulties, a robust and standardized data analytics audit methodology has been developed to accompany the new analytics platform. At its core lie a global data model and a set of guiding principles regarding data transfer, data privacy and data retention. It is operationalized through an audit execution plan that focuses heavily on bringing together critical knowledge and key stakeholders from the local firm and the global internal audit department: to understand the local processes, systems and data models, to map local data to the global model, to ensure that data is gathered and transferred in line with the principles, and to manage and assure the quality of the results from the analytics platform.
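
The sketch below illustrates the kind of mapping step this methodology describes: local field names from one country’s system are translated to a global data model before analysis. The mapping table, field names and data are purely hypothetical and are not Randstad’s actual model.

```python
# Minimal sketch: mapping a local payroll extract onto a global data model.
# The column names on both sides are hypothetical.
import pandas as pd

FIELD_MAP_NL = {                 # local (Dutch) column names -> global model
    "medewerker_id": "employee_id",
    "bruto_loon": "gross_pay",
    "periode": "pay_period",
}
GLOBAL_FIELDS = ["employee_id", "gross_pay", "pay_period"]

def to_global_model(local_df: pd.DataFrame, field_map: dict) -> pd.DataFrame:
    """Rename local columns and report any global fields the source cannot fill."""
    df = local_df.rename(columns=field_map)
    missing = [f for f in GLOBAL_FIELDS if f not in df.columns]
    if missing:
        raise ValueError(f"Local extract does not cover global fields: {missing}")
    return df[GLOBAL_FIELDS]

local = pd.DataFrame({"medewerker_id": [1, 2],
                      "bruto_loon": [3200, 4100],
                      "periode": ["2016-03", "2016-03"]})
print(to_global_model(local, FIELD_MAP_NL))
```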

The implementation of this platform has touched almost every aspect of Randstad’s internal audit department. To manage the scope of such an undertaking, the implementation happens on a process-by-process basis. It started with a pilot on payroll and has now expanded into other core processes for Randstad: pricing, travel & entertainment, and bank transactions.

C-2016-4-Maes-05

Figure 4. Efficiency of a data-driven internal audit. (by Randstad and KPMG)

Through this implementation the internal audit department has not only increased its audit quality and efficiency, but has also effected process improvements at local firms simply by applying its data-driven methodology.

Cognitive analytics

Unprecedented advances in data capacity, data diversity, processing power and cognitive analytics will continue to transform the business landscape with impact across the entire organization. An internal audit function should not just magnify what the company already knows, but must present new findings, offer new perspectives, and provide new ways of gleaning such insights. Cognitive analytics provide the means to do this.

Cognitive analytics is synonymous with artificial intelligence. At its core it is the use of machine learning and natural language processing to mimic the extraordinary capacity of the human brain to draw inferences from existing data and patterns and to create self-learning feedback loops. Using the enormous (distributed) computing power available to us today, we can raise this ability to unprecedented levels by continuously monitoring large volumes of unstructured data.

Internal auditing is particularly suited to cognitive technology because of the increasing challenge to tackle immense volumes of structured and unstructured data related to a company’s information: internal and external; financial and non-financial.

C-2016-4-Maes-06

Figure 5. Immense volumes of structured and unstructured data. (by KPMG)

Watson

Watson is a powerful cognitive analysis system developed by IBM. IBM and KPMG have partnered to employ Watson to assist firms in using cognitive analysis effectively. KPMG developed a prototype using Watson that enables an internal audit department to re-evaluate commercial mortgage loan credit files, including their unstructured data. The prototype processes the entire credit file for each loan along with relevant external information.

By teaching Watson the loan grading process used by KPMG and training it on a sample set, Watson learned to identify the key elements that impact the loan grade. After this ‘training’ phase, Watson was able to evaluate loans and provide a loan grade with a certain confidence level. Where relevant, Watson also provided supporting information extracted from the credit file, without exposing confidential client information. Through a feedback loop, the prototype continues to learn which elements impact the loan grade, which increases its confidence levels.

With this prototype thousands of loan files can be re-evaluated in an instant, significantly reducing manual labor and decreasing human inconsistencies and error.
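
The Watson prototype itself is proprietary, but the general pattern it follows, i.e. train a model on graded examples and then return a grade together with a confidence score, can be sketched with scikit-learn. The features, data and grades below are invented for illustration; this is not the KPMG/IBM implementation.

```python
# Minimal sketch of 'grade with confidence': a classifier trained on a sample
# of reviewer-graded loans, returning a grade plus its probability for a new
# file. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training set: [loan_to_value, debt_service_coverage, occupancy_rate]
X_train = np.array([
    [0.55, 1.8, 0.95],
    [0.70, 1.4, 0.90],
    [0.85, 1.1, 0.75],
    [0.95, 0.9, 0.60],
])
y_train = ["pass", "pass", "watch", "substandard"]  # grades assigned by reviewers

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

new_loan = np.array([[0.80, 1.2, 0.82]])
probabilities = model.predict_proba(new_loan)[0]
best = probabilities.argmax()
print(f"grade: {model.classes_[best]}, confidence: {probabilities[best]:.0%}")
```

The cognitive element lies in the feedback loop: every reviewer correction becomes a new training example, so the confidence of subsequent gradings improves over time.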

C-2016-4-Maes-07

Figure 6. Artificial Intelligence.

The analysis of external sources in particular has always seemed a daunting task. Fortunately, natural language processing algorithms have evolved to a point where they are very effective at analyzing publicly available media, including news outlets and social media. This has empowered organizations to identify relevant signals of change in the external environment. One such example is the Cyber Trends Index that was discussed in the previous issue of Compact ([Harr16]). It has been developed by KPMG and Owlin and has been made available to the public.

Using such solutions, organizations can determine whether a signal is becoming a trend and whether it should be considered a risk or an opportunity. Once these risks and opportunities have been identified, an internal audit department can empower the business by pro-actively identifying new trends, determining their implications and pre-empting exposure to new risks.

Conclusion

More and more organizations have shown that, by embedding data analytics and making use of advanced techniques such as cognitive analytics, internal audit can transform into a technology-enabled department that empowers the entire organization to pro-actively and effectively identify, manage and even capitalize on emerging risks and opportunities. And with support from technological innovators and your trusted partners, your internal audit has the means to accelerate the digital revolution of your entire firm.

References

[Harr16] A. Harrak MSc et al, What’s Trending in Cyber Security?, Compact 2016 (3), https://www.compact.nl/articles/whats-trending-in-cyber-security/.

[KPMG17] KPMG, Internal Audit: Top 10 Considerations for 2017, KPMG, 2017, https://assets.kpmg.com/content/dam/kpmg/nl/pdf/2017/advisory/KPMG-IA-Top-10-considerations-2017.pdf.

[Loon16-1] B. van Loon RA, H. Chuah RA RO, Seeking value through Internal Audit, KPMG International, 2016.

[Loon16-2] B. van Loon RA et al, Internal Audit: Top 10 key risks in 2016, Institute of Internal Auditors, 2016, https://www.iia.nl/-kpmg-internal-audit-top-10-key-risks-in-2016/.

Fact-based Project Quality Assurance is not something you can do alone

Recent research indicates that still only a small proportion of projects is completed successfully within time and budget. Organizations that use Project Quality Assurance (QA) for their (ERP) projects have a higher chance of completing the project within time and budget and at the agreed quality than projects where no QA is performed. Many Project Quality Assurance engagements focus primarily on project controls and the presence of deliverables, and less on the quality of those deliverables, let alone establishing the quality of the deliverables or systems fact-based. As a result, project controls may test as effective while the desired result has not been achieved. This article argues for the use of specialists to shift the focus within Project Quality Assurance from control-based testing to the factual, data-driven assessment of a project. This puts the auditor in a position to make a sound risk assessment earlier and better. An experienced (IT) auditor will, however, conclude that he too needs specialists to really make a difference for his stakeholders.

Introduction

Recent research shows that only 20 percent of projects are completed successfully within time and budget ([Stan13], [Stan14]). The literature offers various causes and best practices. KPMG ([Donk12]), for example, identifies these three main causes:

  1. The complexity and risks of projects are regularly underestimated, while benefits, timelines and the organization’s own capabilities are overestimated.
  2. Project management is still often taken too lightly.
  3. Organizations often focus their project management more on accountability than on steering.

Project Quality Assurance is often seen as an effective and efficient means of increasing a project’s chance of success. Organizations with a high project success rate more frequently report the periodic use of QA. Almost half of all organizations make use of it. For organizations that do not yet use QA structurally, there is an opportunity here: by deploying QA not only afterwards but also before and during projects, the contribution of projects to realizing the strategy can be increased. Yet even though almost half of all organizations use QA, the success rate appears to lag behind.

A possible cause is that in most organizations deliverables are often assessed as part of a process audit (‘is the document present, yes/no?’) and less on their content. In addition, quality assurance often takes place within the Project Management Office (PMO), which is rarely independent. We also see that PMOs are becoming increasingly professional. As a result, certain project controls may prove to be effective while their value is low. This is illustrated by two examples from practice:

  1. A risk log is present, but when you speak to the person responsible for risk within the PMO and ask about the most important risks, he will answer that he is only responsible for maintaining the risk log, not for interpreting it.
  2. The configuration of the general ledger appears to be correct for the few tests that were planned, while the remaining general ledger accounts are not all configured as designed in the documentation.

In both situations the project controls appear to work, but their effectiveness is questionable: operation successful, patient deceased.

The building consultant

Project Quality Assurance can be compared to engaging a building consultant when building a house. We all know the story: a quotation has been agreed with a contractor to renovate a house, but the end result is not as expected; the contractor, however, is convinced that what he has delivered meets the agreed specifications. There is a conflict, but who is right? Such a conflict can have several underlying causes:

  1. Is the quotation clear enough; does it contain the right general terms and conditions, guarantees, sufficiently detailed specifications, penalty clauses, et cetera?
  2. Are the materials used in accordance with the specifications? Was the construction focused on the short term, or did the contractor keep in mind that the building must last at least thirty years?
  3. Should the client not have checked more thoroughly before delivery, or perhaps during construction? By dropping by every week he could have pointed out issues to the contractor in time.
  4. Client and contractor have conflicting interests: the client wants as much value for money as possible, the contractor wants the highest possible margin.

The client is often not a subject-matter expert and is therefore usually unable to carry out this kind of in-depth investigation. He assesses the work on the basis of his own context, background and experience. Moreover, a person renovates a house once, at most twice, in a lifetime, so how can you know what the pitfalls are and proactively steer on them? In many cases it is therefore wise to hire a building consultant to improve the quality of the construction. For ERP projects, the equivalent of the building consultant is a Project Quality Assurance role within a program, often performed by an (IT) auditor trained and certified for this purpose.

The added value of a building consultant depends strongly on the contract that has been agreed:

  • Such a consultant can occasionally walk around the building site and point out here and there that things do not conform to the standards. In that case the consultant establishes, for example, that the window frames have been neatly installed and painted, but does not say whether they are in the right place or whether the choice of materials conforms to the specifications. The added value is then relatively low.
  • He adds more value if he looks not only at what has been built, but also at whether what has been built is what was agreed (the right quality of wood), whether nothing has been forgotten (for example internet connections), whether something other than agreed has been delivered (a thatched roof instead of roof tiles) and whether it is future-proof (in places the electrical installation does not meet the applicable certification). These kinds of issues can only be identified if the building consultant has sufficient time to go through the specifications and building plans thoroughly and can call in an external specialist if he lacks the knowledge himself or is in doubt.

The timing of hiring this building consultant is also essential. The earlier in the process the consultant is involved, the earlier adjustments can be made and the greater the chance that large future costs can be avoided.

A building consultant hires specialists to assess the quality of the construction (did you know, for example, that there is a NEN 1010 certification for the quality of electrical installations?). He will always hire an electrician to have the construction tested against this NEN certification.

This example is an analogy for ICT projects. As indicated earlier, the chance of success increases when a building expert is hired. That chance becomes even greater if the building consultant does not merely look on superficially, but factually establishes the quality of the construction based on the agreements the client has made with the contractor. The earlier this expert is hired, the lower the repair costs. Given the detailed knowledge required, the building consultant cannot do this alone and will have to bring in specialists.

Fact-based Project Quality Assurance

It goes without saying that a project carries a higher risk than a regular line activity. A project is bounded by time and budget and depends in part on good multidisciplinary cooperation. In addition, despite the presence of a project plan, the road actually travelled often differs from what was described beforehand. It will come as no surprise to you as a reader that larger (ERP-related) projects in particular are frequently not delivered within, or even far outside, planning and budget. The result also often deviates from the objectives or expectations set beforehand. Research confirms this ([Koza07]). Large ERP implementations are far more likely to fail than people expect. Over the last decades, 25 percent of projects turn out to succeed and 25 percent to fail; for the remaining 50 percent the intended result is only partly achieved. This is confirmed by the Standish Group, which has researched this topic since 1994 ([Stan94], [Stan13], [Stan14]). Projects smaller than one million euros have a success rate of 76 percent, while above one million this percentage drops to 10 percent ([Stan13]). In 2012 KPMG conducted a large study of projects and programs ([Donk12]). One of the five conclusions was that organizations reporting a high project success rate indicated that they deploy Project Quality Assurance periodically ([Donk12]).

Much has been written about Project Quality Assurance in recent years ([Vale12], [Meul01]). Generically, Project Quality Assurance is said to have the following objectives:

  1. to monitor that no goal reduction takes place;
  2. to monitor that the end result is of sufficient quality;
  3. to monitor that the right level of internal control is achieved;
  4. to advise those responsible for the project independently.

The following sections describe how these objectives can be tested factually.

1 Monitoring that no goal reduction takes place

If the project timelines are at risk, adjusting the scope is one of the options for going live within the set timelines. Normally this is done through a change process, and above a certain size the steering committee must be asked for approval, so that stakeholders are informed and approve the scope change.

In practice this often does not happen, and the project manager or the supplier (system integrator) tries to cut corners on the quality and/or scope of the implementation. These changes are often not visible in the usual controls, because the scope of the test scripts is frequently adjusted to the reduced scope. By performing a proper end-to-end check, it can be established whether the agreed scope is actually delivered at the required quality. A convenient starting point is to number the process steps; any other anchor can be used as well. The trick is then to establish, for each of these numbered steps, whether the functionality has been described (for example, is there a process description?), whether functional and technical documentation exists, and whether the functionality has actually been configured, has been tested (positively and negatively, via unit, integration and acceptance tests) and has been covered in end-user training. A useful aid for this is the template in Figure 1; determine per cell whether and where this check is present.

C-2016-1-Keuning2-01-klein

Figure 1. Template for testing completeness. [Click on the image for a larger image]
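
Such a completeness matrix can also be assembled programmatically from the project’s own repositories, so the check is factual rather than interview-based. The sketch below (the process steps, check names and evidence sets are hypothetical) marks per numbered process step whether a description, design, configuration, test and training record was found.

```python
# Minimal sketch: a completeness matrix per numbered process step, assembled
# from (hypothetical) lists of artefacts found in the project repositories.
import pandas as pd

process_steps = ["P01 create order", "P02 approve order", "P03 invoice", "P04 payment"]

evidence = {
    "process description": {"P01 create order", "P02 approve order", "P03 invoice"},
    "functional design":   {"P01 create order", "P03 invoice", "P04 payment"},
    "configured":          {"P01 create order", "P02 approve order", "P04 payment"},
    "tested":              {"P01 create order", "P03 invoice"},
    "training material":   {"P01 create order"},
}

matrix = pd.DataFrame(
    {check: [step in found for step in process_steps]
     for check, found in evidence.items()},
    index=process_steps,
)
print(matrix)              # True/False per step and per check
print(matrix.all(axis=1))  # steps that are fully covered
```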

Once the auditor has established the completeness of the delivery, the next step is to establish whether its quality is of a sufficient level.

2 Monitoring that the end result is of sufficient quality

This article gives three (non-exhaustive) examples of how to establish factually that the solution is of sufficient quality.

A Establishing the quality of the test process

In practice we often see that testing the design, existence and operation of the project controls around testing is not representative of the test activities actually performed. How often does it happen that it is very ambitiously decided that integration tests must be performed with end-user roles, that the flows essential for the most important countries must be tested within the system (including interfaces and master data), and that these test scripts must be signed off by an end user or key user? The (IT) auditor subsequently assesses whether the test scripts cover the scope, whether end-user profiles are mentioned in the test scripts, whether the test scripts are of sufficient quality and whether they have been signed off (and issues logged). But what does this say about the actual test activities within the system? Granted, there is a signature on the test script, so in principle you can say that the owner has agreed. But is this enough?

The ultimate goal of testing is to establish that the system has been configured correctly and completely. All of this data is present in the test system. By downloading the relevant tables and then analyzing them, you can give the project manager and steering committee factual, data-driven insight into the effectiveness of the testing (a sketch of such an analysis follows the list below):

  • How many tests were performed in the unit, integration or acceptance test phases?
  • Were all purchasing, sales and other relevant flows tested, for all business units and for all countries (and for which were they not)?
  • Were all exceptions within these flows tested?
  • Who performed these tests? The key user, the end user, or after all the programmer, because the project was pressed for time?
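
A minimal sketch of such a data-driven test analysis is shown below; the extract file and column names (flow, company code, country, tester role, document id) are hypothetical stand-ins for whatever the test system actually records.

```python
# Minimal sketch: analyzing transactions created in the test client to see
# what was really tested, by whom, and for which flows. File and column names
# are hypothetical.
import pandas as pd

test_docs = pd.read_csv("test_system_documents.csv")  # hypothetical table extract

# Coverage: number of test transactions per flow, company code and country.
coverage = (test_docs
            .groupby(["flow", "company_code", "country"])
            .size()
            .rename("test_transactions"))
print(coverage)

# Who actually executed the tests: end users / key users vs. project or IT staff.
by_role = test_docs.groupby("tester_role")["document_id"].nunique()
print(by_role)

# Flows in scope for which no test transaction exists at all.
in_scope = {"purchase-to-pay", "order-to-cash", "record-to-report"}
untested = in_scope - set(test_docs["flow"].unique())
print(f"flows without any test evidence: {untested or 'none'}")
```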

Our experience shows that we can often draw the following conclusions:

  • The testing was performed at a lower quality than the signed-off test script suggests: tests were executed for only a (limited) subset of the scope.
  • Although the test scripts indicate that testing must be done with end-user roles, it often turns out that the tests were actually performed by IT or project staff.

B Establishing the quality of the software code

As early as 2013, [Amor13] pointed out that, despite much progress with standardized project approaches such as Prince2 and MSP, in practice the software quality aspect often remains underexposed when steering focuses too strongly on time and budget. [Amor13] also notes that, where quality is looked at at all, this is mainly done from a process perspective; the actual substance of the quality tests is not elaborated further. According to [Amor13], this creates two risks:

  1. When testing only takes place at the end of the project, errors are often found too late in the process, so their impact on the end date is large.
  2. The second risk is more subtle and is called ‘technical debt’. This means that the software appears to work well, but has not been built in an optimal way under the hood. This often leads to problems at inconvenient moments.

Dat het schrijven van kwalitatief goede software geen sinecure is en dat het probleem van kwalitatief niet toereikende software nog steeds actueel is, bewijzen programma’s in de media die aantonen dat het niet evident is dat de opgeleverde software aan de eerder gestelde kwaliteitseisen voldoet ([Zemb15]).

Een softwarespecialist kan een aantal feitelijke waarnemingen doen. Ten eerste is er software beschikbaar die de kwaliteit en onderhoudbaarheid van de code feitelijk kan vaststellen. Daarnaast kan deze specialist vaststellen of de opgeleverde documentatie volledig en van voldoende niveau is. Tot slot kan deze specialist vaststellen of de softwarecode voldoet aan internationale standaarden en modulair is opgebouwd, waardoor de software niet alleen schaalbaar is maar ook snel in onderhoud genomen kan worden.

C Readiness of the project, IT and user organization

A fast and effective way to determine the readiness of the project, IT and user organization is a survey. By sending a survey to the various stakeholders within a program, the auditor obtains, relatively quickly, a complete and factual picture of various sub-aspects of the program. This survey can also be used as a scoping instrument or as a risk analysis to review a project in a focused manner. In addition, the results of the survey can be used, for example, as input for the interviews to be held, as an assessment instrument for the (IT) auditor, or to assess culture and behavior within the project (see also [Bekk16] in this Compact).

C-2016-1-Keuning2-02-klein

Figure 2. Example result of a survey on a selection of sub-areas. [Click on the image for a larger image]

Figure 2 shows part of an existing survey. It is clearly visible that within this project the sub-areas 'testing' and 'internal control' require increased attention.

KPMG uses an automated questionnaire that can be applied within an audit. The client can also make use of this questionnaire and survey. The database consists of a few hundred questions in the areas of project governance, project management, change management, performance management, people, technology and processes, and can be tailored to the review that needs to be performed. The questionnaire is suitable for both traditional waterfall and Scrum projects. A specialist module can be used to assess culture and behavior within a project.

When using a survey, pay close attention to the communication and the size of the questionnaire. It is advisable to have the project manager act as sponsor of the survey and, if necessary, to treat the results anonymously in order to obtain the most honest responses. The questionnaire should also not be too long and must be understandable for all stakeholders.

3 Monitoring that the right level of internal control is achieved

We often find that a system integrator does not treat the configuration of controls and logical access security as a top priority within an implementation. It is therefore preferable for the (IT) auditor to establish factually whether the controls have been designed effectively and whether they have actually been configured accordingly.

There are several aspects that can be tested:

  • Application controls. Application controls are controls that are configured within an (ERP) system. The question an IT auditor asks is: have the system-based controls been configured effectively? To answer this question, the auditor can have an SAP specialist, together with an internal control (or GRC) specialist, perform a factual review to establish the correct configuration of the key controls. Examples of questions that are then examined are: Has the general ledger been configured as designed? Are the right accounts blocked for manual postings? Have the right three-way-match tolerances been configured? Have the mandatory fields been configured correctly? Has the authorization and procuration register been built into the ERP system correctly and completely?
  • Logical access security and undesirable combinations of duties. Various tools are available in the market for analyzing the design of logical access security and the segregation of duties within an (ERP) system. This provides factual insight into the quality of the configured segregation of duties (as in Figure 3); a minimal sketch of such a check follows below.
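The sketch below illustrates the idea of a segregation-of-duties check: flag users who can execute two conflicting transactions. The conflict pairs and the export format (one row per user/transaction combination) are illustrative assumptions, not an actual SAP or GRC ruleset.

```python
# Minimal sketch of a segregation-of-duties check on a user/transaction export.
# Conflict pairs and column names (user, transaction) are illustrative assumptions.
import pandas as pd

CONFLICTS = [
    ("create_vendor", "post_vendor_invoice"),
    ("create_purchase_order", "approve_purchase_order"),
    ("maintain_bank_details", "execute_payment_run"),
]

assignments = pd.read_csv("user_transactions_export.csv")  # columns: user, transaction
per_user = assignments.groupby("user")["transaction"].apply(set)

for user, txns in per_user.items():
    for a, b in CONFLICTS:
        if a in txns and b in txns:
            print(f"SoD conflict for {user}: {a} + {b}")
```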

C-2016-1-Keuning2-03

Figure 3. Overview of configured undesirable combinations of duties.

4 Independently advising those responsible for the project

The result of a Project Quality Assurance review is the ability to inform the stakeholders (such as project management, the project sponsor, the steering committee and the supervisory board) independently about the quality of the program on the aforementioned aspects.

In addition to a substantive assessment of the quality of the project deliverables, it is also possible to perform a factual assessment of the project controls themselves. Let us take the issue management process within a project as an example. The goal of issue management is to register, resolve and retest project issues in a controlled manner. A number of factual analyses can help here. Of course, the project manager also reports on (a subset of) these KPIs. However, an independent confirmation, or a deeper analysis of the exceptions and structured insight into them, is regarded as added value.

The following analyses could be performed on the issue registration system, the issue register or the ticket system (a sketch of such an analysis follows the list below):

  • Test progress per domain. Try to gain insight into the current progress of the test process. Which tests are completed, rejected or still open?
  • How many issues (defects) have been logged on the program? What is the priority of these issues?
  • What is the average resolution time of issues and, looking at the trend, how many issues have been open for more than a given number of days? What is the predictive value of this? What risks does this pose for the go-live?
  • Workarounds are often temporary solutions devised because there is insufficient time to achieve the desired result before go-live. These workarounds are frequently used to resolve critical issues temporarily. A project team often maintains a list of workarounds. Just as often, such a list does not exist. In the latter case, you try to establish the actual number of workarounds in interviews with stakeholders. Also analyze, for example, the issues that have been changed from priority 1 to priority 2. Such a change often involves a workaround, and the question is whether the stakeholders are aware of it.
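The sketch below illustrates a few of these analyses on an exported issue register. The column names and the 14-day ageing threshold are assumptions for the sake of the example.

```python
# Minimal sketch of a factual ticket-system analysis: open/closed counts,
# average resolution time and ageing of open issues. Column names
# (issue_id, priority, status, opened, closed) are illustrative assumptions.
import pandas as pd

issues = pd.read_csv("issue_register.csv", parse_dates=["opened", "closed"])
today = pd.Timestamp.today()

# Number of issues logged, by priority and status
print(issues.groupby(["priority", "status"])["issue_id"].count())

# Average resolution time of closed issues, in days
closed = issues.dropna(subset=["closed"])
print("Average resolution (days):",
      (closed["closed"] - closed["opened"]).dt.days.mean())

# Open issues older than 14 days: a possible risk for the go-live date
open_issues = issues[issues["closed"].isna()].copy()
open_issues["age_days"] = (today - open_issues["opened"]).dt.days
print(open_issues[open_issues["age_days"] > 14].sort_values("age_days", ascending=False))
```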

C-2016-1-Keuning2-04-klein

Figure 4. Example of a factual analysis of a ticket system. [Click on the image for a larger image]

Conclusion

As indicated in the introduction, performing a Project Quality Assurance review has a positive impact on the success of a project or program. Nevertheless, projects larger than 1 million euros in particular still frequently fail, despite the PMO that has been set up for them. This article has argued that the quality of a Project Quality Assurance review is increased by looking at the quality of project deliverables factually, so that independent insight into the project can be provided alongside the PMO. Such a review can take place once, periodically or continuously (as part of the program). The examples given make clear that the IT auditor cannot do this alone, all the more so because, in addition to the four types of aspects mentioned, factual reviews can be performed in many more areas of a program to establish how the project is really doing. That way the operation succeeds and, hopefully, the patient survives. The conclusion is quickly drawn that an IT auditor, acting as a building consultant, cannot do this on his own.

References

[Amor13] J.M. Amoraal, G. Lanzani, P. Kuiters and J.M.A. Koedijk, Grip op de kwaliteit van software, Compact 2013/2.

[Bekk16] E. van Bekkum and M.G. Keuning, Gedrag: kritische factor voor succesvolle IT-projecten, Compact 2016/1.

[Donk12] H. Donkers, Project- en programmamanagement survey, KPMG, 2012.

[Koza07] M. Kozak-Holland, What Determines a Project Success or Failure, Lessons From History, 2007.

[Meul01] A.M. Meuldijk and M.A.P. op het Veld, Betere beheersing van ERP-projecten door Quality Assurance, Compact 2001/6.

[Stan94] Standish Group, The Chaos Report, 1994.

[Stan13] Standish Group, Chaos Manifesto 2013: Think Big, Act Small, 2013.

[Stan14] Standish Group, The Chaos Report, 2014.

[Vale12] R. Vale, M. Temme, P.P. Brouwers and S. van den Biggelaar, Consolideren of Excelleren, 2012.

[Zemb15] Zembla, De spaghetticode, 2015, http://zembla.vara.nl/seizoenen/2015/afleveringen/02-09-2015

Being in Control with Agile

Agile software development processes produce interesting numbers on productivity and quality of software development projects that indicate an increasing value to organizations. Meanwhile, management teams struggle to control this type of approach, as the process deviates significantly from traditional software development methods and as such does not enable the same type of control. This article explores the risks companies employing an agile approach are facing and how these can be addressed, including some suggestions on the auditability of this type of process.

Introduction

“While IT projects are becoming larger, on average, large IT projects tend to run 45% over budget, 7% over time, and deliver 56% less value than predicted, with software projects experiencing the highest risk of cost and schedule overruns.” ([Bloc12])

Speed of technology, increased pressure on compliance with expanding legal and regulatory requirements, as well as fast-changing market conditions are driving change in organizational processes. In an effort to minimize the risk in software development and to be able to cope with changing organizational needs, agile development is chosen over traditional plan-based methods by more and more organizations. Launched in 2001 with the Agile Manifesto, it is now one of the major implementation methods which organizations turn to in order to speed up development and illustrate the value of their projects ([Gart15]). This article will address the major differences between plan-based and agile development methods, the challenges these differences pose in terms of control and auditability, as well as some suggestions for addressing these challenges.

Plan-based vs. Agile Software Development Methods

Plan-based software development has been the predominant software development method for a long time. It moves through four broadly defined phases (see Figure 1), where none of the phases can start before the previous phase has been completed. This can lead to a long trajectory for any project using this approach, implying a number of risks as well. Two of these risks were articulated in [Cape94]: the insufficient or unclear definition of requirements for the end product, and changes to the scope or the requirements during the project. Requirements are set long before the actual build and testing of the product takes place, let alone before end users start making use of the product in the production environment.

C-2016-1-Hattink-01

Figure 1. Software Development Process.

The increased project duration is no longer in line with the business demands of software development. The market is intense, customer satisfaction is high on the agenda and businesses are subject to a continuously changing environment. It is not sufficient to make use of a structured but rigid process in software development anymore, as the circumstances require a much more dynamic process ([Yu14]).

Agile software development can deliver on the need for a dynamic process. Through multiple iterations a small piece of the software is developed, making it possible to regularly re-evaluate the requirements that the customer sets for the software. When requirements change, this can be included in the development trajectory and also in the software.

Many forms of agile software development exist, of which 'scrum' may be one of the most well-known. Named after the scrum in a rugby match, the scrum methodology of software development is “an iterative, incremental framework for projects and product or application development” ([Suth10]). A recent study by Rally ([Rall13]) included almost 10,000 teams and showed that productivity doubled when people were dedicated to a single team, and that teams using full scrum delivered 3.5 times better end-product quality (measured in the number of released defects).

Table 1 illustrates the main differences between the two methods.

C-2016-1-Hattink-T01

Table 1. Traditional vs. agile development ([Neru05], [Dyba08]).

Many organizations and circumstances require that a mix of these methods is employed. Within larger organizations a full-blown agile approach, relying on small teams, high levels of autonomy and collaboration, may not be the right fit. Some aspects of plan-based methods may be employed in order to ensure a fit with the organization.

Risk Factors in Agile Development

While agile methods have many benefits, they also carry certain risks, many inherent to the agile process (see Table 2).

C-2016-1-Hattink-T02

Table 2. Features of agile and the challenges they pose.

Team Autonomy

To enable speed and flexibility, the teams in an agile environment work with a high degree of team autonomy. Without highly skilled resources, teams may spiral out of control and lose sight of business objectives. There are four capabilities teams need to possess in order to make an agile project a success: technical (technical expertise), behavioral (interpersonal relationship skills), business (understanding organization context), and infrastructure capability (providing firm-wide IT infrastructure services) ([Goh13]). Agile requires resources to be fully committed to the team. Specifically, large projects often encounter issues trying to include and maintain the best resources in agile teams, as they often cannot be spared from their day-to-day activities.

Speed

Decisions need to be made quickly and by a small group of people, who may or may not have the right information to make a decision at that point in time. The quality of the process may be undermined by a lack of time for proper documentation and the quality of the product may be diminished by a lack of required testing efforts.

Control is hard to exercise when an entire development cycle can take only three weeks and then simply starts all over again. Deviations from plans are not uncommon, flexibility is key and collaboration crucial. Under intense time pressure the focus on communication becomes minimal, and collaboration may actually fail.

Flexibility

With agile software development, the requirements for the end product are less clearly defined at the beginning of the project and are likely to change. The benefits are obvious, as organizations are more flexible and better able to respond to rapidly changing market conditions and customer demands. However, it does mean that the end product will almost by definition deviate from what was proposed at the beginning of the project. Intensive customer involvement is required to steer the project in the right direction.

Short-term Focus

As teams work within very short timeframes and have the autonomy to make their own decisions, they tend to focus on tactical decision-making and run the risk of neglecting or even steering away from long-term (strategic) objectives ([Drur12]). As customers are crucial in agile development and are encouraged or even required to be part of the agile team, the business value on an operational level should be guaranteed. The business value to the organization at large, on a strategic level, may not be evident when the product is delivered.

Minimal Documentation

While it is a myth that agile makes all documentation requirements obsolete, it is fair to state that documentation is minimized as much as possible to ensure that the administrative burden is low and efficiency is high. The aim should be to minimize documentation to a level where security and quality can still be guaranteed and control can be exercised. Also, continuity of agile processes needs to be ensured, even if one or more resources need to be replaced. Knowledge management should be optimized and learning enabled, so that knowledge is no longer tacit, residing within the teams.

Without the proper documentation of source code and system functionality it can be a challenge to maintain it in a proper manner, even if the product was delivered meeting high quality and security standards.

Small Teams

Small teams with high levels of autonomy are prone to overestimate their capabilities and underestimate complexities. Tushman and O’Reilly ([Tush96]) found that a high level of team autonomy is also very likely to lead to a higher risk appetite and more experimenting.

From a more practical point of view, it is very hard to have independent testers when there is a small team and a developer is forced to test his or her own work.

Addressing Risks and Exercising Control

Addressing these risks and exercising control also brings the challenges in the auditability of these projects to light. As agile is still relatively new and relies on the autonomy of a small group of team members, controlling the process can be a challenge for management. There are certain aspects that can be included in an internal review of the effectiveness of the agile process.

There are different forms of control that can be exercised: behavioral control (e.g. management monitoring), output control (e.g. defined output requirements), clan control (e.g. social norms within teams), and self-control (e.g. individual empowerment) ([Kirs96]). While self-control would not be a logical choice in an environment where team collaboration and autonomy are crucial, a combination of the other types of control seems appropriate.

Output Control

Agile development aims to implement user requirements, and makes that the lead indicator in the process. This means that planning is not on a task-level, but on a feature-level ([Rubi11]).

To be able to exercise control over the agile process, there should be less focus on cost and schedule estimates, but more on the features that are created, the requirements that are defined during the process, and the accomplishment of those requirements.

C-2016-1-Hattink-02

Figure 2. Estimates & Constraints ([Slig06]).

While documentation is minimal, there are still a number of deliverables (or artefacts) in the agile process that can be reviewed. The product vision statement paints the first complete picture of the product to-be. The overall design can also be included in the vision statement: without detailing the technical features extensively, the components of the solution can be described to clarify the integration and the possible challenges in integrating all releases of the solution. The product roadmap can specify the plan a bit further by defining themes that the team will be working on. Instead of use-cases that define a specific set of interactions leading to a desired result, user stories are defined. These are written from the perspective and in the language of the customer, focused on the desired result without exactly defining how to achieve it. The release plan defines the schedule for iterations, while specific user stories are detailed per iteration.

The product backlog keeps the overview of the user stories that still need to be completed in an iteration to come. Combined with the (overall) burn-down chart, this provides a complete overview of progress. To ensure quality criteria are met and clear ‘acceptance criteria’ are defined and adhered to during the project, the definition of ready and the definition of done can be specified upfront, detailing respectively a set of criteria for adding a user story to an iteration and a set of criteria for accepting and deploying a piece of software. The definition of done should include:

  1. Successful testing and a demo of the functionality, ensuring the quality of the product as well as its alignment with customer requirements;
  2. Business acceptance, ensuring the user story is addressed and the delivered software fits the desired scope of the project;
  3. Documentation, while limited, should be present to ensure the technical design is documented and to enable hand over to a support team that will continue to maintain the software after implementation;
  4. Authorizations should be defined and incorporated in the design of the software. Access in the system should be controlled. During development, there should be segregation of duties between developer and tester to ensure independence and avoid ‘tunnel vision’.

There are also a number of deliverables specific to a phase in the iteration. In the iteration planning phase the work for the upcoming iteration is laid out. This can be done through the creation of a sprint backlog, where the user stories are divided into story points and assigned to a specific sprint in which they should be completed.

During the iteration execution the sprint backlog shows which story points still need to be completed in that sprint. Progress can also be shown through a sprint burn-down chart, which also shows what has already been completed ([Rubi11]). To ensure everyone knows what they are doing, the sprint backlog can be further broken down into specific tasks through a task board, which also aids in managing the workload. In the daily stand-up the workload can be discussed and rebalanced where necessary, so that problems are recognized in time to resolve them. Once a piece of software is tested it can be shown to the customer through a demo, which gives the customer more insight into the progress being made and ensures that the expected quality is delivered.

In the iteration review, the team and relevant stakeholders review whether the commitment made at the start of the iteration has been fulfilled at the end. The pieces of software, also called increments, are delivered and need to be measured against the definition of done. To ensure maintainability of the system, the source code and support documentation can be included in the iteration review; they can be included in the delivery of the increments and as such in the definition of done. The same goes for security requirements.

In the iteration retrospective the past iteration is reviewed to see whether there are any lessons learned or improvements that can be made. A possible measurement of the work done is the velocity (the number of user stories completed in the iteration). Other (related) measurements are the number of stories completed (in combination with what is left in the sprint backlog) and the number of tests passed. If the deliverables of each iteration are also released for use in a production environment by the customer, the number of customer-reported defects may just be the most important measure, as a direct indicator of quality as perceived by the customer.
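To make these measures concrete, the sketch below derives a sprint's velocity and a simple burn-down from a sprint backlog. The backlog structure, the point estimates and the completion days are illustrative assumptions, not the output of any particular agile tool.

```python
# Minimal sketch: derive velocity and a simple burn-down from a sprint backlog.
# The backlog content below is made up for illustration purposes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Story:
    name: str
    points: int
    completed_on_day: Optional[int] = None  # None = not done in this sprint

backlog = [
    Story("login screen", 3, completed_on_day=4),
    Story("password reset", 2, completed_on_day=7),
    Story("audit logging", 5, completed_on_day=None),
]

sprint_days = 10
total_points = sum(s.points for s in backlog)

# Velocity: points of stories that met the definition of done within the sprint
velocity = sum(s.points for s in backlog if s.completed_on_day is not None)
print(f"Velocity this sprint: {velocity} of {total_points} points")

# Burn-down: remaining points at the end of each sprint day
for day in range(1, sprint_days + 1):
    done = sum(s.points for s in backlog
               if s.completed_on_day is not None and s.completed_on_day <= day)
    print(f"Day {day:2d}: {total_points - done} points remaining")
```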

Behavioral Control

One of the core aspects of agile methods is its reliance on social processes ([Maru09]), introducing a need for soft controls. To ensure that knowledge is transferred during the iterations, collaboration is optimal and people are working towards the same goals (goal congruence), it is important to have the right people, clearly communicate the goals and create a common purpose. Knowledge and social skills that are required should therefore be defined when selecting the members of an agile team. Story owners are assigned to the stories in the iteration planning phase to ensure that the time invested by the customer is spent efficiently and customers are knowledgeable of the stories assigned to them. During the iteration execution the software is both developed and tested. Therefore separated development and test environments are required as well as separation between tester and developer. This can be done either through authorizations in the application (landscape) or through procedural controls like a 4-eyes principle.

Clan Control

With a lack of specific task-related behaviors and a focus on teamwork, clan control should not be underestimated. Business and IT alignment needs to be ensured, as well as long-term vs. short-term objectives and individual and team goals. Goal congruence should be stimulated as much as possible through e.g. team incentives and rewards, their relation to organizational performance, or peer-reviews. During the iteration review, all increments produced in the iteration can be presented to the stakeholders enabling fast feedback and ensuring that the increments reflect the requirements of the users correctly. Additionally, a risk assessment can be incorporated into this stakeholder review and evaluate the compliancy with security requirements as well. Through the fast feedback, corrections or adjustments can be made before the product is finished, ensuring higher quality and customer satisfaction.

The types of control as well as the auditable aspects are summarized in Table 3.

C-2016-1-Hattink-T03-klein

Table 3. Control points in iterations and during the project. [Click on the image for a larger image]

Product Review

In addition to the above mentioned controls that can be implemented and reviewed during the agile process, there are also controls that enable auditability of the product itself. The basic controls that should be in place can be defined beforehand. These would follow the guidelines of the information security policy and contain directives in access management, incident and problem management and back-up and recovery. These should be in place for the first piece of working software and ready for review. Every subsequent piece of software should be developed following a strict change management process, which incorporates testing as described above and includes a risk assessment which should review the impact on existing controls as well. Security requirements should be incorporated in the definition of done and only software that complies with this definition should be brought into the production environment.

Effective Use of Agile Processes

In every project there is a need for control. Management goals are to maximize value and minimize risk. While agile software development can add tremendous value, it also has a lot of consequences for the environment it operates in. Organizations should therefore ask themselves: is agile the right choice for us?

Deciding whether agile is the right choice

Agile provides an organization with flexibility, deliverables that are available at great speed, success that can be celebrated within a shorter timeframe, higher quality, as well as increased employee morale and user satisfaction. So why not use it whenever you can? While it all sounds like a dream come true, an agile process still carries a significant risk of failure. Agile processes need to be aligned and require certain enablers to increase their likelihood of success.

Enablers

The enablers for agile can all be deduced from the Agile Manifesto as it was developed in 2001. They have been incorporated in a great graphic by Lynne Cazaly, as shown in Figure 3.

C-2016-1-Hattink-03

Figure 3. The Agile Manifesto illustrated ([Caza14]).

When starting to employ an agile approach, it will have a big impact on your existing IT organization and the business at large. An agile approach is centered on the customer, making the customer part of the process and the team. This means a cultural shift where IT needs to open up and invite the customer into their world. The customer needs to have a positive stance on IT and the value it can deliver, in addition to a willingness to invest time and resources in the process. Both developers as well as customers need to be fully dedicated to an agile team.

Apart from dedication, the organization needs to have an understanding of the continuous change it will encounter. A big part of agile is the frequent delivery of working pieces of software. While that enables successes to be celebrated often, it might also invoke resistance to the constant change people experience.

An agile team has great levels of autonomy and works under pressure, both in terms of time as well as the quality of the deliverables. To achieve success under those circumstances you will need a highly knowledgeable team, with the right technical as well as business knowledge, and motivated team members who are extremely collaborative. Constant communication is a must, as well as a geographically centralized team.

Apart from the social/cultural factors, the technical requirements should not be forgotten either. High knowledge levels are required to develop software with simple code, enabling fast creation as well as limited requirements for (complex) technical documentation. It also requires a system environment that enables fast, intensive testing. In addition to rigorous unit testing, integration testing should ensure that a new piece will not obliterate any previous work done and the system will function properly as a whole.

Business and IT alignment

Agile development will not suit all needs and will not be the answer to all failed plan-based projects. Organizations should critically evaluate whether their business needs require an agile approach and whether their organization is aligned with an agile approach.

Organizations acknowledge the benefits of both approaches more and more and recognize that choosing one of the approaches exclusively may not be the optimal solution. As the CEO of a large Canadian retailer indicates, he needs both approaches in his organization to address the need for a stable environment for maintenance and improvement of his legacy systems as well as a flexible, innovative environment to maintain his competitive position in the market. To be able to combine these approaches in one organization, he and many others are choosing a bimodal approach.

Bimodal operations in IT refer to a structure where two IT organizations exist within one overall organization. As [Gart14] defines it, it is “the practice of managing two separate, coherent modes of IT delivery within the same enterprise, one mode focused on stability and the other focused on agility”. Each mode has its own focus and way of working, and requires a different set of enablers. These modes need to be in constant communication with each other, as they will need to rely on each other’s work at times as well as be able to cooperate in the same (IT and business) environment. Table 4 illustrates the differences between the modes.

C-2016-1-Hattink-T04

Table 4. Bimodal IT = Mode 1 + Mode 2 ([Gart14]).

In order to successfully deliver IT solutions, organizations should first examine which mode is most in line with their business needs. It may be that making use of only one of the modes will deliver the highest value to the organization, but in many cases a combination of the two will be most effective. A bimodal approach, especially in larger organizations, is well worth considering.

Conclusion

Agile software development methods are a promising approach to address fast-changing market conditions and customer demands. Organizations should be careful in employing these methods though, as they are not suited to every organization. A combination of agile and plan-based development methods may be more beneficial. An upcoming movement in larger organizations is to employ two modes of IT, where one pillar makes use of plan-based approaches and the other makes use of agile approaches. This addresses both the need for a stable environment (Mode 1) and the need for flexibility and opportunity for innovation (Mode 2).

To stay in control over software development processes, the first requirement is to ensure the approach is in line with the organization’s processes, needs and culture.

When the choice is made to make use of an agile approach, the focus of control shifts from the process to the product, at a feature-level. The nature of an agile approach requires the incorporation of soft controls in any management approach or audit plan, as the process relies heavily on organizational culture, team autonomy, collaboration and knowledge of a limited number of individuals. As these control types shift when moving from plan-based to agile methods, the audit framework will need to be adjusted as well, focusing on different deliverables and distinguishing between overall project deliverables and goals, and iteration specific deliverables and goals. With some adjustments ‘auditing agile’ is however not necessarily a contradictio in terminis.

References

[Bloc12] M. Bloch, S. Blumberg and J. Laartz, Delivering large-scale IT projects on time, on budget and on value, McKinsey Quarterly, 2012.

[Cape94] T. Capers Jones, Assessment and Control of Software Risks. University of California: Yourdon Press, 1994.

[Caza14] L. Cazaly, The Visual Agile Manifesto, 2014, http://www.lynnecazaly.com.au/support/

[Drur12] M. Drury, K. Conboy and K. Power, Obstacles to decision making in agile software development teams, The Journal of Systems and Software, 85(6), 2012, pp. 1239-1254.

[Dyba08] T. Dyba & T. Dingsoyr, Empirical studies of agile software development: a systematic review, Information and Software Technology, 50, 2008, pp. 833-859.

[Gart14] Gartner, Bimodal IT: How to Be Digitally Agile Without Making a Mess, 2014, https://www.gartner.com/doc/2798217/bimodal-it-digitally-agile-making

[Gart15] Gartner, Kick-Start Bimodal IT by Launching Mode 2, 2015, http://www.gartner.com/technology/reprints.do?id=1-2OYIMDT&ct=151006&st=sb

[Goh13] J. Goh, S. Pan and M. Zuo, Developing the Agile IS Development Practices in Large-Scale IT Projects: The Trust-Mediated Organizational Controls and IT Project Team Capabilities Perspective, Journal of the Association for Information Systems, 14(12), 2013, pp. 722-756.

[Kirs96] L. Kirsch, The management of complex tasks in organizations: Controlling the systems development process, Organizational Science, 7(1), 1996, pp. 1-21.

[Maru09] L. Maruping, V. Venkatesh and R. Agarwal, A Control Theory Perspective on Agile Methodology Use and Changing User Requirements, Information Systems Research, 20(3), 2009, pp. 377-399.

[Neru05] S. Nerur, R. Mahapatra and G. Mangalaraj, Challenges of migrating to agile methodologies, Communications of the ACM, May 2005, pp. 72-78.

[Rall13] Rally Software Development, The impact of agile quantified, 2013.

[Rubi11] E. Rubin and H. Rubin, Supporting agile software development through active documentation, Requirements Engineering, 16(2), 2011, pp. 117-132.

[Slig06] M. Sliger, Relating PMBOK Practices to Agile Practices – Part 2 of 4, Agile Connection, 13 April 2006, http://www.agileconnection.com/article/relating-pmbok-practices-agile-practices-part-2-4

[Suth10] J. Sutherland, Scrum Handbook. Boston: Scrum Training Institute Press, 2010.

[Tush96] M. Tushman and C. O’Reilly, Ambidextrous Organizations: Managing Evolutionary and Revolutionary Change, California Management Review, 38(4), 1996, pp. 8-30.

[Yu14] X. Yu and S. Petter, Understanding agile software development practices using shared mental models theory, Information and Software Technology, 56(8), 2014, pp. 911-921.

Forensic Logging Requirements

Based on experience we know that fraud investigations in the financial industry are often hampered by the poor quality of the logging produced by IT systems, especially now that fraudsters are using new techniques like APTs (Advanced Persistent Threats) and “anti-forensics” tooling. In general, a forensic analysis of the logging should provide insight into who did what, where, when, how and with what result. In this article we share the bad as well as the best practices we have encountered with respect to logging and audit trails. Finally, we propose a six-step approach for realizing logging and audit trails that adequately facilitate forensic investigations. It will also help companies to strengthen their internal control environment and be better able to comply with regulatory reporting requirements.

Introduction

The Association of Certified Fraud Examiners (ACFE) estimates that 14.3 percent of all internal fraud cases occur at financial institutions with an average loss of $258,000 per case ([Feig08]). Many of these frauds are committed via IT systems. For the investigation of these frauds it is important that the investigators can make use of the correct logging and audit trails. However in practice forensic investigators are often hampered by weak application logging and audit trails. Although implementation of an adequate logging solution sounds easy, it proves to be rather complex. Complicating factors are:

  • The complexity of IT. In general, a business process does not make use of just a single IT system. In practice several user applications are part of a process chain. And in addition many IT components and thus system software and hardware are involved. To mention a few: operating systems, databases, network components such as routers and firewalls, user access control systems etc. All of them (should) provide the right audit trail.
  • The sheer amount of data. The amount of data that is transferred and processed is still growing rapidly due to bigger data centers, faster processors, faster networks and new technologies like cloud platforms, Big Data ([Univ]) and Internet of Things ([Höll14]). On top of this, every IT device and application generates log files. However, there really are no standards for how these logs present their data. As a result, an investigator either has to learn what the log files are telling him or develop technologies to normalize these logs into some common and useable format.
  • Latest developments to wipe traces or circumvent logging and detection. Old techniques used by hackers to frustrate forensic investigations are hard disk scrubbing and file wiping by overwriting areas of disk over and over again, as well as the use of encryption (like Pretty Good Privacy) ([Geig06]) and the physical destruction of hardware. Nowadays, however, specialized so-called “anti-forensic tooling” is available that tries to manipulate logging remotely. The number of such tools is steadily growing and the techniques are getting more and more sophisticated. An additional complicating factor is the hacker technique of APTs (Advanced Persistent Threats), whereby hackers spread their activities over a long period of time – a couple of months is not unusual – while making as little “noise” as possible. The aim of this technique is to stay under the radar and plan for a huge “hit and run” attack after the system is fully compromised.

In this article we will investigate to which logging requirements the IT systems in the financial industry must comply to produce an audit trail that is adequate to support forensic investigations.

Note: this article is based on a paper for the graduation assignment of the Executive Program “forensic accounting expert” at the Erasmus University of Rotterdam.

Definition of the Problem

Many organizations are confronted with illegitimate actions via IT systems, like data leakage or fraudulent money transfers. According to the ACFE (Association of Certified Fraud Examiners), organizations suffer an average of more than 52 incidents of insider fraud annually. In such cases it is important to have a sound audit trail. However, while many organizations maintain access logs, most of these logs are insufficient due to the following three limitations ([Geig06]):

  • The logs are missing record and field-level data and focus solely on a given transaction. Most existing logs only contain information at the transaction level, such as: which users accessed which transaction at what time? In these cases, critical information is still missing, such as “Which specific records and fields did the user access?” and “What did the user do with the data?”
  • Many existing systems fail to log read-only actions, leaving gaps in the records. Most existing logs only record update activities. This leaves critical information about what was viewed, queried or simply accessed out of the audit trail entirely. In these cases, there is often no record of the points in time that information was accessed without being changed. This information is extremely important for preventing and investigating information leakage and data theft. Another area where this absence of information reveals significant gaps is in demonstrating access to private or privileged information.
  • Logs represent an incomplete view of activities that is often “hidden” across multiple systems and difficult to correlate. Many logs are maintained in separate systems or applications that don’t “talk” to each other. This makes it difficult to find and correlate relevant information – or respond quickly to an audit request. This reality often aids the malicious insider in obscuring their activity.

Legacy systems that were developed a decade or two ago, and even many newer systems, were not designed for collecting detailed data access logs. Altering logging capabilities or introducing a logging mechanism in these applications frequently requires the addition of a logging component to each online program. In a large enterprise, this can add up to tens of thousands of lines of code.

As a result most forensic investigations are hampered by the poor quality of the logging by IT systems. This is also evidenced by the input we received during our interviews with forensic investigators of three Dutch financial companies. Poor logging makes it very difficult for investigators to determine what actually happened and to trace transactions back to natural persons.

Next to a poor design of the logging, the quality of the log data can also be affected by the use of so-called “anti-forensic” ([Kedz]) tooling. Hackers make use of these tools when attempting to destroy any traces of their malicious actions.

Objective and Scope

The objective of this article is to investigate which logging requirements IT systems in the financial industry must comply with to produce and safeguard an audit trail that is adequate to support forensic investigations. This includes the legal requirements in the Netherlands.

The IT controls to prevent fraud are out of scope for this article.

Approach

For this article we have used the following approach:

  • Interviewing forensic staff of three Dutch financial institutions about the problems they encountered during special investigations with respect to the implemented logging and with respect to the company policies
  • Study of regulatory logging requirements for the financial industry in the Netherlands as prescribed in:
    • Wet Financieel Toezicht
    • Toetsingskader DNB 2014
  • Study of best practices and industry standards regarding logging specifics:
    • ISO 27001 Information security management systems (ISO = International Organization for Standardization)
    • COBIT Assurance Guide 4.1 (COBIT = Control Objectives for Information and related Technology)
    • PCI Data Security Standard (PCI = Payment Card Industry)
    • ISF Standard of Good Practices (ISF = Information Security Forum)
  • Analysis of logging
  • Drawing a conclusion

Regulations

Financial institutions in the Netherlands have to comply with the Financial Supervision Act (Wet op het Financieel Toezicht) and/or the Pension Act (Pensioenwet). Pursuant to these acts, the Dutch Central Bank (DNB) holds that financial institutions have to implement adequate procedures and measures to control IT risks. These risks relate to, among other things, the continuity of IT and the security of information. In this context, ‘adequate’ means proportionate: the measures and procedures should be in line with the nature of the financial institution and the complexity of its organizational structure. The procedures must be in conformity with generally accepted standards (good practices). Examples of such standards are COBIT ([ITGI07]) and ISO 27000. These standards provide for measures which are, in principle, considered adequate by DNB.

In order to test the security of information within financial institutions, DNB has developed an assessment framework consisting of a selection from COBIT. DNB developed this framework, which comprises 54 COBIT controls, in 2010. The framework was updated in 2014 and split into a “questionnaire” and a “points to consider” document.

The DNB assessment framework expects financial companies to implement a logging and monitoring function to enable the early prevention and/or detection, and subsequent timely reporting, of unusual and/or abnormal activities that may need to be addressed. To this aim DNB refers to the following COBIT requirements ([DNB], “Points to Consider”):

  • Enquire whether and confirm that an inventory of all network devices, services and applications exists and that each component has been assigned a security risk rating.
  • Determine if security baselines exist for all IT utilized by the organization.
  • Determine if all organization-critical, higher-risk network assets are routinely monitored for security events.
  • Determine if the IT security management function has been integrated within the organization’s project management initiatives to ensure that security is considered in development, design and testing requirements, to minimize the risk of new or existing systems introducing security vulnerabilities.

Some more detailed requirements regarding logging are included, but these focus not on applications (software) but rather on the IT infrastructure level, and in particular on components aimed at protecting against cybercrime ([Glen11]), such as firewalls ([Ches03]).

Company Policies

From a regulatory perspective, a specific policy on logging is not mandatory. However, the DNB Self-Assessment Framework ([DNB]) expects financial companies to develop a log retention policy. It is considered a best practice to include guidelines on logging in a Security Monitoring Policy. In addition, it is advisable to establish a Code of Conduct for internal staff and have them sign that they have read this code. Among other things, the code should specify the actions staff must refrain from, such as unethical behavior like browsing non-business-related data on the internet. Staff should be made aware that violations will be logged and that the log data can be used during special investigations.

From our interviews with the staff of three Dutch financial companies we learned that all of them have a Code of Conduct that must be signed by staff. In addition, logging requirements are addressed in internal policies, but only at a very high level. In practice, logging requirements are specified per application for most of the business-critical applications and for internet-facing applications. Data integrity protection requirements (like hashing to be able to detect unauthorized changes) are not specifically set for audit trails. In addition, the policies do not prescribe that logging should be stored in a centralized location. In practice, most logging for laptops and workstations is stored locally, and this very much hampers forensic investigations.

Best Practices and Industry Standards

For this article we have studied the following best practices and industry standards:

  • ISO 27001 Information security management systems (ISO = International Organization for Standardization)
  • COBIT Assurance Guide 4.1 (COBIT = Control Objectives for Information and related Technology)
  • PCI Data Security Standard (PCI = Payment Card Industry)
  • ISF Standard of Good Practices (ISF = Information Security Forum)

ISO and COBIT are well-known industry standards, as also evidenced by the fact that DNB specifically refers to them (see the previous paragraph). ISO 27001 Information security management systems provides some basic input related to logging requirements in section A10.10 Monitoring. Although the standard specifies that audit logs should be produced and kept for an agreed period to assist future investigations, it does not provide many details about such requirements.

Several sections in COBIT 4 address logging and audit trail related information. Compared to the other examined standards, COBIT gives specific attention to audit trails with respect to IT management processes such as configuration management, change management and backup processes. In addition, at a general level, attention is paid to logging related to hardware and software components.

PCI ([PCI15]) is a standard for credit card transactions. In our view PCI contains a best practice for logging that is to be provided by individual applications because it specifies:

  • That the audit trails should link all access to system components to an individual user.
  • Which events should be logged related to system components.
  • Which details should be included in the event log.
  • Time synchronization requirements, to be able to compare log files from different systems and establish an exact sequence of events.
  • The need to secure audit trails so they cannot be altered.
  • The need to retain audit trails so that investigators have sufficient log history to better determine the length of time of a potential breach and the potential system(s) impacted.
  • Review requirements, since checking logs daily minimizes the amount of time and exposure of a potential breach.

However, the standard would further improve by explicitly requiring that:

  • the logging specifies which specific records and fields the user accessed and what the user did with the data. For example, when data is adjusted, both the content before and after the change should be recorded;
  • the logging also includes information about read-only actions on highly confidential data.

Along with this, the ISF Standard of Good Practices ([ISF14]) provides a good addition to PCI, since it offers a more holistic view. ISF contains best practices about logging procedures as well as the kind of information systems for which security event logging should be performed. The ISF prescribes that security event logging should be performed on information systems that:

  a. are critical for the organization (e.g. financial databases, servers storing medical records or key network devices)
  b. have experienced a major information security incident
  c. are subject to legislative or regulatory mandates

In our view it would be best to subdivide criterion a) into:

a1. are critical for the organization (e.g. financial databases or key network devices)

a2. contain privacy related data (e.g. medical records or other confidential customer and personal data)

Bad Practices

During the interviews with the financial institutions, the following bad practices / common mistakes were identified:

  1. Lack of time synchronization. It is crucial for forensic analysis in the event of a breach that the exact sequence of events can be established. Therefore time synchronization technology should be used to synchronize clocks on multiple systems. When clocks are not properly synchronized, it can be difficult, if not impossible, to compare log files from different systems and to establish the sequence of events. For a forensic investigator the accuracy and consistency of time across all systems and the time of each activity is critical in determining how the systems were compromised.
  2. Logging is not activated. For performance reasons the logging function is mostly deactivated by default. So unless the logging is specifically activated no events are logged. In case of Internet applications it has to be noted that both the web application and the web server logging are needed and have to be activated separately.
  3. Logging is only activated in production systems. For performance and cost cutting reasons logging is often not activated in development, test and acceptance environments. This enables hackers to attack these systems without being noticed. After successfully compromising one of these systems the hacker can launch an attack on the production system via the network. Or simply use retrieved userid/password combinations as they are often the same on production and non-production systems.
  4. Logging is activated but log data gets overwritten. Most of the time a maximum storage space capacity is defined for logging data. When this capacity is reached the system starts to write the next log data from the beginning of the file again. Thus the old log data is overwritten and gets lost if it has not been backed up in the meantime.
  5. Logging is activated but log data is over-detailed. Log data needs to be tuned. If no filtering is applied the log file is overloaded with irrelevant events. This hampers efficient analysis in case of forensic investigations.
  6. Application logging requirements are not defined. During development of applications the asset owner as well as the developers often overlook defining the log requirements.

Logging Requirements

A forensic analysis of the logging should provide insight into who did what, where, when, how and with what result. Therefore the logging should contain complete data and be kept for a sufficient period of time in a secured manner. After studying the logging requirements listed in the regulations and best practices used for this article (see appendices B to F), we have divided the logging requirements into five categories that are relevant from a forensic investigations point of view:

  a. Retention requirements
  b. Correlation requirements
  c. Content requirements
  d. Integrity requirements
  e. Review requirements

Ad a) Retention requirements

The logging should be kept for a sufficient period of time. With the latest attack techniques (like APT = advanced persistent threat) hackers stealthily break into systems. As a result organizations are often compromised for several months without detection. The logs must therefore be kept for a longer period of time than it takes an organization to detect an attack so they can accurately determine what occurred. We recommend retaining audit logs for at least one year, with a minimum of three months immediately available online.

Ad b) Correlation requirements

A forensic analysis of the logging of IT systems must result in a chronological sequence of events related to natural persons. It is therefore important that logged activities can be correlated to individual users as well as to activities logged on other systems. These activities include business transactions as well as actions to get read access to sensitive data. As a business process chain contains several applications, the logging needs to be arranged for every application where users can modify transaction data or view sensitive data.

Because different applications running on different platforms each log a part of a transaction process, proper time synchronization between all computers is required. This is to ensure that timestamps in logs are consistent. To facilitate correlation, logging is best provided in a standardized format, like the Common Event Expression (CEE) standard. (CEE is an initiative developed by a community of vendors, researchers and end users, coordinated by MITRE; its primary goal is to standardize the representation and exchange of logs from electronic systems. See https://cee.mitre.org.)
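As an illustration of what such normalization and correlation can look like, the sketch below maps two different, made-up log formats onto one common record with UTC timestamps, so that events from different systems can be sorted into a single chronological sequence. It mimics the idea behind a common event format; it is not the CEE specification itself, and the input formats are assumptions.

```python
# Minimal sketch: normalize two different (illustrative) log formats into one
# common record with UTC timestamps so events can be correlated across systems.
import re
from datetime import datetime, timezone

APP_LOG = re.compile(r"(?P<ts>\S+ \S+) user=(?P<user>\S+) action=(?P<action>\S+)")
FW_LOG = re.compile(r"(?P<ts>[^|]+)\|(?P<src>[^|]+)\|(?P<action>[^|]+)")

def normalize(line: str, source: str) -> dict:
    if source == "app":
        m = APP_LOG.match(line)
        # assumption: the application writes local timestamps in UTC
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        return {"time": ts, "system": "app", "user": m["user"], "action": m["action"]}
    m = FW_LOG.match(line)
    ts = datetime.fromtimestamp(int(m["ts"]), tz=timezone.utc)  # epoch seconds
    return {"time": ts, "system": "firewall", "user": m["src"], "action": m["action"]}

events = [
    normalize("2023-05-01 10:15:02 user=jsmith action=view_account", "app"),
    normalize("1682935140|10.0.0.12|deny", "fw"),
]
# Sorting on the normalized UTC timestamp yields one chronological sequence of events.
for e in sorted(events, key=lambda e: e["time"]):
    print(e)
```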

Ad c) Content requirements

The main requirement content wise is that the logging provides information for all relevant events on who did what, where, when, how and with what result. Both application as well as infrastructure logging should provide this insight.

The PCI standard gives a good reference for application logging, but the precise log content should be decided on basis of individual applications and follow a risk based approach. In many cases this will result in the requirement to also log read-only actions, something that is now often forgotten.

The DNB assessment framework provides a good reference for infrastructure logging, especially regarding network components that are aimed to protect against cybercrime (such as firewalls).
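To make the content requirement tangible, the sketch below shows what a content-complete application audit record could look like: who did what, where, when, how and with what result, including before/after values for changes and an explicit record of read-only access to sensitive data. The field names are illustrative, not prescribed by any of the standards discussed.

```python
# Minimal sketch of a content-complete application audit record. Field names are illustrative.
import json
from datetime import datetime, timezone

def audit_record(user, action, obj, record_id, result,
                 before=None, after=None, fields_read=None):
    return {
        "time": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,                                     # who
        "action": action,                                 # what / how
        "object": obj, "record_id": record_id,            # where (which record)
        "result": result,                                 # with what result
        "before": before, "after": after,                 # old and new values for changes
        "fields_read": fields_read,                       # read-only access to sensitive data
    }

# A data change: log both the old and the new value
print(json.dumps(audit_record(
    "jsmith", "update", "vendor_bank_account", "V-1001", "success",
    before={"iban": "NL01BANK0123456789"}, after={"iban": "NL99BANK9876543210"})))

# A read-only action on highly confidential data: log it as well
print(json.dumps(audit_record(
    "jsmith", "read", "customer_record", "C-2044", "success",
    fields_read=["name", "medical_notes"])))
```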

Ad d) Integrity requirements

An attacker will often attempt to edit the audit logs in order to hide his activity. In the event of a successful compromise the audit logs can be rendered useless as an investigation tool. Therefore logging should be protected to guarantee completeness, accuracy and integrity.
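One common way to make tampering detectable is to chain each log record to the previous one with a keyed hash, so that editing or deleting an earlier record invalidates every later link. The sketch below illustrates the idea only; key management and write-once storage, which such a scheme depends on, are deliberately out of scope.

```python
# Minimal sketch: chain log records with a keyed hash (HMAC) so that editing or
# deleting an earlier record breaks every later link. Not a production scheme.
import hmac, hashlib, json

SECRET_KEY = b"store-this-key-outside-the-logged-system"  # illustrative only

def append(chain, record):
    prev_mac = chain[-1]["mac"] if chain else ""
    payload = json.dumps(record, sort_keys=True) + prev_mac
    mac = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"record": record, "mac": mac})

def verify(chain):
    prev_mac = ""
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True) + prev_mac
        expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev_mac = entry["mac"]
    return True

log = []
append(log, {"user": "jsmith", "action": "update", "record_id": "V-1001"})
append(log, {"user": "jsmith", "action": "read", "record_id": "C-2044"})
print(verify(log))                          # True
log[0]["record"]["user"] = "someone_else"   # tamper with an earlier record
print(verify(log))                          # False: the chain no longer verifies
```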

Ad e) Review requirements

Security-related event logs should be analyzed regularly to help identify anomalies. This analysis is best done using automated security information and event management (SIEM) tools or equivalent; a minimal sketch of such automated processing follows the list below. The analysis should include:

  • processing of key security-related events (e.g. using techniques such as normalization, aggregation and correlation)
  • interpreting key security-related events (e.g. identification of unusual activity)
  • responding to key security-related events (e.g. passing the relevant event log details to an information security incident management team).

Analysis of Logging

Many breaches occur over days or months before being detected. Regularly reviewing logs minimizes the duration and exposure of a potential breach. If exceptions and anomalies identified during the log-review process are not investigated, the entity may be unaware of unauthorized and potentially malicious activities that are occurring within its own systems.

Therefore a daily review of security events is necessary to identify potential issues. This encompasses, for example, notifications or alerts that identify suspicious or anomalous activities, especially in logs from critical system components and logs from systems that perform security functions, such as firewalls, Intrusion Detection Systems and Intrusion Prevention Systems.

Logs for all other system components should also be periodically reviewed to identify indications of potential issues or attempts to gain access to sensitive systems via less-sensitive systems. The frequency of the reviews should be determined by an entity’s annual risk assessment.

Note that the definition of a “security event” will vary for each organization and may include consideration of the type of technology, location and function of the device. Organizations may also wish to maintain a baseline of “normal” traffic to help identify anomalous behavior.

The log review process does not have to be manual. The use of log harvesting, parsing and alerting tools can help facilitate the process by identifying log events that need to be reviewed. Many security event management tools are available that can analyze and correlate every event for the purpose of compliance and risk management. Such a tool sorts through millions of log records and correlates them to find the critical events. This can be used for a posteriori analysis such as forensic investigations, but also for real-time analysis to produce dashboards, notifications and reports that help prioritize security risks and compliance violations.
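As a simple illustration of such automated log review – a toy sketch, not a replacement for a full SIEM solution – the routine below aggregates failed logons per user within a sliding time window and raises an alert above a threshold; the log format, user names and threshold are illustrative assumptions:

```python
# Sketch: rudimentary alerting on repeated failed logons within a sliding window.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # illustrative threshold; tune per risk assessment

def failed_logon_alerts(events):
    """events: iterable of (timestamp, user, outcome) tuples, ordered by time."""
    recent = defaultdict(deque)
    for ts, user, outcome in events:
        if outcome != "FAILED_LOGON":
            continue
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > WINDOW:  # drop events outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            yield f"ALERT {ts.isoformat()}: {len(q)} failed logons for '{user}' within {WINDOW}"

sample = [(datetime(2015, 3, 12, 9, 0) + timedelta(minutes=i), "jdoe", "FAILED_LOGON")
          for i in range(6)]
for alert in failed_logon_alerts(sample):
    print(alert)
```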

DNB recommends deploying SIM/SEM (security incident management/security event management) or log analytics tools for log aggregation and consolidation from multiple machines, and for log correlation and analysis. SIEM (Security Information and Event Management) is a term for software products and services combining both SIM and SEM.

SIEM software such as Splunk and ArcSight combines traditional security event monitoring with network intelligence, context correlation, anomaly detection, historical analysis tools and automated remediation. Both are multi-level solutions that can be used by network security analysts, system administrators and business users, as well as by forensic investigators to effectively reconstruct many system and user activities on a computer system.

Conclusion and Recommendations

At first sight it seems relatively easy to arrange adequate logging. On closer inspection, however, there is much more to it. Logging should provide information on who did what, where, when, how and with what result, and both application and infrastructure logging should provide this insight. The sheer amount of data, the complexity of IT systems and the large variety in hardware, software and logging formats often hamper forensic investigations within financial institutions. It is therefore very difficult and time consuming to perform a forensic investigation on an IT system and reconstruct a chronological sequence of events related to natural persons, especially since in practice most logging does not contain all data relevant for forensic investigations.

In our opinion the root cause is twofold: a lack of specific regulations and company policies, and a lack of priority for logging requirements during the development of new systems. In our view a best practice for achieving an adequate audit trail during system development would be:

  • Risk assessment. Perform a risk assessment as the first step of the development phase and make an inventory of the key operational risks (including the risk of fraud and data leakage).
  • Key controls. Define the key controls to mitigate the key risks to an acceptable level.
  • Event definition. Related to the key controls, define which events should be logged and which attributes (details) should be logged per event to prove that the controls are working effectively. Have the event definition and the previous steps reviewed by staff of Internal Audit and/or Special Investigations (a minimal sketch of such an event definition follows this list).
  • Logging. Design the logging based on the defined events and attributes, taking into account the general logging requirements regarding retention, correlation, content, integrity and review (see “Logging Requirements”).
  • Implementation. Implement the system and perform a User Acceptance Test. This test should include testing the effectiveness of the key controls and the completeness of the logging.
  • Monitoring. Periodically monitor the logging to help identify suspicious or unauthorized activity. Because of the sheer amount of data this is best done with an automated tool (SIEM solution).
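To make the event definition and logging design steps more concrete, the sketch below shows what a machine-readable event definition derived from two hypothetical key controls could look like, together with a check that could support the User Acceptance Test; the controls, event names and attributes are purely illustrative:

```python
# Illustrative event definitions derived from key controls; the controls,
# events and attributes are examples, not a prescribed standard.
EVENT_DEFINITIONS = [
    {
        "control": "Payments above EUR 100,000 require a second approval",
        "event": "PAYMENT_APPROVED",
        "attributes": ["timestamp", "user_id", "payment_id", "amount",
                       "currency", "approver_id", "result"],
        "retention": "13 months, of which 3 months online",
        "log_read_access": False,
    },
    {
        "control": "Access to customer data is restricted to authorized staff",
        "event": "CUSTOMER_RECORD_VIEWED",
        "attributes": ["timestamp", "user_id", "customer_id",
                       "source_ip", "application", "result"],
        "retention": "13 months, of which 3 months online",
        "log_read_access": True,  # read-only actions are logged as well
    },
]

def validate(record, definition):
    """Check during the User Acceptance Test that a log record carries all required attributes."""
    missing = [a for a in definition["attributes"] if a not in record]
    return not missing, missing

ok, missing = validate({"timestamp": "2015-03-12T09:14:03Z", "user_id": "jdoe"},
                       EVENT_DEFINITIONS[0])
print(ok, missing)  # False, plus the payment attributes that are still missing
```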

Such an approach would not only facilitate forensic investigations; it would also help companies to strengthen their internal control environment and to better comply with regulatory reporting requirements.

References

[Ches03] W.R. Cheswick, S.M. Bellovin and A.D. Rubin, Firewalls and Internet Security: Repelling the Wily Hacker (2nd ed.), 2003.

[DNB] DNB Self Assessment Framework, http://www.toezicht.dnb.nl/binaries/50-230771.XLSX

[Feig08] N. Feig, Internal Fraud Still a Danger as Banks Adjust Strategy, 31 January 2008, http://www.banktech.com/internal-fraud-still-a-danger-as-banks-adjust-strategy/d/d-id/1291705?

[Geig06] M. Geiger, Counter-Forensic Tools: Analysis and Data Recovery, 2006, http://www.first.org/conference/2006/program/counter-forensic_tools__analysis_and_data_recovery.html

[Glen11] M. Glenny, DarkMarket: Cyberthieves, Cybercops and You. New York: Alfred A. Knopf, 2011.

[Höll14] J. Höller, V. Tsiatsis, C. Mulligan, S. Karnouskos, S. Avesand and D. Boyle, From Machine-to-Machine to the Internet of Things: Introduction to a New Age of Intelligence, Elsevier, 2014.

[ISF14] ISF, The Standard of Good Practice for Information Security, https://www.securityforum.org/tools/sogp/

[ITGI07] IT Governance Institute, COBIT 4.1 Excerpt, 2007, https://www.isaca.org/Knowledge-Center/cobit/Documents/COBIT4.pdf

[Kedz] M. Kedziora, Anti-Forensics, http://www.forensics-research.com/index.php/anti-forensics/#index

[PCI15] PCI Security Standards Council, PCI SSC Data Security Standards Overview, https://www.pcisecuritystandards.org/security_standards/documents.php?agreements=pcidss&association=pcidss

[Univ] University Alliance, What is Big Data?, Villanova University, http://www.villanovau.com/resources/bi/what-is-big-data/#.VSAwR_msXuQ

[Wiki15] Wikipedia, Security information and event management, 2015, http://en.wikipedia.org/wiki/Security_information_and_event_management

The new auditor's report

Interview: Brigitte Beugelaar

Now that the AGM season is over, it is time to look back on the particulars of 2014 financial reporting and auditing. One novelty was the application of the new company-specific auditor's report at financial institutions. In this context Peti de Wit, chairman of the Financial Services audit practice, shares his views on the new report. This interview looks at what the auditor's report is, why it was introduced, what the experiences have been so far and how IT features in it. For a number of financial institutions we examined whether IT appears in the auditor's report and how the risks and the auditor's procedures are described.

What does the new auditor's report entail?

The purpose of the new auditor's report is to enable the auditor to communicate more effectively with the user of the report about the audit and its outcomes. In the report, the auditor addresses those aspects of his work that were most relevant this year for this specific company. That is why the new report is also referred to as the company-specific report. The reason for changing the auditor's report is that we, as a profession, feel called upon to do so. There is a societal call for us to present ourselves more proactively and accessibly, and to be clearer about why we perform which procedures. In 2013 KPMG already 'experimented' with the new auditor's report at a number of companies. As of 2014, Standard 702N has been made mandatory for all public interest entities (OOBs).

What is so different about this auditor's report?

The report starts with the conclusion on the true and fair view of the financial statements. It then provides further context on a number of aspects of the audit, such as materiality, the scope of the group audit (the audit of group components in the Netherlands and abroad) and the key audit matters.

In addition, the new auditor's report addresses the topics that, given their nature and size and from a risk perspective, were most significant in the audit in relation to the risk of a material misstatement in the financial statements.

Why was this report introduced?

In the Netherlands this standard became mandatory in 2014, which puts us ahead of the international roll-out expected in 2016. In conjunction with the new auditor's report, the external auditor increasingly explains his audit and auditor's report at the annual general meeting of shareholders (AGM). The new auditor's report is an important guide for this explanation and provides a good basis for any additional questions from shareholders.

The report offers more insight into what the auditor actually delivers for users of the financial statements, and in that way it makes a positive contribution to the decision making of those users. This is the case, for example, because the auditor points out in his report certain risks or uncertainties that management has taken into account in determining estimates, or that are described in the notes to the financial statements.

This development has two effects: users gain more insight into the auditor's work at a specific company, and the audit work of auditors at different companies within a given sector can be compared. For instance, the materiality applied by the auditor can be compared.

2014 was the first year of mandatory adoption for OOBs. What are the first experiences?

The experiences are positive; there is positive feedback from executive boards, supervisory board members, shareholders and auditors. The greatest appreciation is for the insight provided into the key audit matters. This allows users to see what the main points of attention and the associated risks were, and to ask the question: what did you do about them?

KPMG auditors have been well aware of the importance of properly explaining the key audit matters. Wherever possible we have structured our reports along three steps: describing the audit risk; describing the procedures we performed to mitigate that risk – trying to be as concrete and clear as possible in the specific context of the company and to avoid lapsing into boilerplate texts – and finally, as an addition to the minimum requirements of the NBA, adding color through our observations.

The purpose of the audit is to examine whether the financial statements as a whole give a true and fair view, but by sharing our observations we can add emphasis that users of the financial statements can benefit from. In the 2014 financial statements of SNS REAAL, a financial conglomerate that is being dismantled, six different key audit matters are elaborated and the auditor's report is eight pages long in total. One of these key audit matters concerns the significance of the unbundling of the group, the related (IT) risks and the way this was addressed in the financial statement audit.

In addition, we see that an explanation by the external auditor at the AGM is becoming common practice. This meets the wishes of parties such as the Vereniging van Effecten Bezitters (VEB) and Eumedion. And whereas we initially saw some reluctance among executive boards and supervisory directors, the experience now is that, by appearing at the meeting, the external auditor makes a valuable contribution to the quality of the AGM.

There are also areas of the audit that, by their nature, are complex to include as a key audit matter in the auditor's report. Consider, for example, going concern risk at banks, or risks in the area of integrity and fraud. Here, practice still needs to develop further.

Finally, over the past year much attention also went into formulating these reports for the first time and finding the right message and wording. We now see that the texts on, for example, IT or specific valuation issues are still largely written by the auditors themselves; the involvement of specialists in the audit is described, but you get the impression that this is not yet done in the specialists' own words.

How is the attention for IT reflected in the reports, and is this applied consistently?

Looking at the auditor's reports of large banks and insurers, we see that in most cases a key audit matter on IT has been included. This mainly addresses the reliability and continuity of automated data processing. This is obviously an important part of the audit, but the texts are still rather generic and not very specific about IT aspects. In the more business-related key audit matters I also see few IT components when it comes to IT in relation to core processes or, for example, data quality. There is still ground to be gained here as well. A start has been made, but this needs to develop further, particularly because technology plays a very important role in the business and operating models of financial institutions and is therefore highly relevant to the accountability for, and the audit of, their operations. Take, for example, the developments in payments for banks; I expect we will read more about that in the auditor's reports of the coming years.

Based on a random selection of auditor's reports of both FS and non-FS companies, we examined whether IT is mentioned as a key audit matter and whether the wording is generic or given a more specific interpretation. Our analysis shows that IT appears more frequently as a key audit matter at FS institutions than at non-FS institutions. The explanation may be that IT plays such an important role in the operations of FS institutions that the audit relies on IT to a significant extent. Auditing around IT is not an option, so including a key audit matter on IT is an obvious choice.

In my view, the importance of IT as a key audit matter should be weighed against the other topics that matter. It should not become a standard reflex to mention IT merely for the sake of mentioning it.

What we are particularly looking forward to is the second year. Then, for the first time, a comparison over time can be made. Auditors will have to be able to answer questions from executive boards, supervisory directors and shareholders such as: 'This point was also included last year; what actions have you taken?', 'Last year you formulated this as a key audit matter; why not this year?' Or: 'Why is this a key audit matter this year when you did not mention it last year?' It promises to be an interesting new year!

Drs. P.A.M. de Wit RA is chairman of Financial Services Audit at KPMG Accountants.

dewit.peti@kpmg.nl

With XBRL beyond the singularity point

Interview: Brigitte Beugelaar

The digital world is developing at a rapid pace. With the rise of Big Data, cloud computing and the Internet of Things, the foundation has been laid for a shift in the information landscape from document exchange to data exchange. The government is responding to this as well, as shown by Minister Plasterk's May 2013 vision letter to the Dutch House of Representatives on the digital government in 2017 and the upcoming amendment of the Chamber of Commerce Act (Wet op de Kamer van Koophandel) ([BZK13]). By using XBRL, a large degree of efficiency and effectiveness is achieved in this data exchange. Audit innovation is therefore called for, a development that prompted us to ask the chairman of XBRL Nederland, professor Hans Verkruijsse, for his views.

Is XBRL really the start of a revolutionary development in the information landscape?

Yes, I am absolutely convinced of that. XBRL will have a greater impact on business than the introduction of the internet has had on our personal lives. Information exchange will increasingly take place between automated systems without human intervention. We are on the eve of the singularity point. As I already indicated in my inaugural lecture ([Verk10]), I define a singularity point as a moment of societal upheaval caused by very rapid technological developments. After that moment, all our existing rules and principles will have become inadequate, both in terms of their purpose and their applicability.

The specific characteristics of XBRL – linking the semantic meaning of a data element to the data element itself – make XBRL an enabler of efficient and effective data exchange between automated systems. The contextual information that humans need to understand a message is redundant for automated systems and can therefore be omitted. It is often said that XBRL will only apply to financial statements. I immediately dismiss that as a myth. This is about exchanging data between automated systems, and then it does not really matter what the content of that data is.

Will the use of XBRL be the driver behind the new wave of audit innovation?

It will indeed have to be. At present, accountability documents such as financial statements are first prepared in the human-readable version, the paper version, and subsequently translated into various languages such as English, French and German, and into XBRL. As long as the statutory financial statements are the paper financial statements, translating them into XBRL only means additional costs; after all, the auditor's opinion relates to the paper financial statements. Now that the Chamber of Commerce Act is about to change and financial statements will have to be filed in the Standard Business Reporting (SBR) format, which is based on XBRL, the statutory financial statements become the XBRL financial statements. As a result, the statutory financial statements are no longer directly readable by humans; only through rendering do they become readable again. The object of auditing is then no longer a paper document but a document that is not human-readable, which must lead to audit innovation.

Is this audit innovation a paradigm shift or a continuation of current audit methodologies?

I am convinced that it will bring about a paradigm shift. Because the object of auditing changes, this shift will move from today's 'document level assurance' to future 'data level assurance', or from a 'true and fair view' opinion to an 'accuracy' opinion. After all, every accountability document on which the auditor is asked to give an opinion is a (somewhat arbitrary) collection of data. When I think of the multitude of accountability documents currently requested – financial statements, half-year and quarterly figures, subsidy statements, credit reports, and so on – the same data will frequently be included in different documents. If these documents are prepared in an XBRL format, they are pulled directly from the supplying automated system into the requesting automated system, without human intervention. It is therefore much more efficient to take the data, rather than the documents in which the data are included, as the object of the audit. After all, the data are processed directly by the requesting system, which does not look at the 'view' created by the interplay of those data. This also removes the value of a 'true and fair view' opinion.

Are we not innovating backwards in time?

Yes, indeed, we are going back to a substantive, data-oriented audit. Whereas until now the object of the audit has been the information provided by an organization, in the near future the object of the audit will be the data requested by an external automated system. It will no longer be the data-supplying organization that determines which information is good for the users; the data-requesting users themselves will determine this. This means a conceptual shift from an information push exchange to a data pull exchange. The refinement of individual data into information no longer takes place at the information-supplying organization, but in the data-requesting automated system. The concept of information therefore needs to be defined more precisely. The new definition will have to move in the direction of: information arises when data contribute to the knowledge of a human being or an automated system. That we are passing the singularity point here should be clear. It still has to be examined, however, which principles within auditing theory need to be adjusted. Concepts such as materiality and significance, for instance, are relics from the past.

But weren't we going back in time? Isn't that also a relic from the past?

Yes, you are right if you actually perform a fully substantive audit on the underlying data. But the implementation of XBRL should not be viewed in isolation. Precisely because the semantic meaning of data is linked to the data themselves, a much more efficient and effective accounting information function also becomes possible. This makes it feasible to apply the concept of continuous monitoring. Audit innovation will therefore focus on concepts such as process mining, with which it can be established whether the process of continuous monitoring has actually worked, after which a fully substantive audit can be performed on the metadata. The step after that is continuous auditing, and that is a very different concept from simply visiting the client a bit more often.

All in all, this is much more focused on data analysis in its purest form and offers the possibility of actually providing 'data level assurance'. If that is not audit innovation, then I do not know what is.

When do you expect this to happen?

Make no mistake, it has already begun! Let us first look at what the legislator intends. The current bill to amend the Chamber of Commerce Act states that small companies must file their financial statements for 2016 in XBRL, medium-sized companies those for 2017, and large and listed companies those for 2019. The EU Transparency Directive indicates that the financial statements of listed companies throughout the EU must be filed for 2020 in a format described as an XBRL format.

My view is that the need for audit innovation has already emerged; this must now be followed by the start of the development of audit innovation.

References

[BZK13] Ministerie van Binnenlandse Zaken en Koninkrijksrelaties, Visiebrief Digitale overheid 2017, 2013, www.rijksoverheid.nl/documenten-en-publicaties/kamerstukken/2013/05/23/visiebrief-digitale-overheid-2017.html.

[Verk10] J.P.J. Verkruijsse, Bestuurlijke informatieverzorging: Na het singularity point een nieuwe glanzende toekomst?, inaugural lecture, Tilburg University, 2010.

Prof. dr. J.P.J. Verkruijsse RE RA is professor of Accounting Information Systems (Bestuurlijke Informatieverzorging) and director of the International Post Master Accountancy program at Tilburg University. He chairs the Professional Ethics Council of NOREA, the Boards of Supervision on reliable administration and XBRL Nederland, and is a member of the Member Assembly of XBRL International Inc. He is also international research director at the Global Accountancy Transparency Institute and conducts research into continuous monitoring and auditing. He is an editor of the Journal of Information Systems. For many years he was involved, internationally (IFAC/IAASB/IAESB) and nationally (CCR), in drafting and issuing regulations for accountants. He was a partner at Ernst & Young Accountants for 21 years and obtained his PhD from Maastricht University.

Data Driven Dynamic Audit

Organizations act in a very dynamic and international environment. This requires a financial auditor to constantly adapt to new developments. In addition to automating the financial audit workflow (e.g., electronic audit files), auditors usually begin by performing some data analytics of transactions (e.g., analysis of journal entries): a good step forward which, for example, helps comply with the increasing number of regulatory requirements. However, quite frequently the financial audit approach is not updated or adjusted and the real benefits of the application of data analytics in a financial audit context are not fully exploited. Therefore, a new approach is called for, which we refer to as the “data driven dynamic audit”.

Introduction

Data analytics aiming to enable a more effective and efficient audit is becoming more and more common in (financial) audit approaches of both internal and external auditors. However, the application of data analytics in today's financial audit approach mainly focuses on the analysis of routine transactions in the company, e.g., transactions and controls in the purchase-to-pay process and the analysis of manual journal entries. Although this type of data analytics still needs to be embedded further in the end-to-end financial audit approach, much progress also remains to be made in applying data analytics to a company's non-routine transactions. This includes the application of data analytics in the areas of risk assessment, impairment testing, benchmarking, anti-fraud procedures, going concern, etc.

This article explores several developments in the sources and types of data which can be used in the financial audit and how these may impact the audit approach and the auditor.

Data Driven Approach

Both internal and external auditors have a wide variety of data available that can be relevant from a financial audit point of view. In most cases, however, the auditor focuses on data that is available in the organization itself. Generally this data is from a ‘known’ and structured source, e.g., the organization’s ERP environment or Business Intelligence environment.

The (data) landscape is changing and growing in the sense that organizations are also exploring the possibilities of utilizing data from outside the organization's perimeter, with a view to improving management insight and creating new business opportunities. This change in the use of data as part of an organization's daily operations also has an impact on the data and information the auditor needs for his financial statement audit. The financial auditor will not only need to join the data journey within the organization, but also needs to include externally available data to implement a data driven dynamic audit.

Over the last few years, the effect of the changing data landscape on the audit approach has gradually increased and more and more auditors are embracing these technologies, increasingly using information technology in the audit. However, as was noted in a recent American Institute of Certified Public Accountants (AICPA) whitepaper: “For the most part, IT has been used to computerize and improve the efficiency of established processes rather than transform or replace them. Consequently, improvements have been incremental rather than transformative” ([AICP14]). Accordingly, a lot of work still needs to be done to transform the audit rather than improve on what auditors have been doing over the last 50 years.

When we take the example of a medium-sized retailer with a relatively simple IT backbone and automation, the financial auditor will probably rely on his own knowledge of the industry and on a structured balance sheet and income statement from the organization's ERP system to perform the risk assessment and plan the audit procedures. However, the use of public data can also influence how we interpret risks residing in the environment of the organization, even more so than internal data alone. Numerous books and articles have been written on subjects such as the impact of social media on the business of a retail organization (e.g., how to address complaints made via social media). If social media influence the retail organization, why should the auditor not use this data to plan his risk assessment procedures for the financial audit?

To better understand the different categories of data and the relevance of these categories for the financial auditor, a Data Category Framework has been defined. This framework (see Figure 1) includes the data available within a company as well as externally available data. In addition, the framework links structured (e.g., databases) and unstructured (e.g., video, social media) data to either category. The four individual elements are described below.

C-2015-3-Veld-01

Figure 1. Data Category Framework.

Structured Internal Data

This category of data typically refers to the data sources managed, owned and created by the organization itself and contains the transactional data that is used in the core business and supporting processes of an organization. The data life cycle from creation to destruction normally follows a predefined, structured path, determined by e.g., the company’s policies and procedures, in combination with the way the system has been set up. Examples include the transactional data stored in the ERP system (AP/AR invoices, Purchase Orders etc.), customer data stored in the organization’s CRM environment or personnel data stored in the HR system. We refer to this data as structured internal data, irrespective of where the data is physically stored – be it in the cloud, on the premises or with a hosting provider.

Example of using social media data in impairment testing

An international retail organization buys its own production forest to ensure the delivery of high-quality wood for its own production. The forest, however, is adjacent to a protected wildlife area. Threats to occupy the production forest have recently been posted on social media by activists claiming that the production forest threatens the fragile wildlife in the nearby protected area. Should the financial auditor be aware of such threats for his audit of the impairment test on the company's biological assets? It is questionable whether the financial auditor will be able to adequately audit the impairment test using only the data available within the organization.

For the auditor, structured internal data is a rather convenient category. This type of data has been used as part of the financial statement audit for decades and is usually the basis for all testing procedures. As a rule, this category of data is also the starting point for applying data analytics procedures as part of the audit approach (data analytics for automated control testing, manual journal entry testing etc.). Opportunities for further enhancement of data analytics routines in this area lie in using this data to drive the internal aspects of the risk assessment, performing analytical procedures as part of the planning phase of the audit, further optimizing the use of this data in both interim and year-end audit procedures (e.g., the application of process mining tooling as part of the process walkthroughs) as well as using this data for predictive analytics.
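As a simple illustration of analytics on structured internal data – a sketch using pandas on a hypothetical general ledger extract, not a description of any specific audit tool – the routine below screens the full population of journal entries for exceptions such as weekend postings and round amounts:

```python
# Sketch: journal entry testing over the full population using pandas.
# Column names and thresholds are assumptions about a hypothetical GL extract.
import pandas as pd

entries = pd.DataFrame({
    "document": ["1001", "1002", "1003"],
    "posted_by": ["jdoe", "batch", "cfo"],
    "posting_date": pd.to_datetime(["2015-03-14", "2015-03-16", "2015-03-15"]),
    "amount": [1250.37, 9800.00, 1000000.00],
})

# Flag postings on weekends and suspiciously round amounts.
entries["weekend_posting"] = entries["posting_date"].dt.dayofweek >= 5
entries["round_amount"] = (entries["amount"] % 10000 == 0) & (entries["amount"] > 0)

exceptions = entries[entries["weekend_posting"] | entries["round_amount"]]
print(exceptions[["document", "posted_by", "posting_date", "amount"]])
```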

By using suitable analytical procedures, the results of the analytics on this type of data can provide almost complete assurance over the correlation between the cash and goods movements, for example, limiting the additional requirements to perform substantive test procedures in the year-end audit. See also the article regarding the goods/money flow data analytics by Bram van der Heijden and Satiesh Bajnath in this issue of Compact.

Unstructured Internal Data

Presumably, the larger part of the data residing within an organization can even be categorized as unstructured internal data: estimates are that 80% of companies' data is unstructured ([Wiki15]). This category of data is characterized by the fact that it does not normally follow the regular data life cycle: no predefined policies or processes are in place to govern the data from creation to destruction, and the systems supporting this category of data are usually more like data sharing platforms. Examples of this category include SharePoint environments, an organization's internet and intranet websites, videos, PowerPoint presentations, emails, firewall traffic, etc.

From an auditor’s point of view, this type of data can be of high interest for the financial statement audit, as these data sources normally provide the guidance on the organization’s processes, input for the risk assessment as well as evidence for the control testing (e.g., approval emails, process descriptions).

Examples of unstructured data in the audit

Example 1

A listed company has created a SharePoint environment to document its Risk and Control Matrices (RCMs) as well as the test procedures for its Internal Control Statement. Over time, such a SharePoint becomes a valuable source of data for the auditor as it can include all kinds of approval emails, procedures and work instructions, results of the control testing etc. By applying data analytics procedures (e.g., advanced text analytics), the auditor can gain insight into possible gaps in the risk assessments that have not been addressed in the RCMs.
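A toy sketch of such a gap analysis is shown below; it matches a risk inventory against the RCM control descriptions using simple keyword matching, where a real implementation would rely on more advanced text analytics, and all data is illustrative:

```python
# Sketch: flag risks from the risk inventory that are not covered by any
# control description in the RCM; data and matching logic are illustrative.
risk_inventory = ["segregation of duties payments",
                  "unauthorized master data changes",
                  "revenue cut-off"]
rcm_controls = [
    "Payment proposals are approved by a second employee (segregation of duties).",
    "Changes to vendor master data are reviewed weekly.",
]

def uncovered_risks(risks, controls):
    control_text = " ".join(controls).lower()
    for risk in risks:
        # Consider a risk 'covered' if any of its keywords appears in a control description.
        if not any(word in control_text for word in risk.lower().split()):
            yield risk

print(list(uncovered_risks(risk_inventory, rcm_controls)))  # ['revenue cut-off']
```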

Example 2

In case of a serious security hack (the Sony hack in 2014, for example) including information leakage of the company’s IP, the firewall log data or email traffic can be analyzed to identify possible unwanted traffic and to evaluate the risk of a devaluation of a company’s assets, for example in the case of unwanted access to intellectual property. In a scenario of identifiable personal data leakage, the auditor should also be aware of possible claims or fines and, as a result of this, of the potential impact on accounting (provisions) and financial statement disclosures.

Structured External Data

The use of structured external data is already being explored in the audit domain and is characterized by the fact that the data life cycle follows a path similar to that of internal structured data, except that the data is not owned or managed by the organization itself. This category of data includes data coming from, for example, analysts or subject matter experts, industry data, going concern analytics or from the organization’s supply chain environment.

This data category is very promising, not only from an auditor’s point of view, but also for the organization itself. However, the application of data analytics using this category of data in a financial audit setting is limited.

Examples of structured external data in the audit

Example 1

WoodMackenzie is an international intelligence organization specializing in analyzing data within the energy, metals and mining industries. For oil fields around the world, WoodMac has created several industry databases that include detailed data on production capacity volumes, possible future decommissioning dates, etc. Organizations themselves hardly use this type of industry data, as they already have it in their own systems. The external auditor, however, can use this external data when evaluating the oil fields to challenge the organization in this process, and specifically to identify the potential impact on the financial statements (e.g., asset valuations and reserves disclosures).

Example 2

When performing a supply chain audit, the auditor normally relies on data coming from the organization's own ERP environment. Opportunities to increase assurance throughout the value chain, however, lie in combining the internal data from the company's ERP environment with the external data of its suppliers (the sales of organization A should equal the purchases of organization B). This would, however, require the full co-operation of the supplier to gain access to the data.
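A minimal sketch of such cross-entity matching, assuming both parties can share structured extracts with a common order reference (the column names and figures are illustrative):

```python
# Sketch: reconcile organization A's sales with organization B's purchases
# on a shared order reference; column names are illustrative assumptions.
import pandas as pd

sales_a = pd.DataFrame({"order_ref": ["PO-1", "PO-2", "PO-3"],
                        "sales_amount": [100.0, 250.0, 80.0]})
purchases_b = pd.DataFrame({"order_ref": ["PO-1", "PO-2", "PO-4"],
                            "purchase_amount": [100.0, 200.0, 60.0]})

matched = sales_a.merge(purchases_b, on="order_ref", how="outer", indicator=True)
matched["difference"] = matched["sales_amount"].fillna(0) - matched["purchase_amount"].fillna(0)

# Exceptions: unmatched order references or amount differences.
exceptions = matched[(matched["_merge"] != "both") | (matched["difference"] != 0)]
print(exceptions)
```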

Unstructured External Data

The last data category – unstructured external data – can likewise be utilized for the audit. This category of data follows a completely different life cycle path, can be created ‘by the many’ and ‘for the many’, comes from data sources normally irrelevant to the financial audit (e.g., radio and TV) and in some cases does not even have a clear source of the data (e.g., social media).

To perform data analytics on this category of data, an auditor needs to be equipped with more than just basic analytical skills to analyze, process and assess the usability of this data as part of the (financial) audit. He needs to invest in (advanced) tooling to retrieve and process the data, he needs to develop analytical skills to analyze and interpret the data and he needs to understand the impact of this data on the audit as well as his profession.

The current technology or the availability of adequate tooling to analyze the data is not primarily the issue. Social media or news analysis solutions (e.g., Bottlenose, Owlin) are already available in the marketplace. The challenge rather lies in convincing the auditor to use this kind of data and to use these insights for the purpose of meaningful information for the audit approach.

Example of unstructured data in the audit

A traditional financial audit starts with a risk assessment and planning phase at the beginning of the year. Risks are normally determined on the basis of last year's audit, the insights into the organization and the sector knowledge of the audit team. While this used to be sufficient to drive most risk assessments, the input was rather limited and static. By using social media data and news analytics, for example, the auditor can analyze more news and information streams with a view to understanding specific risks or trends for the company. Furthermore, this can be done not only at the beginning of the audit but on a continuous and automated basis, enabling the auditor to adapt audit procedures whenever necessary and to provide the organization with insights into its environment that it might not even be aware of itself.
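A rudimentary sketch of such continuous monitoring is given below, assuming a hypothetical feed of news headlines and a client-specific list of risk keywords; commercial solutions such as Bottlenose do this at a vastly larger scale and with far more sophisticated analytics:

```python
# Sketch: flag incoming news items that match client-specific risk keywords.
# The feed and keyword list are illustrative assumptions.
RISK_KEYWORDS = {
    "biological assets": ["forest", "wildfire", "protest", "occupation"],
    "going concern": ["bankruptcy", "downgrade", "covenant breach"],
}

def scan_headlines(headlines):
    for headline in headlines:
        text = headline.lower()
        for risk_area, keywords in RISK_KEYWORDS.items():
            hits = [kw for kw in keywords if kw in text]
            if hits:
                yield {"risk_area": risk_area, "keywords": hits, "headline": headline}

feed = [
    "Activists threaten occupation of production forest near protected area",
    "Central bank keeps interest rates unchanged",
]
for signal in scan_headlines(feed):
    print(signal)
```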

Example of an external unstructured data analytics solution: Bottlenose

Bottlenose is a web-based unstructured data analytics solution that uses complex algorithms to analyze large amounts of streaming data, ranging from Twitter and online news feeds to radio and TV broadcasts. On a daily basis, Bottlenose analyzes over 72 billion data records. Such an enormous amount of data can never be analyzed by one person; Bottlenose therefore provides unique opportunities in a (financial) audit setting.

Let us return to the earlier example of the retailer who owns his own production forest. By using an online, live stream analytics solution such as Bottlenose, the organization as well as the auditor can immediately spot any potential threat to its biological assets once a message has been posted online (and goes viral). Rather than waiting for the actual damage to be done and identifying the impact on the valuation of the assets too late, the organization can immediately pick up on this development and act upon the potential impact.

C-2015-3-Veld-02

Figure 2. Bottlenose.

Impact on the Financial Audit

All the new and improved capabilities and possibilities of data analytics discussed in the previous section will have a significant impact on financial statement audits and open the door to further audit innovations. When determining the potential impact of the aforementioned data categories on the innovation of the audit approach, we can cluster it into the following dimensions of improved quality:

  • Quality of execution
  • Quality of insights
  • Quality of opinion

In this section we describe the audit innovations in all three dimensions.

Quality of Execution – Towards a Dynamic, Tailor-Made Risk Audit Approach

Traditional financial statement audits are planned and performed following four sequential steps (see the Audit Methodology shown in Figure 3). Every audit starts with audit planning (including a risk assessment) and continues with controls and substantive testing, after which the audit findings are evaluated and the audit opinion is issued. The control evaluation is typically performed in the fall during the interim audit, while the substantive end-of-year testing procedures are performed after the balance sheet date.

C-2015-3-Veld-03

Figure 3. KPMG Audit Methodology.

The new capabilities and possibilities of data analytics can have a major impact on this traditional audit process. The availability of internal, structured transactional data has already had an impact on current financial statement audits. Among other things, it allows audit teams to switch from a control-based audit approach to a substantive audit approach with multiple testing periods throughout the year when testing large transactional processes such as purchase-to-pay or order-to-cash transactions. The more powerful the data analytics tooling, the better the financial statement auditor will be able to identify and follow up on potential errors in the analyzed data at an early stage. This allows financial statement auditors to tailor the audit approach specifically to the higher-risk transactions soon after a transaction has taken place, enabling a dynamic, tailor-made risk audit.

Quality of Insights – Towards an Enhanced Audit

In addition to audits of transactional data (internal structured data), the developments in data analytics methodology and tooling also enable auditors to use external and unstructured data in financial statement audits, for example to perform risk assessments and external benchmarking, to challenge the valuation of certain assets, to identify misconduct by employees of the audit client, and to perform anti-fraud procedures. As yet, this type of data is not common in the majority of current financial statement audits. Extending data analytics in the audit by using external and unstructured data will allow auditors to enhance audit insights and provide new and fresh perspectives for the audited companies and, where relevant, the users of the auditor's reports.

Quality of Opinion – Towards a More Skeptical Audit

Improved data analytics techniques allow auditors to change the audited sample of transactions and/or challenge the management's assertions in a different way. Regulators and users of audits are demanding professionally skeptical auditors, and data analytics enables external financial statement auditors to change their audit approach and challenge audited organizations further. The use of data analytics on internal transactional data enables auditors to audit the entire population of processed transactions instead of reviewing samples and/or internal controls over transaction processing. Exceptions and errors can easily be identified and followed up on the basis of the data analytics performed. Furthermore, incorporating data analytics on external and unstructured data in the audit will help auditors improve the quality of the opinion. External data can likewise be used to challenge the management's assertions: claims by the management with regard to asset valuations and anti-fraud-related issues are now typically supported by internal data, and can be challenged with the help of external data by applying the data analytics techniques and tooling mentioned in this article.

Transforming the Financial Audit into a Data Driven Audit

The auditor can only successfully transform the audit by using (advanced) analytics and combining all the different data categories in the audit. The risk of a material misstatement in the valuation of a company's assets can be determined better if risks residing outside the organization are also taken into account. The success of this transformation, however, does not depend on performing more activities within the framework of the financial audit; rather, it requires a change in how existing tooling and technology are utilized by the auditor to perform an effective and efficient audit. In other words, a change in the behavior of the auditor is more likely to be the critical success factor in this transformation. The tools that can drive this transformation are already available. The question is how, and to what purpose, the tooling is used in the audit, and how much time the auditor spends using data analytics in the audit.

When plotting the actual time spent by the auditor when applying data analytics techniques in the financial statement audit, it turns out that most of the time is in fact spent on analyzing routine transactions (e.g., journal entry testing, automated controls within the purchase-to-pay process etc.), using internal structured data (see Figure 4).

This change in the behavior of the auditor would require him to flip the time pyramid by actually spending more time on non-routine transactions and reducing the manual effort spent on test procedures for routine transactions. After all, if the organization can automate its routine processes altogether, why should the auditor not follow the same approach and automate the entire routine testing? An additional effect of this change is that it further unlocks the auditor's highest added value: auditing the non-routine rather than the routine transactions.

C-2015-3-Veld-04

Figure 4. Possible impact of a Data Driven Dynamic Audit Approach on the efforts of the auditor.

The auditor cannot flip the pyramid all by himself, however. Once the auditor has decided to change his approach into one in which he performs manual audit procedures only where he cannot (yet) use data analytics, he also needs a different ecosystem to be successful in applying data analytics to non-routine transactions. Non-routine transactions represent non-standardized processes, more often than not with a relatively high risk profile. Testing such transactions requires (costly) advanced data analytics, using tooling that is not yet available to all auditors. In addition, it would require significant investment for any audit firm to develop such tooling and to train auditors in applying such data analytics procedures. Designing and building tools that support advanced data analytics also requires extensive subject matter expertise and an understanding of how to perform analyses using automated tools and techniques. Such an ecosystem should combine external subject matter expertise (e.g., the academic world or specialist organizations such as WoodMac, referred to in the example above) with the right tools (e.g., partnerships with Google, Microsoft, IBM etc.) as well as the relevant experience of the audit firm itself and the auditor.

Conclusion

The possibility of using data in the audit is not limited to internal structured data, but also extends to external and even unstructured data. Advanced data analytics using unstructured external data has already proven to be a successful element for incorporating, for example, a continuous risk assessment in the audit, using the external point of view (e.g., benchmarking) rather than the internal point of view alone. We believe that these data categories should not be considered in isolation, but should be combined in an overall audit approach, providing valuable, data driven insights into the quality of opinion, the quality of insights and the quality of execution of the audit. In addition, audit firms recognize that they cannot do all of this by themselves, and are building ecosystems with third parties to jointly transform towards the data driven dynamic audit.

References

[AICP14] P. Byrnes, T. Criste, T. Stewart and M. Vasarheyli, Reimagining Auditing in a Wired World, AICPA 2014/8.

[Wiki15] Wikipedia, Unstructured data, https://en.wikipedia.org/wiki/Unstructured_data, 2015.