
Surviving extreme scenarios in a digital reality

Current measures for Business Continuity and Disaster Recovery are not fit for (future) purpose. Most of these measures focus on old-school natural disasters, physical threats, and “micro” operational IT issues. They are not prepared to deal with rapidly spreading, large-scale and destructive cyber disasters, such as virus outbreaks, something we have witnessed many times in recent years, while digital threats have only evolved faster and will very likely continue to do so. The world has changed in the face of digital transformations and hyper-connectivity. The approach to surviving these rapid, destructive, and wide-scale incidents (one could compare them to human pandemic-style events) needs to change as well.

Introduction

New problems often can’t be solved by old solutions. That’s why we need to rethink the concept of cybersecurity in a hyper-connected digital reality when it comes to surviving extreme scenarios, such as rapid spreading and destructive cyberattacks. Large multinational corporations that play in the proverbial Champions League of hyper-connected business need state-of-the-art preparations to optimize chances of survival when serious “shit hits the fan”. This goes (far) beyond conventional Business Continuity and Disaster Recovery. Taking this topic seriously requires unorthodox thinking and doing. Let’s step outside the box and prepare for survival.

For the sake of reasoning, let’s assume you are comfortable with the cyber capabilities of your organization. But what if your organization has to deal with a serious attack similar to, or worse than, the infamous NotPetya attack of 27 June 2017? That day, NotPetya halted Maersk’s global operations for multiple weeks because it destroyed nearly all servers and endpoints, costing the transport, logistics and energy conglomerate at least 300 million dollars in direct outage and recovery costs, let alone the indirect costs. This was surely not a standalone incident, nor the largest damage. Estimates put the combined damage in the range of 20 billion dollars at macro level, and up to 800 million dollars for a single pharmaceutical company in the USA.

Now here’s a question for you:

How long would it take you to shut down vital parts of your IT landscape to make sure that malware cannot spread any further and to make sure that – at least most of – the crown jewels of the company will be saved to jumpstart operations after the attack?

Would you be able to do so in a day? In a couple of hours perhaps? Half an hour? Would you be able to obtain all the mandated approvals and, if so, execute the corresponding actions in such a short time?

The confronting reality is that NotPetya distributed itself across entire organizations in less than 8 minutes, from East to West, from North to South. It spread much faster than was first noticed. As said, many others were affected that day; in hindsight, tens of thousands of companies were hit by the attack. The consequence: human control over these kinds of pandemic cyber events is simply no longer effective; we need to automate both our protection and our response capabilities.

This is one of the key arguments for a radically new approach. The key topic in this article is not how to build the most effective and efficient framework for cybersecurity; it’s what your organization is capable of in case of a destructive cyberattack.

One could use the analogy of a pandemic human virus to grasp the impact. When the coronavirus hit China, there was uncertainty about its impact, but at least most of us realized how impactful it could be. We should think about cyber incidents in the same way. The stakes are high in terms of losing trust and financial robustness when your infrastructure and data are gone for weeks (or even forever).

It’s clear that we need disruptive thinking to deal with the next wave of sophisticated cyber-attacks.

Anatomy of cyber security in the digital reality

Organizations operate in a so-called VUCA world, a Volatile, Uncertain, Complex and Ambiguous world. The capabilities to succeed in such a world have changed dramatically, and this is certainly the case in the domain of cybersecurity.

One of the challenges is that we live in a world of hyper-connected ecosystems. The Internet of Things grows at an astonishing pace. Everything connects with everything, and it doesn’t stop in the digital world; it is also strongly tied to our physical world. Physical infrastructures ranging from traffic lights, production facilities, trains and many others are not just physical assets: they have an underlying digital backbone. A growing interconnectedness of the world also causes exponential growth of the impact of breaches or failures – like a “ripple effect”.

Despite improvements in software security, system hardening, security awareness, and incident detection/response, the attack surface of large organizations is expanding. This is mainly due to system and service interdependencies and the increasing reliance of business on IT services, including cloud, business managed services (self-procured SaaS), and interconnections with multiple domains and supply chains.

Another important factor is that the effectiveness of attacks is enhanced and accelerated by increased specialization and toolchain availability, added to which are the advanced capabilities of organized cybercrime and nation states. An example: WannaCry utilized a weaponized NSA exploit that had been made publicly available by a hack or nation-state leak a few months earlier. The devastating results are still observable around the world.

To conclude, business reliance on digital assets grows day by day. A massively disruptive event threatens the sheer existence of an organization given its reliance on IT assets. Hardware- and firmware-level flaws, like Meltdown/Spectre, can affect a very extended set of assets (e.g. motherboards, hard drives, CPUs) and disturb a target base that can be as wide as the entire global infrastructure.

The brutal – and somewhat inconvenient – truth is that current measures for Business Continuity and Disaster Recovery are not fit for (future) purpose. Most of these measures are focused on “old-school” natural disasters, physical threats, and “micro” IT issues. They are not prepared to deal with large scale cyber disasters. Additionally, the world has changed dramatically in the face of digital transformations with Cloud, Ecosystems and hyper connectivity.

Enterprise-wide Business Continuity and Disaster Recovery has fallen off the radar. Leaders should put it back in focus, update it to match the current world, and keep an eye on the potential next wave of unpredictable cyberattacks. Attacks that will most likely be stronger, faster and deeper.

This is not about you?

Many organizations have stepped up efforts in the domain of Business Continuity and Disaster Recovery. They may argue that the aforementioned conclusions are not valid for them. Let’s briefly analyze if this holds true.

“We have active-active data centers in place.”

Many organizations have active-active clusters (mirroring), a set-up in which two or more hardware systems actively run identical platforms/systems/data at mirror sites simultaneously. However, this is by no means a guarantee of continued operations, as viruses spread across multiple mirrored systems in an instant; that is precisely what they are designed to do.

“We use backup tapes.”

Are you sure? Backup tapes are not that safe either. Not only does the sheer volume of digital activity make it practically impossible to rely on tapes, but in several of the larger hacks in history, the tapes that were recovered turned out to be already infected. It is often very hard to find a restore point at which the infection or the hacker is not yet present. In addition, most “tape” backups are nowadays stored on disks as part of the storage solution. Mirrored as well, of course, but that doesn’t help you.

“We have stringent contractual / service level agreement clauses.”

Legally, you may have covered it well in contracts with service providers. But how important is the paperwork when it comes to living up to those promises in a pandemic-scale event? When an entire region goes down, which organizations will have priority? In many cases, vital infrastructures like fuel supply, power, emergency services and banking will take priority over your urgent needs.

“We have data ownership.”

Data ownership is only relevant when partners are still in business. In case of large disasters, they could face bankruptcy or be granted suspension of payments. All assets may then be locked, and access denied. A horror story? A fact of life: it has already happened to several organizations that were the legitimate and legally confirmed owners of data but didn’t get access (not even via the courts).

Next level survival management

Research from recognized cyberthreat and intelligence organizations such as Verizon, Mandiant and ISF indicates that the next wave of cyberattacks will likely be more sophisticated. Attackers will be better equipped, more intelligent, and will recycle methods that were successfully applied earlier, a trend also recently reported in the Europol EC3 cyber update 2019 ([Euro19]). As it has been a while since the last large global cyberattack (2017, for instance), we should expect something big to happen. The typical question in the cyberworld is not if it will happen, but when and where.

It is a fact that there are no watertight solutions to prevent these attacks: whatever you do, perpetrators will always find ways. The least we can and should do is raise the stakes in survival management. This calls for disruptive thinking. Next, we present four “outside the box” thoughts on survival management.

1 Next level last resort preparations

Organizations need to have a clear view on what they actually need to restart operations after a disaster. In other words, they need to know their crown jewels and which parts of their digital ecosystem are vital for being and staying in business. This may be basic, but it is an often neglected topic. This is why it needs to be on the agenda of the board, top down; it should not be dealt with solely by IT.

Again, it should be noted that the analysis is meant to understand what is required to restart the company after a cyberattack. The intention is not to “save” everything; this is all about survival preparation, not “regular” business continuity.

With this top-down, business-driven analysis, organizations can step up efforts in multiple domains to increase the chances of survival.

One is the concept of Infrastructure as Code for vital parts (see box for a definition). We’ve witnessed a tremendous rise in the use of agile methods to develop new systems and applications in the past decade. However, the deployment phase is often still quite traditional and slow. Technically, it is feasible to deploy systems and applications on new infrastructures very quickly (to rebuild the company). Infrastructure as Code uses a descriptive model to do so. In the simplest analogy, one could say that it is like an apple pie recipe. You know the ingredients, the steps to be taken and the tools you need. Therefore, the best way to guarantee future apple pies is not to store the pies, but to keep the recipe safe.

Infrastructure as Code, or programmable infrastructure, means writing code (which can be done using a high-level language or any descriptive language) to manage configurations and automate provisioning of infrastructure in addition to deployments. This is not simply about writing scripts; it also involves using tested and proven software development practices that are already being used in application development, for example version control, testing, small deployments and the use of design patterns. In short, this means you write code to provision and manage your servers, in addition to automating processes.

Source: [Sita16]
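The recipe analogy can be sketched in code. The following Python fragment is a minimal, hypothetical illustration of the declarative idea behind Infrastructure as Code; the host names, package lists and `provision` function are invented for this sketch and do not belong to any real tool.

```python
# Minimal illustration of the "recipe" idea behind Infrastructure as Code:
# the desired state is described declaratively; a generic engine converges
# the actual state toward it. Names and structure here are hypothetical.

DESIRED_STATE = {
    "web-01": {"os": "linux", "packages": ["nginx"], "open_ports": [443]},
    "db-01": {"os": "linux", "packages": ["postgresql"], "open_ports": [5432]},
}

def provision(actual_state: dict, desired_state: dict) -> list:
    """Return the ordered actions needed to converge on the desired state."""
    actions = []
    for host, spec in desired_state.items():
        current = actual_state.get(host)
        if current is None:
            actions.append(f"create {host} ({spec['os']})")
            current = {"packages": [], "open_ports": []}
        for pkg in spec["packages"]:
            if pkg not in current["packages"]:
                actions.append(f"install {pkg} on {host}")
        for port in spec["open_ports"]:
            if port not in current["open_ports"]:
                actions.append(f"open port {port} on {host}")
    return actions

# Rebuilding from nothing ("after the disaster") is the same operation as an
# incremental update: the recipe, not the pie, is what is stored safely.
print(provision({}, DESIRED_STATE))
```

The point of the sketch: because the environment is described rather than stored, rebuilding from an empty state after a destructive attack is the same operation as a routine update.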

Another is the concept of Islands of Recovery. This concept is not about what is needed to guarantee a full-scale continuity of the business, but what is required in order to avoid shutting down businesses completely due to complete loss of infrastructure and data. As a plus, the Islands of Recovery can also support businesses at regional level during incidents. To achieve this, we need to carefully consider how these islands are connected and secured.

Examples of protection measures for the islands of recovery include:

  • Full segregation from the production environment;
  • Mono-directional traffic for system/data updates (e.g. via network diodes);
  • System hardening that “ignores” system functionality requirements: nobody changes the functioning of the survival mechanism, not even the business owners. Self-survival is key and established by the system itself (“think of it as a protection mechanism against humanity”);
  • Preservation of a reduced set of required users;
  • Preservation of “vital” data;
  • Redundancy of external service providers: what is normal in the “traditional” world – having multiple bank accounts or credit cards – should be the norm in the digital world as well;
  • Hardware/software and IT administrator diversity.

2 Rapid isolation of zones

In the physical world, segmentation is a proven concept to limit the impact of floods or other disasters. The digital world can learn from this by thinking in blast zones1: rapidly shutting down zones to prevent further spread of problems. Shutting down zones very fast would limit the impact of a problem in a very effective way. When a problem becomes manifest in a London office, local measures could be taken first. Should this be too late, the entire UK infrastructure could be taken down. Only if that measure also comes too late would we need to shut down other country infrastructures. One could compare it with isolating a faulty circuit. In the physical world, we wouldn’t accept that a problem in one of the circuits in a building causes a shutdown of the whole building. Likewise, we shouldn’t accept that a single problem in an IT system causes all software to collapse. Shutting down blast zones can prevent this from happening, when applied extremely fast and without hesitation. To be clear: only automated control by computers can achieve this.
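As a thought experiment, the escalating containment described above can be sketched as follows. The zone names and the escalation interval are purely illustrative; a real implementation would be wired into actual network controls and would decide in seconds, not minutes.

```python
# Hypothetical sketch of automated blast-zone escalation: when containment at
# one level fails within its time budget, the next, larger zone is shut down.
# Zone names and thresholds are illustrative, not a real topology.

ZONES = ["london-office", "uk", "europe", "global"]  # smallest to largest

def isolation_plan(first_detection_zone: str, seconds_since_detection: float,
                   escalation_interval: float = 30.0) -> list:
    """Zones to shut down now, given how long the incident has been spreading."""
    start = ZONES.index(first_detection_zone)
    # Every `escalation_interval` seconds without containment, widen the cut.
    levels = int(seconds_since_detection // escalation_interval)
    return ZONES[start:min(start + levels + 1, len(ZONES))]

print(isolation_plan("london-office", 10))  # contained locally
print(isolation_plan("london-office", 70))  # escalated two levels up
```

The essential property is that the escalation decision is a pure function of elapsed time and topology, so a machine can execute it without waiting for human approval.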

Furthermore, existing threat detection mechanisms have limitations in identifying threats in a timely manner (where timely is in the order of a few seconds, a minute at most on a global scale). In this domain, we can use the analogy of the canary in the coal mine to improve response actions. Miners would carry caged canaries down into the mine tunnels with them. Dangerous gases such as carbon monoxide or H2S would kill the canary before killing the miners, giving them time to escape via the tunnels and potentially drop barriers. In a similar approach, we can implement digital canaries in systems. If organizations compartmentalize resources at high speed (including the decision to do so; hence highly automated, machine-powered, AI-driven), they can protect themselves against serious issues. One could place digital canaries (a file, a process, etc.) on several servers and monitor whether these files or processes are accessed, modified or deleted (while they are normally never touched). By doing so, organizations will get early warnings on anomalies and, with that, early warnings on potential threats.
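A digital canary in its simplest form can be sketched with only the Python standard library: a file that no legitimate process should ever touch, whose change or disappearance is an early warning. The file path and the response hook are hypothetical; a production version would monitor continuously and trigger automated isolation.

```python
# A "digital canary": a file that no legitimate process should ever touch.
# Any change to its content, or its deletion, is an early warning sign.
import hashlib
import os
import tempfile

def plant_canary(path: str) -> str:
    """Create the canary file and return a baseline digest of its content."""
    with open(path, "wb") as f:
        f.write(b"canary-do-not-touch")
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def canary_tripped(path: str, baseline_digest: str) -> bool:
    """True if the canary was deleted or modified; a cue to start isolation."""
    if not os.path.exists(path):
        return True
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() != baseline_digest

# Demo in a temp directory: an untouched canary is quiet, a modified one trips.
with tempfile.TemporaryDirectory() as d:
    canary = os.path.join(d, "canary.dat")
    digest = plant_canary(canary)
    print(canary_tripped(canary, digest))  # False: all quiet
    with open(canary, "ab") as f:          # simulate ransomware touching it
        f.write(b"encrypted!")
    print(canary_tripped(canary, digest))  # True: sound the alarm
```

In practice, such checks would run every few seconds across many servers, and a tripped canary would feed directly into the automated blast-zone decision rather than into a human ticket queue.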

3 Radical testing

Organizations have often implemented a multitude of measures to deal with disturbances of systems. However, when it comes to testing whether the fall-back scenarios and other measures actually work, they are very cautious, and only controlled tests are performed (if at all, as many organizations do predominantly paper-based exercises once a year, as a compliance tick mark). This must change. The automated shutdown of random servers and services, at any time, should be the new normal to test whether measures are fit for purpose. Organizations should not be afraid of this type of vigorous testing, even if it upsets business managers. It’s a matter of being bold and just doing it without any dialogue. Only by experimenting with disturbed (distributed) systems can we build confidence in the infrastructure’s robustness to withstand turbulent conditions.
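This kind of automated, random shutdown is the idea popularized by chaos-engineering tools such as Netflix’s Chaos Monkey. The sketch below is a hypothetical illustration of the core loop; the service names and the exempt set are invented for this example, and a real setup would stop actual processes or instances rather than manipulate a set.

```python
# Chaos-style sketch: at a random moment, kill a random non-exempt service
# and verify that the rest of the system keeps serving. Hypothetical names.
import random

SERVICES = {"web", "api", "billing", "reporting"}
EXEMPT = {"billing"}  # even radical testing may shield true crown jewels

def pick_victim(services: set, exempt: set, rng: random.Random) -> str:
    """Choose a random service that is allowed to be shut down."""
    candidates = sorted(services - exempt)
    return rng.choice(candidates)

def run_experiment(rng: random.Random) -> dict:
    victim = pick_victim(SERVICES, EXEMPT, rng)
    surviving = SERVICES - {victim}
    # The experiment passes only if the remaining services keep serving
    # traffic; in a real tool this would be verified via health checks.
    return {"killed": victim, "healthy": surviving}

result = run_experiment(random.Random(42))
print(result["killed"], "was shut down; survivors:", sorted(result["healthy"]))
```

The design choice worth noting is the exempt set: radical testing does not mean reckless testing, and the survival-critical islands described earlier would typically be excluded from random termination.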

4 Don’t forget the offline world

Storing vital data on paper? Of course, in most cases this is not a viable option given the volume of data, but it still happens as a backup measure. Recall, however, that we are talking about the minimal needs for survival; organizations should therefore consider whether there is a minimum set of information that should be kept in safe zones on paper. This information could be the bare minimum to get operations back in business when disaster strikes. Its nature strongly depends on the organization and should be tailored to its operations. It may sound like an extremely simple and old-fashioned measure, but it is nonetheless often overlooked and indispensable.

Conclusion

Experts agree that the next wave of cyberattacks will be unprecedented in sophistication. This means that current measures will probably not be able to preserve continued business or even minimal operations. For too long, organizations, frequently led by their advisors, have relied on old-fashioned BC/DR measures.

It’s actually very probable that malware is already present behind the scenes and will “ignite” in the (near) future. Leaders should be aware and open-minded enough to rethink how they deal with cyber disasters. Extreme response speed and vigor are essential. Some organizations – mostly the generation that was “digitally born in the cloud” – excel in this area. Others can’t keep up, and some already point out that their infrastructure is not suited to this new era. This may be an understandable plea. But it can never be an excuse.

Notes

  1. Also known as “hazardous areas”.

References

[Euro19] European Cybercrime Centre (EC3) (2019). Internet Organised Crime Threat Assessment (IOCTA) 2019. Retrieved from: https://www.europol.europa.eu/iocta-report

[Sita16] Sitakange, J. (2016, March 14). Infrastructure as Code: A Reason to Smile. Retrieved from:  https://www.thoughtworks.com/insights/blog/infrastructure-code-reason-smile

Risk/control framework related to the security of mobile business applications

There is an increasing number of business applications on corporate mobile phones these days. There are threats to consider and risks involved in releasing corporate financial or other types of sensitive information on the go. Enterprises are responsible for mobile application and device security in their day-to-day business. In case of a loss of sensitive organizational data, management is held accountable. This article explores the risks surrounding the use of mobile business applications by employees and the measures organizations can take in order to become and stay in control.

Introduction

Organizations across the globe are developing or using mobile applications in order to increase employee productivity. Mobile technology fulfils an increasingly important role within business processes. Users have long been familiar with mobile access and apps that retrieve real-time information anywhere, at any time. It is therefore no surprise that more organizations are providing access to financial and customer information via mobile devices.

As a relatively new way of working, this may pose additional challenges (e.g. lost/stolen mobile devices, mobile security) and consequently result in security risks. In addition, existing challenges such as malware and limited employee security awareness still apply and cannot be neglected. Therefore, a strong focus on mobile application security and on the accountability of management for the underlying risks is required. When identified risks are not addressed sufficiently through detailed risk assessments and the implementation of a coherent set of security measures, sensitive financial data may become compromised. Using mobile applications in organizations raises the question whether additional measures – in comparison to regular online business applications – are needed to cover the security risks related to mobile business applications.

Recognizing these risks, organizations will become more aware of their responsibility for mobile application and device security. A risk/control framework will help organizations to guide their approach to control all risks involved and to be able to continuously test and update the security posture. This article focuses on securing mobile business solutions by determining which risks exist and how these could be controlled.

Relevance of securing mobile business applications

Mobile devices have evolved from merely providing access to enterprise e-mail to removing logistical process delays and tracking business transactions at any time through the use of mobile business applications. For example, these applications provide the organization with the ability to review, approve or reject purchase requisitions and supplier invoices, continuously follow its order backlog and keep track of its Order-to-Cash and Purchase-to-Pay process performance trends in real time. Mobile applications are rapidly becoming the primary – and maybe even the only – communication channel for customers and employees, and the applications are therefore expected to be secure and to protect privacy ([Ham17]). Despite numerous positive and useful features and benefits, the use of mobile applications in organizations also poses certain threats. In the context of this article, we need to map the (f)actors involved in using mobile business applications in order to identify potential threats. The actors identified are shown in Figure 1: the user, the mobile device, the mobile application, the internet, the firewall, the corporate network, the webserver, and the database.

Figure 1. Actors involved in mobile business applications (highly simplified).

The main threats taking the involved actors into account are discussed next.

Unprotected applications or networks

When network gateways are placed at easily accessible locations and contain unpatched vulnerabilities, hackers may be able to take control of them and intercept communications ([Enis17]). This could lead to severe data privacy violations under the applicable GDPR legislation. Incidents may occur in which sensitive organizational data is lost, which would have a significant negative impact on an organization’s reputation, leading to financial damage. The implementation of bring-your-own-device (BYOD) policies at organizations acts as a complicating factor, as personal devices have diverse, potentially interfering applications installed.

Mobile security unawareness

Employees are considered to be the weakest link in information security ([Line07]). Limited awareness among employees regarding the security of mobile devices and the classification of data stored on devices and in applications can bring about security incidents, for example when sensitive organizational data is sent to personal, non-managed devices by employees trying to meet a deadline. Most employees view security as a hindrance to their productivity. If a way is found to work around security measures, there is a high chance that it will be used.

(Un)intended employee behavior

Employee behavior can have a substantial detrimental impact on the security of mobile business applications. When employees are insufficiently aware of the risks involved, their mobile activities could undermine the technical controls that organizations have in place to protect their data. Incidents to consider include malicious employees whose contracts have been terminated but who still have access to corporate applications on their mobile devices, or cases where a mobile device is lost or stolen ([ISAC10]).

Access by external parties

External parties who have access to the network and systems of the organization are difficult to manage and monitor. Third parties are a known source of significant security breaches and a target for hackers (as a steppingstone), introducing more vulnerabilities and an increased risk to the corporate environment. These external parties usually require highly privileged access to infrastructure and systems, and the impact is therefore high if those privileges are misused or compromised.

Risks related to the security of mobile business applications

The growing use of mobile devices within organizations has increased the threat level of IT security breaches, misuse of sensitive data and – as a result – reputational damage. Therefore, it is imperative that mobile business applications are subject to periodic audits performed by IT auditors.

In order to effectively perform such an IT audit, it is important to start with a risk assessment and to leverage a control framework designated for mobile business applications. To perform an IT audit effectively, the scope should only include the most relevant actors where organizations are able to influence and mitigate the risks involved. This leads to the following audit objects which are in scope for the designed risk/control framework:

  • User
  • Mobile device
  • Mobile (business) application
  • Corporate network

Considering the objects in scope and the threats identified in the previous section, a risk assessment is performed in which the risks deemed relevant to mobile business applications are evaluated. These risks should be controlled by organizations when using mobile business applications as part of their operational processes and will serve as a basis for identifying related mitigating controls. The audit objects, the risks and the controls are combined in a risk/control framework which is included in this article.

Risk/control framework mobile business applications

The risk/control framework has been established by analyzing theory, identifying relevant objects and performing a risk assessment. The interlinkage of the appropriate controls, the risks, the control objectives and the relevant audit objects is depicted in Figure 2. The control objectives are desired conditions for an audit object which, if achieved, minimize the potential that the identified risk will occur. Hence, the control objectives are linked to the identified audit objects and are subject to the risks that threaten the effectiveness of the control objectives. The controls should mitigate the security risks to which mobile business applications are exposed.

Figure 2. Establishing the risk/control framework.

Resulting from the process described above, Table 1 shows the risk/control framework that has been established.

Table 1. Risk/control framework mobile business applications.

Control objectives of the risk/control framework explained

The risk/control framework can be used by organizations in order to continuously test and update technical measures related to the usage of mobile business applications in order to safeguard the organization and its sensitive data.

Controls related to the technical setup of the mobile business application and a secure application life cycle management, control objective 1, are vital and need to be taken into account. These controls focus on tracking the risk of an application as it moves through the development lifecycle. Binary hardening, input validation and encryption complement a secure development environment to prevent vulnerabilities and unintended leakage of data from unprotected applications. These controls are even more relevant for mobile business applications as mobile devices are taken everywhere and most likely contain sensitive business data.

Securing the mobile business application by means of sufficient authentication and authorization configuration, control objectives 2, 3 and 4, is relevant for mobile business applications, but is highly dependent on how the IT infrastructure and the mobile environment have been set up. Even though part of the authentication logic may be performed by the back-end server, it is important to consider it an integral part of the mobile architecture. Plainly, authentication and authorization problems are prevalent security vulnerabilities and rank second in the OWASP Top 10 ([OWAS17]); they are therefore included in the established risk/control framework.

In order to prevent unauthorized access and unintended leakage of corporate data, it is also vital to incorporate the right technical security measures to protect the mobile device (e.g. anti-virus, patch and vulnerability management, separation of corporate and private environments, and data-sharing restrictions). The controls related to these technical measures, control objectives 6, 7 and 10, are a very important part of the risk/control framework: they trigger organizations to rethink their mobile architecture strategy, to be continuously aware of potential vulnerabilities and to implement and monitor the security measures.

In order to steer organizations in their strategy to control their mobile environment, controls related to Mobile Device Management (MDM) solutions are also incorporated. An MDM solution, control objective 8, will provide input to secure, manage and monitor corporate-owned devices that access sensitive corporate data.

Cryptographic controls on mobile devices are almost indispensable. As mobile devices may contain sensitive corporate data, a secure manner of protecting data using cryptographic techniques is required; control objective 9. Cryptographic controls will provide added value to organizations for protecting their mobile data.
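As a hedged illustration of one such cryptographic control, the sketch below derives a data-protection key from a passphrase with PBKDF2 and adds an HMAC integrity tag, using only the Python standard library. A real mobile deployment would rely on the platform keystore and authenticated encryption; the parameters and record format here are purely illustrative.

```python
# Illustrative sketch of two cryptographic controls: key derivation (PBKDF2)
# and integrity protection (HMAC). Parameters and data are hypothetical.
import hashlib
import hmac
import secrets

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 32-byte key from a passphrase using PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

def protect(record: bytes, key: bytes) -> bytes:
    """Append an HMAC tag so tampering with stored data is detectable."""
    return record + hmac.new(key, record, hashlib.sha256).digest()

def verify(blob: bytes, key: bytes) -> bool:
    """Check the trailing 32-byte tag in constant time."""
    record, tag = blob[:-32], blob[-32:]
    return hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).digest())

salt = secrets.token_bytes(16)
key = derive_key("correct horse battery staple", salt)
blob = protect(b"customer-id=4711;limit=50000", key)
print(verify(blob, key))               # True: record intact
print(verify(b"X" + blob[1:], key))    # False: record tampered with
```

The slow key derivation makes brute-forcing a stolen device’s data expensive, while the integrity tag ensures that silent modification of stored records is detected before the data is trusted.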

Mobile devices should be safeguarded from external threats. Therefore, when connecting to the corporate network, stringent security standards should be applied to protect corporate data; control objectives 5 and 11. Controls related to these security standards will guide the organization in how to continuously monitor and test the security of connectivity between mobile devices and the corporate network.
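One concrete example of such a connectivity standard is client-side enforcement of a minimum TLS version and mandatory certificate verification. The sketch below uses Python’s standard `ssl` module and is illustrative only; actual mobile platforms enforce this through their native network security configuration, and no connection is made here.

```python
# Sketch of a connectivity control: enforce a minimum TLS version and
# certificate/hostname verification before talking to the corporate network.
import ssl

def corporate_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()   # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

ctx = corporate_tls_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

A context like this would be passed to the HTTP client used by the mobile back-end connector, so that any downgrade attempt or invalid certificate causes the connection to fail closed.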

A well-known saying in the world of IT security is that people are the weakest link in cyber security. We would like to reframe that as: people are the most important link in the cyber security chain. The risk/control framework incorporates “soft controls” – control objectives 12 and 13 – related to the users (people aspect) of mobile devices.

In conclusion, the risk/control framework consists of several sections that relate to:

  • secure development;
  • technical setup;
  • authentication & authorizations;
  • cryptography;
  • session handling;
  • use of MDM;
  • data security;
  • connections with the corporate network;
  • user awareness;
  • device/asset management.

Relevance of implementing a mobile security risk/control framework

In today’s world full of IT business transformations, it is important to implement controls for securing corporate data on mobile devices. Mobile business applications will continue to gather momentum in the coming years, even though the technical security level achieved will remain a concern. The risk/control framework as developed can be used in practice as a reference framework. As the mobile environment is rapidly evolving, there is a need to continuously test the security level and control the risks identified. Although there is an abundance of literature detailing specific technical security measures to be configured or implemented, these measures are usually not incorporated in (existing) risk/control frameworks. This framework keeps abreast of new technology trends.

The aim of this article is to trigger organizations to think about these controls, to adjust this framework to make it applicable to their (mobile) IT environment, and to incorporate these controls in their existing control frameworks, which are most likely focused on regular ERP systems or IT General Controls. As businesses and IT information chains have grown more sophisticated and complex, mobile applications have become more prevalent at numerous large organizations.

Looking forward

As the Dutch National Cyber Security Centre states in its research on mobile applications ([NCSC18]), organizations are using these applications more and more in their daily business activities. Meanwhile, the differences between the backends of mobile phones, tablets, laptops and desktops are diminishing. In the near future, the IT auditor will need to focus on end-to-end device and mobile security, which is not yet fully integrated in risk/control or audit frameworks, nor taught in post-master IT auditing programs.

Conclusion

This article is aimed at showing the need for, and providing, a risk/control framework. The risk/control framework we developed and validated can be used by organizations to continuously test and update technical measures in and around mobile business applications, safeguarding the organization and its sensitive data. There are a number of significant controls an organization should have in place in order to securely use mobile business applications. This article demonstrates that several mobile application areas need to be addressed using security controls: secure development, technical setup, authentication & authorizations, cryptography, session handling, using MDM, data security, corporate network connectivity, user awareness and device / asset management. We do emphasize that how organizations control their mobile environment also depends on several factors. It is critical to perform an extensive risk assessment that addresses business as well as technical aspects to identify the organization’s IT maturity and risk appetite and how the mobile infrastructure has been designed and configured. The risk/control framework can be adjusted according to this assessment by verifying which controls are applicable to the organization.

References

[Enis17] Enisa (2017). Privacy and data protection in mobile applications. Retrieved on June 22, 2018, from: https://www.enisa.europa.eu/publications/privacy-and-data-protection-in-mobile-applications

[Ham17] Ham, D. van, Iterson, P. van & Galen, R. van (2017). Mobile Landscape Security, Addressing security and privacy challenges in your mobile landscape. KPMG.

[ISAC10] ISACA (2010). Securing Mobile Devices.

[Line07] Lineberry, S. (2007). The Human Element: The Weakest Link in Information Security. Journal of Accountancy, 24(5), 44-46, 49.

[NCSC18] National Cyber Security Center (NCSC-NL) (2018). IT Security Guidelines for Mobile Apps. Den Haag.

[OWAS17] OWASP (2017). OWASP Top Ten. Retrieved in June 2018, from: https://owasp.org/www-project-top-ten/

Coronavirus and Cyber Security

By now it is clear that COVID-19 has a significant impact on countries, organisations and citizens around the world. Each country is implementing its own response, with The Netherlands choosing to minimise social contact (social distancing), work from home as much as possible and close most public establishments. This could very well impact the cyber security position of organisations.

For many people, social distancing is not an easy task, having to juggle home and work responsibilities, exacerbated for many by the fact that schools and daycare centres are also closed. On top of that, organisations need to be aware of the heightened cyber security risks related to remote working in response to COVID-19. We currently see six threats we want organisations to be aware of related to remote working in these times.

Cyber security considerations

CEO fraud exploiting social distancing. CEO fraud involves e-mails or phone calls trying to persuade the receiver to transfer corporate funds to other bank accounts. The requestor claims to be the CEO or another senior company figure who is under intense time pressure to get an important payment through. Usually such frauds are detected because the receivers check with their colleagues whether these communications can be trusted. Now that everyone is working from home, we expect these checks to be less solid. Advise staff with access to corporate bank accounts to keep adhering to the four-eyes principle for money transfers, and motivate them to follow the incident management process and escalate irregular communications.

Insecure remote connections to the office. Not all organisations are technically prepared to offer (mass) remote working options. IT staff under time pressure might not acquire and offer the most secure solutions. We highly encourage the use of at least two-factor authentication for access to company data, along with secure and solid cloud solutions for collaboration where possible. For collaboration, several companies are temporarily offering their solutions for free (including Microsoft Teams, LogMeIn Emergency Remote Work Kit, Cisco Webex and more).
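Most two-factor solutions for remote access build on the TOTP standard (RFC 6238), which derives a short-lived code from a shared secret and the current time window. As an illustrative sketch (not tied to any particular vendor product), the core algorithm fits in a few lines of Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP applied to the current 30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 Appendix B test vector: at t = 59s, the 8-digit SHA-1 code is 94287082
assert totp(b"12345678901234567890", for_time=59, digits=8) == "94287082"
```

The assertion checks the code against a published RFC 6238 test vector; real deployments should of course rely on a vetted authentication product rather than hand-rolled code.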

Increased personal use of company devices. When working from home, employees may be tempted to use their company equipment (e.g. laptops or phones) for personal purposes. This may increase the risk of these devices being infected with a virus or malware when visiting less secure (personal interest-related) websites. Lately, adverts on such websites in particular have been known to spread malware. We recommend updating company devices automatically, following the advice of the software vendor. We especially recommend updating browsers and related third-party software (e.g. PDF readers, Flash Player and Java).

Employees under financial stress or job uncertainty may pose an insider threat. With the current economic uncertainty caused by the COVID-19 measures, employees under financial stress or in danger of losing their jobs might become insider threats. Foreign agents or competitors from high-risk countries have been known to exploit such circumstances (e.g. economic uncertainty) when approaching potential victims, for instance to steal key corporate data. We advise being transparent in your communication, monitoring staff well-being closely (remotely), picking up on ‘cries for help’, and continuing to spot concerning behavior and meet it with a solid organizational response (attention, understanding and action). Very good material on the insider threat is the “Critical Path to Insider Risk”.

Confidentiality at home. While working from home, not everyone within earshot is vetted to hear or see confidential information. Children, spouses or roommates may not be aware of the confidentiality level of the information they hear or see. We advise having staff work in separate rooms as much as possible, and requesting that staff take calls using headsets instead of a speakerphone.

Phishing attempts specifically related to COVID-19. Since mid-February, we have seen, just like colleagues in our global Cyber network, the rapid build-out of infrastructure by cybercriminals used to launch COVID-19 themed spear-phishing attacks and to lure targets to fake websites seeking to collect Office 365 credentials. Examples of campaigns mounted include:

  • COVID-19 themed phishing emails attaching malicious Microsoft documents which exploit a known Microsoft vulnerability to run malicious code
  • COVID-19 themed phishing emails attaching macro-enabled Microsoft Word documents containing health information which trigger the download of Emotet or Trickbot malware
  • Multiple phishing emails luring target users to fake copies of the website of the Centers for Disease Control and Prevention (CDC), which solicit user credentials and passwords (or comparable websites in other countries)
  • A selection of phony customer advisories purporting to provide customers with updates on service disruption due to COVID-19 and leading to malware download
  • Phishing emails purporting to come from various government Ministries of Health or the World Health Organization directing precautionary measures, again embedding malware
  • COVID-19 tax rebate phishing lures encouraging recipients to browse to a fake website that collects financial and tax information from unsuspecting users.

There are some key steps you should take to reduce the risk to your organization and your employees, particularly as you move to remote working:

  • Raise awareness amongst your team, warning them of the heightened risk of COVID-19 themed phishing attacks.
  • Share definitive sources of advice on how to stay safe and provide regular communications on the approach your organization is taking to the COVID-19 pandemic.
  • Make sure you set up strong passwords, and preferably two-factor authentication, for all remote access accounts; particularly for Office 365 access.
  • Provide remote workers with straightforward guidance on how to use remote working solutions including how to make sure they remain secure and tips on the identification of phishing.
  • Ensure that all provided laptops have up-to-date anti-virus and firewall software.
  • Run a helpline or online chat line which employees can easily access for advice, or to report any security concerns, including potential phishing.
  • Encrypt data at rest on laptops used for remote working given the risk of theft.
  • Disable USB drives to avoid the risk of malware, offering employees an alternate way of transferring data such as a collaboration tool.
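To support the guidance on identifying phishing, mail-filtering tooling often flags lookalike domains that mimic trusted ones. The sketch below is a rough illustration only; the trusted-domain list and the 0.8 threshold are assumptions for the example, not recommended production values:

```python
import difflib

# Hypothetical allow-list; in practice, use your organization's own domains
# and the official sites you direct staff to.
TRUSTED = ["microsoft.com", "who.int", "cdc.gov", "rijksoverheid.nl"]

def closest_trusted(domain):
    """Return the most similar trusted domain and its similarity ratio."""
    best = max(TRUSTED, key=lambda t: difflib.SequenceMatcher(None, domain, t).ratio())
    return best, difflib.SequenceMatcher(None, domain, best).ratio()

def is_suspicious(domain, threshold=0.8):
    """Flag domains that look almost, but not exactly, like a trusted one."""
    best, score = closest_trusted(domain)
    return domain != best and score >= threshold

assert is_suspicious("rnicrosoft.com")    # 'rn' mimics 'm' in a typosquat
assert not is_suspicious("microsoft.com") # exact match is fine
```

A similarity ratio is only one weak signal; production mail filters combine it with domain age, reputation feeds and authentication checks (SPF, DKIM, DMARC).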

The risks of open-source software for corporate use

This article analyzes the origin of the open-source software (OSS) movement, how it relates to the ongoing trends in the enterprise and open source worlds, as well as the corresponding risks. Based on these inherent specificities, this article subsequently lays the foundations to control risks related to the use and contribution to open source without reducing its business potential.

Introduction

The acquisition of Red Hat by IBM last year, following the acquisition of GitHub by Microsoft in 2018, demonstrates that professional-grade open-source software is no longer a utopia. Moreover, the rise of DevOps in corporate IT and the constant need to shorten the time-to-market for new digital products have increased the temptation for product owners and development teams to use freely available code without prior analysis or validation in order to improve delivery Key Performance Indicators. The question is: is this practice safe for your security and compliance program? And if it’s not, what controls could be applied to your product teams to mitigate the risks?

In this article, we will first present the origin, rise, and ideology that drive the OSS community. Second, we will explain some pitfalls of corporate open-sourcing, both as a code user and as a code producer, followed by some controls and best practices aimed at keeping a healthy open-source ecosystem.

From the FOSS to OSS community, a brief history

The emergence of free software started in the 1970s when Richard Stallman, a staff programmer at the MIT Artificial Intelligence Lab, modified the source code of the department printer to send error notifications when bugs occurred. After the printer was replaced by a new model, Stallman found that its source code wasn’t accessible, so he requested it from the printer company, which refused to provide it. Quite upset with this situation, he decided to quit his job and start working on an open software ecosystem called GNU in 1983. The name GNU is a recursive acronym for “GNU’s Not Unix”, mostly chosen because it was a real word and fun to say ([FSF17]). This date marks the creation of the Free Software Movement, which later evolved into the Free Software Foundation (FSF). Back then, the approach to open source was strongly oriented towards an interpretation of software liberty, captured in the famous quote from Stallman: “‘Free software’ is a matter of liberty, not price. You should think of ‘free’ as in ‘free speech,’ not as in ‘free beer.'” ([FSF19]). This origin of the Free & Open-Source Software (FOSS) community still explains today why GNU open source licenses are the more restrictive ones for corporations, as they require the release of any source code developed from GNU-licensed code.

In the 1980s, almost all software was proprietary, and one of the main goals of the FSF was to create the first truly free operating system. By the early 1990s, the GNU project had most of the major components of a free operating system, such as a compiler, editor, text formatter, mail software, graphical interface, libraries, etc. Nonetheless, the last missing piece of this ecosystem was the kernel, the core of the operating system. It was only in 1991 that Linus Torvalds released this missing piece with the open-source project Linux; the complete Linux operating system incorporates many elements from Stallman’s GNU project. While Linux is probably the world’s largest and most successful open-source project in history, it’s perhaps thanks to Torvalds’ second-biggest open-source project, Git – created in 2005 to help developers collaborate on the Linux kernel source code – that the open-source world could start to take over the proprietary world. The main reason behind Git’s success in open-source projects is that it doesn’t need to be continuously synchronized with a central code repository. This enabled a large number of developers to collaborate in a decentralized and asynchronous manner. However, it was GitHub that, in 2008, established Git as the standard for open-source collaboration by giving it a web interface and a social dimension, surpassing legacy client-server version control systems such as CVS and SVN.

Ongoing trends

Since then, IT has continued to evolve and become more complex. The ease of reusing code and collaborating brought by public repository-sharing platforms such as GitHub, GitLab, or even DockerHub, gave a boost to Information Technology development.

Table 1. Top 5 GitHub enterprise domain name contributors according to the number of active users.

Table 1 is derived from the research in [Hoff18]; it shows the main companies that contributed to open-source projects on GitHub in 2018. We clearly see that large Silicon Valley tech companies continue to be the driving force. Still, many companies outside the IT industry have started to contribute heavily to the open-source world, such as Walmart, Nike, or Disney. Contributing to open source may at first seem counterintuitive as a way to grow a company’s business. Indeed, it goes against the ITIL concept of developing internal software and services to sell them externally. Yet, having your company use and maintain open-source software can have several advantages, in particular the following four aspects.

  1. Innovation: By being open-sourced, a lot of software is easier and faster to test and customize to company needs without the constraint of a presale agreement. For this reason, relying on open source software often means being at the edge of innovation.
  2. Community: Some open-source software such as Kubernetes or Prometheus benefits from the support of an active community of developers spanning across borders and organizations. 
  3. Freedom: By adopting an open-source standard, there is a reduced risk of a vendor lock-in as the software will be less subject to an EOL (“End Of Life”) support situation as in a traditional software business model. Moreover, community members are often keen to look for an optimal alternative when a maintainer drops its support.
  4. Brand: The combination of the three previous bullets can improve the image of a company to make it more attractive for newcomers. This is further discussed below.

As stated in the introduction, the above specificities of open-source software allow development teams to test and debug their prototypes faster without losing time and money on a presale agreement. It is therefore often a no-brainer for companies to favor reliance on open-source components for cost reasons. But what is the additional gain of releasing internal software to the public? Some could attribute it to Linus’ law, or the wisdom of the crowd, which would make open software more secure because it is easier to scrutinize. It could also mean that security vulnerabilities are easier to spot by malicious parties whose best interest isn’t to fix the code but to keep it vulnerable as long as possible. A notable example of such a malicious third party may be the American National Security Agency itself, which maintained a significant database of exploits until they were leaked in 2017 by a hacking group called the Shadow Brokers.

While the 21st century is the century of information technologies, the most significant advantage for companies of becoming open-source maintainers is cultivating a vibrant technical brand. As stated earlier, a missed digital transformation may lead to technical debt and a slow market death for the organization. It is therefore crucial to be able to attract the talent capable of developing new products and leading the innovation path. Nonetheless, there is a shortage of such profiles on the market nowadays, resulting in a highly competitive environment for businesses. Given this context, creating a robust technical brand by maintaining successful open-source projects and being active in community events (blogs, talks, meetups, conferences, …) is a determining factor. A more recent argument for open-sourcing activities that has started to appear in the technology sector is promoting open source as a way to give back to society. Open sourcing can, indeed, be seen as a moral obligation to publish code built on the work of others.

Risks related to open source

As discussed in the previous section, open source can bring many benefits to your enterprise. However, in the following section, we will focus on the possible downsides and risks of open source for your company’s business. Risks associated with open source tend to fall into four different categories: technical, governance, legal, and contributing.

On the technical layer, three main risk factors can be identified:

  1. Code vulnerability: While larger open-source projects may have full-time sponsored developers to track reported issues and bring in new functionality, some community-driven repositories are side projects, maintained on a best-effort basis by a single developer in his or her free time. As a result, security issues and vulnerabilities may remain unaddressed for a longer period, even after being the object of a CVE (Common Vulnerabilities and Exposures) entry, than in a traditional commercial solution where the software vendor has committed to a security Service Level Agreement.
  2. Technology support: It should also be noted that open-source software mostly comes without enterprise-grade support, and sometimes without coding and testing standards. Accordingly, it becomes the responsibility of your organization to provide additional engineering FTEs (Full-Time Equivalents) for supervision and troubleshooting. For instance, if the code doesn’t completely fit your use case, it will be the responsibility of your engineers to make the modifications that fit it into your product stack, and to find solutions to any bugs in the meantime. This has led many non-tech-focused institutions to rely on third-party consulting companies, which is one of the primary sources of income for open-source-based businesses.
  3. Distribution: Another common security hole of open-source software is the distribution channel. If the code is published as a binary, most IT teams won’t undertake further verification than checking whether the provided hashes match the binary. Yet the binary and hashes often come from the same source and can both be compromised by an attacker. This happened in recent years when several Linux distribution repositories were hacked (Mint, Gentoo, etc.).
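To illustrate the distribution point above: the hash check itself is trivial, and the real safeguard is obtaining the expected digest through a channel independent of the download mirror (e.g. signed release notes or a second official source). A minimal Python sketch, using as an example the SHA-256 digest of the bytes b"hello":

```python
import hashlib
import hmac

def verify_sha256(data, expected_hex):
    """Compare a download's SHA-256 digest against a hash obtained
    out-of-band (not from the same mirror as the binary itself)."""
    actual = hashlib.sha256(data).hexdigest()
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(actual, expected_hex.lower())

assert verify_sha256(
    b"hello",
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
)
```

A hash only proves integrity against accidental corruption or a compromised single channel; a stronger guarantee comes from verifying the maintainer's signature (e.g. GPG) on the release.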

The governance layer of an open-source project is often a high-risk factor. In the open source world, there is a high level of trust: community members rely on individual achievement rather than identity. A member with a sufficient track record on a project may become the primary maintainer while having a Gmail address as the only identification. This situation has been the cause of some of the greatest hacks of recent years, such as the compromise of an NPM library to steal cryptocurrency credentials ([Clab18]). In this case, only a small library was compromised, but the incident took on a broader dimension because many other libraries depended on it, resulting in two million downloads a week. As maintainers sometimes have an unclear identity, the probability of them being impersonated by a malicious party who compromised their account is, consequently, real.

On the legal side, another risk that applies to both code consumer and code publisher companies, and which is connected to the risk related to contributing, is the choice of an open-source license. As the FSF was built on a sharp interpretation of the meaning of software liberty, as highlighted in the first section, GNU licenses tend to be the most restrictive regarding what one is allowed to do with the software.

However, most code based on open-source software ends up being closed source, making it difficult for code owners to prove a violation and force a commercial entity to reveal the code. A recent, somewhat comical example of a licensing issue is the lawsuit between Ubiquiti and Cambium ([Ging19]), where the two companies are suing each other over license violations while both have been found violating the GNU General Public License covering the underlying technology. To summarize, while the legal risks of open source are often more limited than with traditional software, failing to respect certain license terms may, in some cases, lead to copyright infringement procedures (see box).

As a reminder, Table 2 enumerates the properties of the six most common open-source licenses. As discussed earlier, GNU licenses are the most restrictive, as they require releasing the associated source code as well as the list of changes, though you are allowed to use the GNU trademark in the name of your project. The LGPLv3 license is a bit less restrictive, as you can keep your code closed source if you only use the open-source code as a library for your project. The Mozilla license is similar to the LGPL, except that you don’t have to state the changes made to the codebase and cannot use the Mozilla trademark. As a result, if you do not intend to commit to the open source community, the best strategy is to target projects licensed under the Apache or MIT license, as you have no obligation to disclose the source. Note that the MIT license is a bit more permissive than the Apache License 2.0. When unsure, it is usually considered best practice to go for a project under a less restrictive license, as the risk of license violation is then decreased. Some mitigations for this risk are proposed in the last section of this publication.

Table 2. The six most common open source licenses.
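Obligations like those summarized in Table 2 can also be encoded as data and checked automatically whenever a new dependency is introduced. The sketch below is a deliberate simplification of the actual license texts (and certainly not legal advice); the property names are our own shorthand:

```python
# Simplified, illustrative obligations per license family.
# "disclose_source": derived work must release its source code.
# "linking_exception": using the code only as a library lifts that obligation.
LICENSES = {
    "GPLv3":      {"disclose_source": True,  "linking_exception": False},
    "LGPLv3":     {"disclose_source": True,  "linking_exception": True},
    "MPL-2.0":    {"disclose_source": True,  "linking_exception": True},
    "Apache-2.0": {"disclose_source": False, "linking_exception": True},
    "MIT":        {"disclose_source": False, "linking_exception": True},
}

def can_stay_closed_source(license_id, used_as_library_only=False):
    """Rough check on whether a product using this component may stay closed."""
    terms = LICENSES[license_id]
    if not terms["disclose_source"]:
        return True
    return used_as_library_only and terms["linking_exception"]

assert can_stay_closed_source("MIT")
assert not can_stay_closed_source("GPLv3")
assert can_stay_closed_source("LGPLv3", used_as_library_only=True)
```

Such a lookup can be wired into a dependency-review pipeline so that copyleft components are flagged before they reach a commercial product; the final judgment should always rest with legal counsel.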

Regarding the risks associated with contributing to open source, we may observe that the open source business is divided into two distinct models.

  • On the one hand, traditional open core vendors, such as Red Hat, develop a free Core software, such as CentOS, on which they build proprietary extensions bundled with support and services, such as Red Hat Enterprise Linux (RHEL).
  • On the other hand, there are cloud-hosted services built with open source software. They include additional features such as cross-region replication, automated scalability, and monitoring dashboards. A cloud-hosted service may be sold as Software as a Service (SaaS) or Platform as a Service (PaaS) by the Open Core vendor itself, such as MongoDB Atlas or as an offer in a Public Cloud Provider portfolio such as AWS DocumentDB.

Nowadays, both of these business models are being challenged by the significant growth of Public Cloud Service Providers over the last decade. The Public Cloud has played a centralizing role in the way networks and IT operate by placing all needed IT services with a single provider. It simplifies workflows and billing, and decreases the need for service maintenance and support. As a result, most IT teams would choose a less performant managed service provided by one of the major Public Cloud players (AWS, Azure, GCP, …) rather than go through the trouble of setting up a new contract with a smaller SaaS player. This resulted in significant financial losses for some open-source companies, such as Elastic, which initiated a trademark lawsuit against AWS after having intertwined more and more proprietary components in their new releases.

Other open source firms decided to compete by introducing a new type of public license intended to protect their Intellectual Property from competing SaaS offerings. This tactic, initiated by MongoDB through the Server Side Public License ([Mong18]), seeks to force the release and open sourcing of a complete SaaS implementation, including infrastructure subcomponents, as a condition for selling the service, excluding MongoDB itself as the owner of the Intellectual Property (IP). Because of this license, the new AWS service DocumentDB is based on the open source code that was released under the previous license two years ago; on a darker note, such a decision may compromise the interoperability of MongoDB with any other SaaS service in the future.

To summarize this last point:

  1. Open sourcing some internal IT products is good for a company’s image, may improve internal IT processes, and can help attract new talent.
  2. Open sourcing core business components represents a significant risk of favoring market competitors and hosted offerings, no matter the license.

Due to the difficulties mentioned above, we consider contributing to open source to be of substantial strategic value for modern companies, but high-risk if relied on for its financial value.

What security safeguards you should put in place

Because of the aforementioned risks, many security and compliance teams still take a pessimistic view of the use of open source. The reality is that, similarly to the adoption of cloud computing, the use of open-source software is so straightforward and convenient that it is probably already widely used across the organization as Shadow IT. Also, as with any business enabler, prohibiting the use of open source components in the organization is equivalent to ignoring the risks related to open source, which results in increased risk exposure for the business. Even if your organization doesn’t have a strong open-source culture, regulating is always a better approach than ignoring. And while you’re working on a regulation for the use of open source, it is good practice to think ahead about potential future extensions covering the release of open source components. Notably, as we’ve previously discussed, the use of open source components under some licenses requires open sourcing the resulting software.

As for any new internal capability, the use of open source within a company should follow a clear and dedicated open source policy explaining to IT staff members how to use open source and referring to internal processes and procedures. In the absence of standards and central governance, open-source software tends to become a poorly controlled grey area from a security point of view. As a result, we recommend keeping a zero-trust model in mind when writing such a document, in the sense that any free code may include major security vulnerabilities or be insufficiently tested. It is then your company’s responsibility to fork and control the repository instead of copy-pasting the original into production, to build from source, to test the code, and to perform minimal due diligence on its maintainers.

As the writing of any policy is always a long and tedious task, we have included a list of five common situations that should be covered:

  • When open source code is used as an internal tool.
  • When open source code is used unmodified in a business product (library).
  • When open source code is modified and used in a business product.
  • The process for contributing to an open-source community during working hours.
  • The process to open source internal products.

This article could go further by giving you additional guidelines, yet the writing of any policy depends on the culture and business of an organization. It is better to get the help of an external third party specialized in IT governance to write a policy tailored to your needs. As you can imagine, there is hardly a better policy for an organization to make public than its open source policy. Consequently, you can find some online reference documents from Google, the Linux Foundation, or GitLab in the following GitHub repository ([Todo19]), maintained by the TODO (Talk Openly, Develop Openly) group.

As a policy without implementation doesn’t go very far, here is a list of probably the most critical security controls to start working on:

  • Centrally monitor which open source projects/libraries are used, as well as their patch level and licensing model.
  • Identify and list what is permitted per licensing model for each project.
  • A communication channel, for instance a Slack webhook, should be put in place to signal updates from each reference project repository. Furthermore, the responsibility for upgrading and patching open source components should be clearly defined. We recommend the project team, but it could also be a central DevOps team in some organizations.
  • If an open source project starts to be modified to better fit your internal needs, you should fork the original repository and regularly analyze, merge, and troubleshoot commits from the master. As updates from open source repositories are not necessarily adequately tested and assessed, it is your company’s duty to raise issues or propose alternative commits in case of doubt.
  • As open source repositories may be compromised, always build from source following an automated CI/CD process. This automated build process should include both a static as well as a dynamic vulnerability scanning engine.
  • Monitor the activity, for instance, the number of commits per month, of the open source repositories your organization relies on. If you find that an open source project you use is slowly abandoned, start studying alternatives and prospects within the project community to help your research and decision. Also, try to negotiate with the project owner to take ownership of the repository in case no alternative fits your use case. In all circumstances, never continue using a no longer maintained repository for an extended period!
  • Lastly, before reaching the previous situation, establish privileged relations with the core developers of the open source software you rely on most. Set up a small open source give-back budget, and regularly donate to the external contributors whose work your company depends on.
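Several of the controls above (central monitoring of licenses and repository activity) can be combined into a single periodic review job. The sketch below uses a hypothetical in-memory inventory; in practice, the component list would be generated from your dependency manifests (requirements.txt, package.json, …) and repository metadata:

```python
from datetime import date

# Hypothetical inventory; component names, licenses and dates are examples only.
COMPONENTS = [
    {"name": "libfoo", "license": "MIT", "last_commit": date(2020, 2, 10)},
    {"name": "libbar", "license": "GPLv3", "last_commit": date(2018, 6, 1)},
]

def review(components, today, stale_after_days=365):
    """Flag copyleft licenses and projects with no recent activity."""
    findings = []
    for c in components:
        if c["license"].startswith("GPL"):
            findings.append((c["name"], "copyleft license: check disclosure obligations"))
        if (today - c["last_commit"]).days > stale_after_days:
            findings.append((c["name"], "no recent commits: evaluate alternatives"))
    return findings

findings = review(COMPONENTS, today=date(2020, 3, 1))
assert [name for name, _ in findings] == ["libbar", "libbar"]
```

Run as a scheduled job, such a review feeds the communication channel mentioned above and turns the policy into an operational control rather than a paper exercise.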

Figure 1 is an example of how some of those controls can be mapped to address the previously identified risks.

Figure 1. Use of FOSS, Risks-Controls mapping.

Conclusion

From a marginal phenomenon in the pre-Internet era, open source has become the norm for boosting and promoting the world’s most innovative software projects. As enterprise IT becomes more code-oriented, the processes to create, use, store, and share code internally and externally have become an important area of control. In this article, we specifically focused on the reasons that can justify the use of and contribution to open-source projects within your organization, followed by the presentation of some controls to help limit the previously identified risks related to the use of open-source software.

Because open source projects do not generate profit through a traditional license-selling business model, they tend to be intrinsically non-profit, which entails additional risks. Those risks can be technical (vulnerabilities in the source code, lack of vendor support), organizational (weak governance of an open-source project), or legal (possible license violations). Because of these intrinsic risks, the adoption of open source in the enterprise should follow a zero-trust model, in which external components are not implicitly trusted but are regularly analyzed and assessed at each new update.

At the end of the day, a well-run open-source compliance program is key to supporting your company’s capability to innovate, developing its strategic market value, and achieving a successful digital transformation.

References

[Clab18] Claburn, T. (2018, November 26). Check your repos… Crypto-coin-stealing code sneaks into fairly popular NPM lib (2m downloads per week). The Register. Retrieved from: https://www.theregister.co.uk/2018/11/26/npm_repo_bitcoin_stealer/

[FSF17] FSF Inc. (2017). Overview of the GNU System. Retrieved from: https://www.gnu.org/gnu/gnu-history.en.html

[FSF19] FSF Inc. (2019). What is free software? Retrieved from: https://www.gnu.org/philosophy/free-sw.en.html

[Ging19] Gingerich, D. (2019, October 2). When companies use the GPL against each other, our community loses. Retrieved from: https://sfconservancy.org/blog/2019/oct/02/cambium-ubiquiti-gpl-violations/

[Hoff18] Hoffa, F. (2018). Who contributed the most to open source in 2017 and 2018? Let’s analyse GitHub’s data and find out. Retrieved from: https://medium.com/@hoffa/the-top-contributors-to-github-2017-be98ab854e87

[Mong18] MongoDB, Inc. (2018). Server Side Public License. Version 1, October 16, 2018. Retrieved from: https://www.mongodb.com/licensing/server-side-public-license

[Todo19] Todogroup (2019). Open Source Policy Examples and Templates. Retrieved from: https://github.com/todogroup/policies

Do you rely on security, or do you secure your trust?

Companies, governments, employers and citizens all benefit from being able to rely on IT systems for administration, financial transactions, communication and entertainment. Increasingly, however, it is becoming clear that IT systems are not as secure as we think. Cybercriminals can watch along and capture data. Not every system is equally secure, equally trustworthy. Why, then, do we place so much trust in IT systems? And how can you know what is and is not safe? Being aware of the risks is not sufficient in itself. This article explores the nature of trust in IT systems and ways to handle that trust in a healthy and cyber-secure manner.

Introduction

Many stakeholders have an interest in keeping the internet and the digital environment accessible and low-threshold. Companies benefit economically when people’s trust in technology and its security is high. Citizens benefit when they can quickly and easily shop online, bank online or arrange other matters. However, as society, (critical) infrastructures and businesses become more dependent on IT systems, the risks also increase. Consider, for example, digital crime or corporate espionage: ransomware, DDoS attacks or social engineering.

The benefits and costs of trust are not evenly distributed. Social networks, web shops and software developers typically benefit from a high level of trust, as it keeps the threshold for sharing, buying and using low. Although users of digital platforms or software (both individuals and companies) also reap benefits, such as the convenience of online banking, there are many adverse consequences as well. Consider, for example, identity fraud based on data harvested from social media or hacked websites, which subsequently gives rise to illegal practices. Moreover, due to interconnectivity, it is increasingly third parties (both companies and individuals) who experience problems as a result of risks elsewhere. It is, for example, not only the victims themselves, but also banks, landlords and others who bear the costs of overly open use of social media.

There is no single, and certainly no simple, solution to this. In any case, a bit of healthy distrust of IT, or a smarter division of tasks between humans and IT to secure systems, would be appropriate.

However, it is not desirable for people to become distrustful of IT. It would be better if people did not trust it blindly, were able to assess the risks effectively, and could then act sensibly. Unfortunately, people cannot consciously think through every decision, weigh all options, assess risks, and so on. Partly because we simply do not have the time to thoroughly consider everything we encounter daily, and partly because most people lack sufficient IT knowledge to judge a risk, its consequences and possible mitigation strategies. The need is therefore twofold:

  1. people need to be more aware of the risks and should not simply trust everything (read: have a healthy distrust of IT);
  2. because awareness alone will not deliver the desired result, support must be available to help people assess risks, consequences and alternatives, and to make choices accordingly.

All of this must happen in a way that lets people keep doing what they want: e-mailing, working online, posting photos, surfing the web, streaming, and so on.

In this article, we first briefly discuss the division of tasks between humans and machines. Central to this is an analysis of the trust people place in IT systems. We then discuss how unsafe cyber behaviour comes about, and why blind trust often wins out over cybersecurity. Finally, we discuss how a healthy distrust of IT can help improve the assessment of risks and alternatives.

Division of tasks

However well IT systems are secured technically, incidents cannot be ruled out. Firstly, because criminals, hackers and the like will keep looking for gaps in security; secondly because, precisely due to the increased technical security of IT, cybercriminals appear to be shifting their attention to the vulnerability of the users of the systems they want to compromise ([SECU12]). From time to time, software users make themselves vulnerable by visiting unsafe sites, looking for loopholes to reach their goals more easily, failing to update their software, or letting themselves be persuaded, directly or indirectly, to share sensitive personal information.

IT systems have no will of their own, and under identical circumstances they perform their tasks in exactly the same way every time. People are more erratic: they are easily distracted, sometimes tired, and make both conscious and unconscious decisions. As a result, according to some sources, human action is (part of) the cause in 95% of security incidents ([IBM15]).

However, people also have their strengths. They are capable of creative and critical thinking, and can therefore interpret complex information flexibly and dynamically ([Kame13]). Consider, for example, judging the safety of an e-mail attachment. An IT system can look for characteristics that indicate the reliability and safety of an attachment. On that basis, the system can estimate the risk you run when opening it, for example when a virus appears to be hiding in the attachment. A human, however, is better able to determine when an apparent risk is actually harmless, for example because you were expecting the attachment. An IT system is not yet capable of this.

The differences between people and IT systems have consequences for how you deploy each of them when assessing the trustworthiness of an IT system, and subsequently reducing risks in the cyber chain. IT systems can help and support people, but cannot (yet) interpret complex information, which is necessary to form an accurate trustworthiness judgement, as in the attachment example in the previous paragraph. For this reason, people remain indispensable in this process. Moreover, it is, among other things, human creativity that can contribute in a unique way to parts of the security process. Think of choosing creative, personal and complex passwords that you can still remember: that is hard for a machine (particularly making a complex password memorable), but with a little creativity it is relatively easy for a human.
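A small illustration of that division of labour: a machine can pick random words, but it is the human capacity for imagery that makes the result memorable. The sketch below uses a tiny made-up word list; a real list (such as Diceware) contains thousands of words, which is what gives the passphrase its strength.

```python
import secrets

# Illustrative mini word list; a real list (e.g. Diceware) has thousands of entries.
WORDS = ["koala", "anchor", "velvet", "comet", "harbor", "pixel", "tulip", "glacier"]

def passphrase(n_words=4, sep="-"):
    # Each extra word multiplies the attacker's search space,
    # while the words themselves remain easy to memorise as a mental image.
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "comet-velvet-koala-pixel"
```

The `secrets` module is used rather than `random` because it draws from a cryptographically secure source.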

Trust

Trust is not a one-dimensional concept, but has several facets. [Paul17] identifies three concepts that influence trust in IT, shown in Figure 1. First: trusting beliefs, where the user believes that the other party, in this case an IT system, has positive characteristics such as benevolence, integrity and competence. Second: trusting intentions, the willingness to make yourself dependent on, or vulnerable to, the IT system. Third: trusting behaviour, i.e. actions that match the trusting intentions and demonstrate a dependent relationship with the other party. These three concepts apply both to interaction between people and to interaction between human and machine.

C-2018-3-Young-01-klein

Figure 1. Relationship between three kinds of ‘trust’, according to [Paul17]. [Click on the image for a larger image]

When people repeatedly experience that using IT systems leads to the desired outcomes and has no negative consequences, a strong coupling can form between the IT and the trust placed in it. Habitual trust then arises ([Pien16]): automatic and unconscious trust in IT as a kind of habit.

An example of this is online banking. You believe that your bank has your best interests at heart and also wants to keep its own systems and data secure. The systems the bank has developed for online banking are therefore secure, and it will never misuse your data or disclose it to others (trusting belief). You are then willing to act on this trust and bank online (trusting intention). You download the app and take the leap: an online account, all your accounts visible, logging in with a fingerprint (trusting behaviour).

Trust is further reinforced by so-called institutional trust ([Paul17]). This concerns trust in the structures and situational factors in our environment that promote trust. In the context of IT, think of technological protection, contracts and legislation that exist to protect the user against cyber threats and cybercrime: people have more trust in IT because these protective measures exist. Consumers are protected when shopping online, for example, thanks to legislation obliging a web shop to take back goods and refund the money, including shipping costs ([Quis18]).

Human or machine

There is an important difference between trust in machines and trust in people. Trust in people often has an emotional component, as when someone relies on their best friends. Trust in IT, however, has more the character of a transaction, centred on expectations of utility being delivered ([HSBC17]): if I save a document in a word processor (my action), I trust that the system will actually keep my changes (the IT system’s action). What people may insufficiently realise is that when they trust IT systems, there may also be people and organisations behind that IT system with agendas of their own: benign and malicious. This means that trusting beliefs alone are not sufficient to arrive at accurate trusting intentions and behaviour. In other words: people judge IT systems mainly on the reliability of their utility, whereas interpersonal, emotional trustworthiness should also play a role; when I save my document, I do not consider that the system may have been hacked by someone who wants to play a prank.

Consequences

Whether or not we trust IT systems partly determines how we interact with those systems, in other words our cyber behaviour. Cyber behaviour can be safe (for example, only using trusted Wi-Fi networks) or unsafe (for example, choosing weak passwords). What further complicates matters around trust is that many negative consequences of unsafe cyber behaviour are abstract and ambiguous; the effects of a virus may be unclear, and people may not even be aware of its presence. Likewise, it is unclear to many people what the consequences of personal data theft may be, or of the use of this data by social media. Our cyber behaviour can be monitored or altered by companies and governments, but malicious actors can do the same. With other risks, the relationship between the behaviour and its consequences is often not even visible, such as when a website ‘backdoor’ is used to obtain login credentials. As a result, people notice or experience no negative consequences of their trusting behaviour, and become even less alert to security risks.

Moreover, people often lack the knowledge to recognise malicious websites and software and to arm themselves against them. This further reduces the motivation to invest in a critical evaluation of one’s own cyber behaviour. Even when people do know what the consequences of their behaviour may be, abstract actions with long-term consequences, such as creating and maintaining a secure IT environment, usually lose out to concrete actions with a positive short-term reward, such as sending an e-mail over an unsecured Wi-Fi network. In other words: short-term goals are often the main reason people go online or use a program in the first place, and these therefore take priority over long-term goals, such as acting cyber-securely.

Breaking habits

In the relatively young life of IT, much emphasis has been placed on the positive sides of technology, automation and the internet. Their use has been encouraged from the start, by making people aware of the benefits and letting them experience those benefits. Negative effects and risks seem to have received less attention. IT and the internet, plus their association with pursuing goals (for example online shopping, maintaining social contacts, working online, communicating, visiting websites), stand in the way of safety intentions that focus on preventing problems. As a result, our cyber behaviour has become a habit, strongly based on trust, in which cybersecurity plays only a minor role. This habit must first be broken, to make way for a new habit in which security is a larger component.

In short

Taken together, positive experiences with IT create trust in its applications, without a healthy distrust of the people and organisations behind the IT systems. This is reinforced by the fact that negative consequences often play out in the long term, are hard to recognise, have no clear link to specific behaviour, are abstract in nature, and that it is unclear to the layperson how to defend against them. This makes it unattractive for users of an IT system to actively commit to creating and maintaining a secure environment. To motivate people to deal more critically with their cyber environment, and to make cyber behaviour safer, positive experiences and a certain degree of conditional trust in IT systems will have to be supplemented with a healthy distrust. That way, people can better recognise cyber risks and, more importantly, act on them.

Goals

As noted, abstract interests or long-term goals (cybersecurity) usually lose out to concrete short-term rewards (quickly posting that one photo on Facebook). People have a concrete goal in mind, which they prioritise over other interests, and they will go to great lengths to achieve it. Whether it is logging in quickly, sending a photo of your holiday destination, submitting a report on time or charging your phone: if you are hindered in achieving your goal, you will look for workarounds to meet your need anyway. You then choose an easy-to-type password, use the local public Wi-Fi network, send the report via your private e-mail, or charge your phone on your friend’s laptop. Whether the solution you choose is safe, or whether the IT can be trusted, matters less in the moment when no obvious alternative is available.

If you want to make cyber behaviour safer and increase trust in IT, you will have to employ mechanisms that promote cyber-safe behaviour and support the pursuit of a goal, rather than standing in its way. In that case you use multi-factor authentication (for example a password plus a code sent via text message to log in), you send the photo via the 4G network, you encrypt the report before sending it, or you use a ‘data blocker’ to charge your phone (a kind of USB gate that only passes power, not data).
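Many of the multi-factor codes mentioned above are generated with the TOTP scheme standardised in RFC 6238. A minimal sketch using only the Python standard library, checked against the test vector published in the RFC:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) with HMAC-SHA1."""
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian time counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

Because server and phone share only the secret and the clock, a code intercepted once is useless thirty seconds later, which is exactly what makes this mechanism both safe and low-effort for the user.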

A healthy distrust

Ultimately, it is desirable that people deal sensibly with IT and the associated risks, i.e. that they do not act unwisely or unsafely. Figure 2 presents four barriers to cyber-safe behaviour. These are not independent of each other, but follow from and reinforce one another.

C-2018-3-Young-02-klein

Figure 2. Barriers to cyber-safe behaviour. [Click on the image for a larger image]

Another example is downloading new software. Your computer signals that the software may cause damage. Although many people know this is a risk, they often lack the knowledge to judge what this actually means, whether it applies in this case, what the alternatives are, and which alternative is best at that moment. To find out, they would have to think about how to find the required information, delve into the relevant risks, consequences and alternatives, form an opinion and finally make a well-considered choice. This frustrates the achievement of their goal, namely downloading the software, and makes an unwelcome claim on scarce mental resources. When the software is really needed or wanted, downloading takes priority over cybersecurity and this analysis process is skipped; the risk is taken for granted and the software is simply downloaded.

IT systems often merely inform the end user that risks may be associated with opening a file or visiting a website, not how the user can tell whether this particular file or website is problematic. Lay end users lack the knowledge and skills to form an a priori opinion on how to weigh the risks. How can I know whether there is a threat right now? How can I judge whether downloads from this site can be trusted or not? How can I know whether this USB stick is dangerous?

It is not that people are unaware of the existence of risks; many of us know that all kinds of digital actions carry risks, sometimes even which specific risks ([Bark17]). Awareness alone is simply not enough to bring about behavioural change ([Aite12], [McGr12], [Schn13]). What we lack, however, are ways to judge the trustworthiness of something. A mere warning that a download may pose a danger is not informative: people already know that. The question is: how can someone know whether this particular download is safe or not?

How, then?

Ultimately, you want to break this process, create a healthy cyber distrust and enable people to properly assess the risks and alternatives. People need help where the technical signalling of risks ends, so that they can form their own opinion about the trustworthiness of IT, whether software, a file, a device, a network or a website.

Making people more critical, and thereby tackling blind trust, is only possible if people have the means to judge trustworthiness. Figure 3 gives some examples of such aids.

C-2018-3-Young-03-klein

Figure 3. Possible aids for judging the trustworthiness of an IT system. [Click on the image for a larger image]

Such aids are not necessarily aimed at changing blind trust in IT. They do, however, help to increase safe cyber behaviour and to counter the dangers of blind trust in IT. By building up experience with what is and is not safe, and with how to deal with risks, misplaced trust in IT is tempered.

‘Car washes’ for URLs and files already exist, both paid (for example Microsoft ATP Safe Links or Norton Safe Web) and free (for example URL Void or Scan URL). Such programs support the assessment of whether a website is bona fide or not. Comparable solutions exist for attachments, such as Virustotal.com. You can also check customer reviews for authenticity, thanks to sites such as Review Skeptic (see also [Böhm17]).
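The kind of signal such services look for can be illustrated in miniature. The checks below are simplified examples of well-known warning signs (the example hostnames are made up) and are in no way a substitute for a real scanning service:

```python
from urllib.parse import urlparse

def suspicion_signals(url):
    """Return a list of simple warning signs found in a URL (illustrative only)."""
    signals = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        signals.append("no TLS (plain http)")
    if host.startswith("xn--") or ".xn--" in host:
        signals.append("punycode hostname (possible homograph attack)")
    if parsed.username is not None:
        signals.append("credentials embedded in URL")
    if host.count(".") >= 4:
        signals.append("unusually deep subdomain nesting")
    return signals

print(suspicion_signals("https://www.example.com/"))               # []
print(suspicion_signals("http://login@xn--bank-xyz.example.com"))  # three warnings
```

Real services combine hundreds of such signals with reputation databases and sandboxed analysis; the point here is only that trustworthiness can be made partly inspectable for the user.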

The big question is how to ensure that people actually take the time to find and review such aids and apply their advice. Sustainable change will have to come through breaking down old, unsafe habits and building up new, safe ones. This takes time, but it is supported when using the tips and tools is made as easy and accessible as possible. So do not just warn that an attachment may be dangerous, but also provide a list of characteristics of dangerous attachments (or better still, a link to the digital car wash).

Conclusion

People want to be able to trust IT. Given the technological saturation of our society, this is a good thing, but it should never be blind trust. Getting people to think critically about the extent to which they can, or must, trust IT systems starts with making them aware of the risks, consequences and alternatives. Getting users to then also act safely and with healthy distrust requires more than awareness alone.

Standing in the way is that:

  1. many people do not possess the right knowledge to assess the trustworthiness of an IT system;
  2. people have to think harder during use than they want to;
  3. such mental effort frustrates people in achieving their goal.

An important part of the solution is that, in addition to raising awareness, aids must be available to make trade-offs and choices more transparent, easier and thus hopefully better. These aids must then become known to the public. They must also be easy and quick to use, and trustworthy in themselves (for example because they come from a reliable source, such as a government agency).

Trust in IT is necessary, but can get in the way of cybersecurity. To strike the right balance, trust must go hand in hand with a critical attitude and the means to make a well-considered assessment of the risk and the alternatives. Awareness alone is not sufficient. Fortunately, there are many ways to help people with this, so that they do not have to choose between trust and security.

Disclaimer

The websites and tools mentioned in this article are intended as examples and ‘used (good) practices’. They in no way constitute organisational advice for purchase or use.

Icons

The icons in Figure 1 were made by Smartline from www.flaticon.com. The icons in Figure 3 were made by Freepik and Becris from www.flaticon.com.

References

[Aite12] D. Aitel, Why you shouldn’t train employees for security awareness, CSO Online, http://www.csoonline.com/article/2131941/security-awareness/why-you-shouldn-t-train-employees-for-security-awareness.html, 2012.

[Bark17] Jessica Barker, The Human Nature of Cyber Security, Cyber.uk, http://cyber.uk/humancyber/, 2017.

[Böhm17] Iris Böhm, Online reviews: wat zijn de regels en hoe herken je neprecensies?, Radar, https://radar.avrotros.nl/columns/detail/online-reviews-wat-zijn-de-regels-en-hoe-herken-je-neprecensies/, 2017.

[eKom18] eKomi, http://www.ekomi.nl/nl/, 2018.

[HSBC17] HSBC, Trust in technology, HSCB.com, https://www.hsbc.com/trust-in-technology-report, 2017.

[IBM15] IBM, IBM Cyber Intelligence Index; analysis of cyber-attack and incident data from IBM’s worldwide security services operations, 2015.

[Kame13] Anya Kamenetz, The Four Things People Can Still Do Better Than Computers, FastCompany, https://www.fastcompany.com/3014448/the-four-things-people-can-still-do-better-than-computers, 2013.

[McGr12] G. McGraw and S. Migues, Data supports need for security awareness training despite naysayers, Search Security, http://searchsecurity.techtarget.com/news/2240162630/Data-supports-need-for-awareness-training-despite-naysayers, 2012.

[Paul17] S. Paul and C. Roi, The Role of Trust in an Information Technology Milieu: An Overview, Canadian Journal of Applied Science and Technology, 2017/5.

[Pien16] D. Pienta, H. Sun and J.B. Thatcher, Habitual and Misplaced Trust: The Role of the Dark Side of Trust Between Individual Users and Cybersecurity Systems, 37th International Conference on Information Systems, Dublin, 2016.

[Quis18] B. Quist, De regels van online winkelen, Consumentenbond, https://www.consumentenbond.nl/online-kopen/regels-online-shoppen, 2018.

[QUTT18] Quttera Labs, Data Feed, Quttera.com, https://quttera.com/lists/malicious, 2018.

[ReSk13] ReviewSkeptic, http://reviewskeptic.com/, 2013.

[Schn13] B. Schneier, Security Awareness Training, Schneier on Security, https://www.schneier.com/blog/archives/2013/03/security_awaren_1.html, 2013.

[SECU12] Security.nl, Chrome dwingt cyberboef tot social engineering (interview), Security.nl, https://www.security.nl/posting/37063/Chrome+dwingt+cyberboef+tot+social+engineering+(interview), 2012.

[TRUS18] TrustPilot, https://nl.trustpilot.com/, 2018.

[Veen14] N. Veenstra, Hoe controleer je de betrouwbaarheid van een website?, Letterzaken, https://optimusonline.nl/betrouwbaarheid-website-controleren/, 2018.

Autonomous driving cars are key for mobility in 2030

Due to mind-blowing technological developments in the automotive industry and changes in customer behaviour, mobility in 2030 will be dramatically different. The autonomous driving car enables highly optimised transportation and asset utilisation. Autonomous driving depends on decision making without human intervention, made possible by algorithms and Artificial Intelligence (AI). These algorithms and AI enable direct and automated access to mobility platforms for logistic planning and real-time interaction with objects and vehicles. As a result, new business models will arise and fulfil new and sometimes untapped customer needs. How we move is more and more facilitated by algorithms, based on our previous or expected behaviour, external triggers and/or events. The actual working of these algorithms should not be a black box, and assurance on algorithms will therefore be an item of interest in the coming years.

Introduction

The automotive industry is at the forefront of a radical digital transformation. The introduction of digital technologies impacts the industry in many different ways. Three forces will fundamentally transform how we move people and goods in the future, as can be seen in Figure 1 below.

  • Firstly, electrification: basically all conventional power sources will be replaced by electric powertrain technologies that will enable sustainable transport with zero emissions. Battery capacity will rapidly become more affordable and various alternatives to the conventional lithium-ion battery are being introduced.
  • Secondly, the rise of ‘mobility-as-a-solution’ services, which includes the mind shift from ‘ownership’ to ‘usage’ of the car. Platform-based business models are tackling underutilisation amongst car owners, connecting the various forms of transport and offering a one-stop-shopping concept.
  • Thirdly, and most importantly, the technological breakthrough of autonomous driving cars. Many new car models are currently being introduced with these new driving features: initially incorporated into Advanced Driver Assistance Systems that provide ‘autopilot’ functionalities, but eventually with an increasing level of autonomy.

C-2017-4-Groen-01-klein

Figure 1. Overview of mobility ecosystem changes ([KPMGUK17]).

The future mobility ecosystem is very different

The mobility ecosystem will look very different in the future, but why? The current ‘one-car-one-user’ model is very inefficient and costly, as a car is not used most of the time. The total fleet of passenger cars in the Netherlands consists of approximately 8.3 million vehicles ([BOVA17]). Most of them (approximately 95%, according to [Morr16]) sit idle each day, parked. People spend a lot of their time in traffic jams ([ANWB17]), and increasing urbanisation will further increase the time spent in traffic. In addition, the current TCO (Total Cost of Ownership) of a car is an important component of the total household budget. In summary, the current one-car-one-user model results in underutilised assets and therefore makes the car an expensive form of transportation.

The autonomous vehicle (AV) might be the main reason that the ownership model will shift from one-car-one-user to one-car-multiple-users. These new user communities do not care who owns the car, but see transportation from A to B as a service. The current car ownership model will therefore evolve into a ‘mobility-as-a-service’ model. Mobility subscriptions to car-sharing schemes provide consumers with more convenience, clear costs, fixed rates and kilometre packages. In addition, they solve the search for a parking place in front of your house in a fully congested city! As a driving licence might no longer be needed for these new vehicles, the number of potential users is almost unlimited, and new user groups are being tapped: people who were previously unable to use a car for transportation now have the opportunity to travel in autonomous cars as passengers, as they do not need to be able to drive the car themselves. The growth of travelled kilometres per person will therefore primarily take place in the 0-30 years old segment and the 50 and older segment ([KPMGUK17]); the AV will provide more independence for these segments. However, as these new solutions require high volumes of users and a large infrastructure (assets, technology, connectivity and data analytics), a consolidation of mobility service providers is likely. Service aggregators are a single point of contact for consumers, integrating the components required for autonomous driving. In summary, the autonomous car will enable:

  • lower costs per kilometre (no driver costs, new energy sources, benefits from economies of scale and longer vehicle lifetime);
  • more efficiently used vehicles (up to five times the number of kilometres compared to human operated cars);
  • tapping new markets with potential users who previously did not own a car, or were not able to use a car for transportation;
  • major ecosystems to consolidate (therefore reduce in number) and the type of ecosystem player will change as new industries are entering the market.
Autonomous driving will reinvent the value chain; new platform-based business models will be introduced

Besides being a means of transport, the autonomous car is increasingly a platform that enables additional digital revenues, which will become a very important source of income for the automotive industry. In the Global Automotive Executive Survey of 2017 ([KPMG17]), 85 percent of the nearly one thousand car executives surveyed across 42 countries were convinced that their company will realize higher turnover (up to ten times) through a digital ecosystem. In this ecosystem, payments might become a very important new activity enabling automated interactions between cars and objects, for example to pay for charging the battery at a public charging point. To be successful, companies in the mobility industry will have to change. Platform organisations, driven mainly by data and algorithms, are the ones that will actually change the automotive industry. As can be seen in Figure 2 below, the rules of the game for platform business models are significantly different.


Figure 2. How to be successful with platform business models ([KPMGNL17-1], [Alst16], [Chou15], [Evan16], [Tiwa13]).

The autonomous driving car is coming, developments are accelerating

Although we are still talking in the future tense, the technology is already available. Start-ups, for instance, increasingly compete to mass-produce next-generation radar and LiDAR systems (3D road images) to enable autonomous driving. Original Equipment Manufacturers (OEMs) are introducing these new technologies in their products, as can be seen in Figure 3 below. Many services have been launched or are gradually becoming available. The 'flagship events' we have listed indicate a continuous and accelerating introduction of developments in autonomous driving. The autonomous car is increasingly maturing, and the industry is very active. Recent examples:

  • Ford acquired dynamic-shuttle firm Chariot (a mobility integrator);
  • General Motors acquired Sidecar (sharing a ride as a new transportation service);
  • Volkswagen invested in cab-hailing start-up Gett (on demand mobility as counterpart to taxi app Uber);
  • Europcar acquired car-sharing start-up Bluemove (car sharing and hourly rental service);
  • GM bought Cruise Automation (autonomous driving tech firm);
  • Ford invested in Pivotal, a data analytics company, to gain valuable customer insights;
  • Waymo, which began as the Google self-driving car project in 2009, is ready for the next phase: its driverless transport services will soon be introduced to the public;
  • Audi introduced their first production car (the new A8) with level-3 autonomous driving functionality in September 2017.


Figure 3. KPMG UK Mobility 2030 Scenario Analysis – Stretch Case ([KPMGUK17]).

Fully accelerating, or do we anticipate short circuits and power outages?

As depicted above, technology developments are accelerating, and car manufacturers design new computers and algorithms at an increasing pace. But are we pushing this technology, or do we truly understand what customers value? Do we have the right skill set to integrate and aggregate the other ecosystem elements? And if all the technology is readily available, what is limiting the speed of development?

  • change of regulations (approval from European and national level to drive autonomously);
  • new insurance models (shift of accountability from driver to car);
  • privacy regulations (possibly limit the transfer of data and usage of data by platforms to offer personalised services);
  • service integrators and providers that take full responsibility for the whole value chain;
  • the roll out of 5G mobile connection technology (specifically the required short latency times) that enables communication between vehicles;
  • smart infrastructure with digital traffic beacons and road signs (to avoid misinterpretations of the algorithms);
  • local area networks standard (cross brand communication);
  • customer adoption.

The autonomous car must be a ‘good robot’

In autonomous cars, the importance of algorithms, and of ways to check these 'digital brains', should not be underestimated. As so often, the possibilities seem endless, but do we still understand the choices that the technology makes?

We have seen an increasing number of applications where humans depend on algorithms. The autonomous car has to make ethical decisions, based on algorithms that have been developed by IT specialists. How do we know the car makes the right decision?

As we already explained in our report The connected car is here to stay [KPMGNL17-2], these topics will be increasingly important for safeguarding consumer trust. An example is the working of a navigation system. To guide the driver from A to B, it is essential that the system meets the following conditions:

  • the quality of the data needs to be good;
  • the algorithm, basically the instructions given, must be correct;
  • the route advice should serve the interests of the driver; it should be independent advice, and it should not be the case that the algorithm prefers a route along a particular brand of fuel stations or shopping centres.
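The independence condition in the last bullet can be made concrete with a small sketch. The route records and the sponsor bonus below are hypothetical assumptions, not an actual navigation API; the point is that a biased objective function is detectable by comparing its choice against the purely driver-optimal one:

```python
# Hypothetical candidate routes between A and B.
routes = [
    {"name": "A", "minutes": 22, "passes_sponsored_stop": False},
    {"name": "B", "minutes": 25, "passes_sponsored_stop": True},
]

def independent_choice(routes):
    """Serve only the driver's interest: pick the fastest route."""
    return min(routes, key=lambda r: r["minutes"])

def biased_choice(routes, sponsor_bonus_min=5):
    """A non-independent algorithm that secretly discounts sponsored routes."""
    def score(r):
        bonus = sponsor_bonus_min if r["passes_sponsored_stop"] else 0
        return r["minutes"] - bonus
    return min(routes, key=score)

# An auditor can expose the bias by comparing the two outcomes:
print(independent_choice(routes)["name"])  # fastest route
print(biased_choice(routes)["name"])       # the slower, sponsored detour
```

Whenever the two functions disagree, the advice does not serve the driver's interest alone.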

Conclusion

Autonomous driving is reinventing the value chain, and new platform-based business models will increasingly be introduced. Besides being a means of transport, the autonomous car is increasingly a platform that enables additional digital revenues, which will become a very important source of income for the automotive industry. However, the importance of the algorithms, and of ways to check these 'digital brains', should not be underestimated.

Although accountants primarily evaluate financial statements and annual reports, the accountancy sector can certainly play a key role in this game. Assurance by accountants has been a proven method in business to provide confidence and trust to users. Accountants have the capabilities to provide ‘assurance of systems’ about the accuracy of the processes and algorithms. This requires a considerable investment in a rapidly changing market. However, public trust in this modern digital society should be handled with care, as this trust could very well be the bottleneck for a quick adoption of the autonomous driving vehicle. Might this public trust be key for a fast, digital transformation to the new mobility ecosystem?

The Netherlands is ready for the autonomous vehicle

The Dutch ecosystem for the autonomous vehicle is ready. The intensively used Dutch roads are very well developed and maintained. Many different large road construction projects have been finished during recent years ([RIJK17-1]). Other indicators, like the telecom infrastructure, are also very strong, as can be seen in the yearly OpenSignal top 10 for 4G coverage footprint [OPSI17], in which the Netherlands holds a high position.

In addition, the Dutch Ministry of Infrastructure has opened the public roads to large-scale tests with self-driving passenger cars and lorries. Since 2015, the Dutch rules and regulations have been amended to allow large-scale road tests ([RTL17], [TNO17]).

In collaboration with the federal road authority (Rijkswaterstaat), and the ministry of Infrastructure, the federal vehicle authority (RDW) has the option of issuing an exemption for self-driving vehicles ([RIJK17-2]). Companies that wish to test self-driving vehicles must first demonstrate that the tests will be conducted in a safe manner. To that end, they need to submit an application for admission.

Public/private partnerships further accelerate the development of automotive expertise and innovation capacity. Strong examples are the Automotive High Tech Campus in the Eindhoven area and the connected TU Eindhoven university ([TUEI17]), which has a dedicated Smart Mobility faculty. Start-ups such as Amber Mobility ([TECR17]) are challenging existing parties and broadening existing beliefs and behaviours.

References

[Alst16] M.W. Alstyne, S.P. Choudary, G.G. Parker, Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You, W.W. Norton & Co., 2016.

[ANWB17] ANWB, Filegroei zet door in 2016, ANWB.nl, https://www.anwb.nl/verkeer/nieuws/nederland/2016/december/filezwaarte-december-2016, 2017.

[BOVA17] BOVAG, Mobiliteit in cijfers, BOVAGrai.info, http://bovagrai.info/auto/2016/bezit/1-2-ontwikkeling-van-het-personenautopark-en-het-personenautobezit/, 2017.

[Chou15] S.P. Choudary, Platform Scale: How an emerging business model helps start-ups build large empires with minimum investment, Platform Thinking Labs, 2015.

[Evan16] R.S. Evans, R. Schmalensee, Matchmakers: The New Economics of Multisided Platforms, Harvard Business Review Press, 2016.

[KPMG17] KPMG, Global Automotive Executive Survey, KPMG, 2017.

[KPMGNL17-1] KPMG NL, Platform Advisory – Talkbook, Amstelveen: KPMG Advisory N.V., 2017.

[KPMGNL17-2] KPMG NL, The Connected Car is here to stay, Amstelveen: KPMG Advisory N.V., 2017.

[KPMGUK17] KPMG UK, Mobility 2030, KPMG UK, 2017.

[Morr16] D.Z. Morris, Today’s cars are parked 95% of the time, Fortune.com, http://fortune.com/2016/03/13/cars-parked-95-percent-of-time/, 2016.

[OPSI17] OpenSignal, State of Mobile Networks: Netherlands (September 2017), Opensignal.com, https://opensignal.com/reports/2017/09/netherlands/state-of-the-mobile-network, 2017.

[RIJK17-1] Rijksoverheid, Infrastructure PPP projects, Government.nl, https://www.government.nl/topics/public-private-partnership-ppp-in-central-government/ppp-infrastructure-projects, 2017.

[RIJK17-2] Rijksoverheid, The Netherlands as a proving ground for mobility, Government.nl, https://www.government.nl/topics/mobility-public-transport-and-road-safety/truck-platooning/the-netherlands-as-a-proving-ground/, 2017.

[RTL17] RTL Nederland, Brabant laat zelfrijdende auto’s testen op de openbare weg, RTL Nieuws, https://www.rtlnieuws.nl/geld-en-werk/brabant-laat-zelfrijdende-autos-testen-op-de-openbare-weg, 2017.

[TECR17] TechCrunch, Amber Mobility to launch self-driving service in the Netherlands by 2018, TechCrunch.com, https://techcrunch.com/2017/04/25/amber-mobility-to-launch-self-driving-service-in-the-netherlands-by-2018/, 2017.

[Tiwa13] A. Tiwana, Platform Ecosystems, Elsevier Science & Technology, 2013.

[TNO17] TNO, Truck platoon matching paves the way for greener freight transport, TNO.nl, https://www.tno.nl/en/focus-areas/urbanisation/mobility-logistics/reliable-mobility/truck-platoon-matching-paves-the-way-for-greener-freight-transport/, 2017.

[TUEI17] TU Eindhoven, Strategic Area Smart Mobility, TU Eindhoven University of Technology, https://www.tue.nl/en/research/strategic-area-smart-mobility/, 2017.

25 May 2018 is approaching: guidance for complying with the new privacy legislation

With the introduction of the new privacy legislation, many organisations are asking which measures they need to put in place to achieve and maintain compliance. This article offers a framework that captures the good practices of seven organisations surveyed across different sectors, and can be used by other organisations as a starting point or frame of reference when setting up a privacy programme.

The measures are summarised in a generic framework based on the People, Process and Technology model. The organisations surveyed name three measures that deserve priority in protecting personal data: 1) securing data adequately, 2) defining a data-privacy policy based on the nature and activities of the organisation (which privacy-sensitive data are processed?), and 3) creating privacy awareness among employees.

Introduction: the arrival of the General Data Protection Regulation

Until 2016, the introduction of the Dutch Personal Data Protection Act (Wet bescherming persoonsgegevens, Wbp) in 2001 was the last major change in Dutch privacy legislation ([OVER01]). The Wbp implemented the European Privacy Directive of 1995 ([EUPA95]). In 2016, after years of deliberation between the member states of the European Union (EU), agreement was reached on the content of the General Data Protection Regulation (GDPR; in Dutch: Algemene Verordening Gegevensbescherming, AVG).

The GDPR already entered into force on 25 May 2016, with a two-year implementation period; organisations therefore have until 25 May 2018 to implement its requirements. On that date the Wbp and the European Privacy Directive will be repealed and the GDPR will be enforced. The GDPR makes the privacy requirements stricter and more extensive. Moreover, it harmonises the privacy requirements across the EU member states, because an EU regulation applies directly in every member state, which prevents differences between them. In addition, the geographical scope of the GDPR is broader than that of the current legislation, since the GDPR applies to all organisations that process personal data of EU residents. The GDPR therefore has extraterritorial effect and also affects organisations outside the EU.

Finally, the GDPR raises the fines considerably: a fine can amount to as much as 20 million euros or 4% of worldwide annual turnover, whichever is higher. These fines can be imposed per violation (for example, for exceeding retention periods or for not maintaining a proper record of processing activities), although their actual level depends on the enforcement capacity of the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) and on the extent to which the organisation has taken adequate privacy measures. Anticipating the GDPR, the AP has already been able to impose fines since 1 January 2016, of up to 820,000 euros or 10% of annual turnover ([OVER16]). With these changes, the EU sends a clear signal that the privacy of its citizens is taken very seriously.
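The maximum-fine rule described above ("whichever is higher") reduces to a few lines of arithmetic. A sketch, not legal advice:

```python
# GDPR maximum fine: the higher of EUR 20 million and 4% of worldwide
# annual turnover, as stated in the paragraph above.
def gdpr_max_fine(worldwide_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * worldwide_turnover_eur)

print(gdpr_max_fine(100_000_000))    # 4% is 4M, so the 20M floor applies
print(gdpr_max_fine(1_000_000_000))  # 4% of 1 billion exceeds the floor
```

For any organisation with a worldwide turnover above 500 million euros, the 4% component dominates the fixed 20-million-euro floor.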

The introduction of this legislation leads many organisations to ask which measures they need to put in place to achieve and maintain compliance with the GDPR. This question prompted research into how privacy-mature organisations are preparing for the stricter privacy legislation. 'Privacy-mature' means that organisations are at least at 'level 3' of privacy maturity. Based on the CMM model, the levels are:

  • level 1: ad hoc/informal;
  • level 2: managed processes;
  • level 3: established (organisation-wide) processes;
  • level 4: predictable processes;
  • level 5: optimised processes.

We observe that organisations that have already implemented the current privacy legislation well and completely, and/or started preparing for and implementing the GDPR in good time, are at a higher maturity level. It is important to realise that implementing the requirements once is not enough; remaining compliant with the GDPR is a continuous process. Documentation must be kept up to date, retention periods monitored, personal data corrected or updated where necessary, and the effectiveness of the required (security) measures evaluated and adjusted where needed.

Interviews with the Data Protection Officers (DPOs) of seven organisations, operating in criminal investigation, telecommunications, banking and energy, provided insight into the measures these organisations have taken, or still plan to take, to comply with the GDPR. The most significant changes in the privacy legislation compared with the current situation served as the starting point for these interviews. The organisations' approaches are summarised in a generic data-privacy framework based on the People, Process, Governance and Technology model ([Leav65]). This framework captures the good practices of the seven organisations surveyed, and can be used by other organisations as a starting point or frame of reference when setting up their own privacy programme.

Requirements and what is needed to implement them

The main changes compared with the current situation ([EUCO15]), as also indicated by the DPOs, are set out below, together with guidance on how organisations can prepare to implement these requirements.

1. Accountability

The GDPR imposes more requirements on an organisation's 'accountability'. In short, accountability means that organisations are demonstrably compliant and can account to regulators and data subjects for the proper, careful and transparent handling of personal data. Organisations will, for example, have to actively maintain a record of processing activities, covering among other things the categories of personal data, data subjects and (possible) recipients, the purposes of processing, retention periods and security measures. An organisation must be able to show this record to the regulator at any time. Other accountability requirements in the GDPR include documenting roles, tasks and responsibilities in writing, and drawing up a written privacy and information-security policy (including a data-breach procedure). To meet the accountability expectations set out in the GDPR, the DPOs indicate that the following aspects are important:

Organisational, technological and physical security measures

Without good security measures, personal data cannot be protected adequately. Organisational measures (for example, training staff, drawing up a security policy, observing retention periods), physical measures (such as locks on cabinets and doors, and a badge system for entry) and technological measures (anonymisation, logical access control such as role-based access controls, encryption, firewalls, etc.) are all of great importance.

Maintaining documentation of the implemented privacy measures

This documentation can be handed over to the authorities to demonstrate that sufficient measures are in place to protect personal data adequately. The organisational, physical and technological measures above must therefore be documented properly, and their effectiveness evaluated periodically.

Making good use of the privacy policy and privacy statement

A privacy policy is an internal policy document on how the organisation handles personal data. It is important that the data-breach procedure and the information-security policy are laid down in writing in this document. It is also good practice to include the retention periods per category of personal data, and the person responsible for each category, in this document.

A privacy statement is a document communicated externally to data subjects, informing them about the processing of their personal data. Its content must meet a number of legal requirements: data subjects must be informed about the purposes of the processing, its legal basis, retention periods, their rights, any disclosures to third parties, and the contact details of the responsible organisation. By using a single privacy statement, an organisation can avoid (an excess of) separate statements and explicit consent requests. The privacy statement must be clear (plain language), compact and complete.

Opt-in/opt-out system

With an adequate opt-in/opt-out system, in which the organisation keeps an overview of its customers' consent for the (additional) processing of personal data, an organisation can demonstrate accountability to the regulator. This register can also support the handling of requests from data subjects to withdraw consent.

2. Privacy by Design and Privacy by Default

The GDPR names Privacy by Design and Privacy by Default as essential requirements. This means that privacy is embedded from the start of every information life cycle and throughout its duration; that is, in every (new) project, programme, system or tool in which personal data are processed.

For Privacy by Design, the following basic principles are important:

  • prevention is better than cure;
  • privacy is the default;
  • data protection and security are integrated into the design;
  • full functionality;
  • end-to-end security;
  • visibility and transparency;
  • respect for the privacy of the data subject: the data subject is central.

The DPOs indicate that an organisation can use the following means to safeguard Privacy by Design/Privacy by Default:

Carrying out Privacy Impact Assessments (PIAs)

Carrying out a PIA before a new project or process starts, or before a new tool is used, ensures that the protection of personal data is addressed at the very start of new initiatives. A PIA identifies the privacy risks and helps select and implement the necessary controls. Carrying out a PIA, and its outcomes, therefore form a good starting point for applying Privacy by Design. Under the GDPR, carrying out a PIA becomes mandatory whenever the intended processing is likely to pose a high risk to privacy (as with profiling). It is advisable, however, not to limit PIAs to these situations: their outcomes help identify and control privacy risks at an early stage. For the mandatory PIAs, the GDPR prescribes what they must contain (a systematic description, a necessity and proportionality test, an impact analysis, and the required controls).

Furthermore, the Dutch government's Toetsmodel PIA Rijksdienst can be used by public-sector bodies (in some cases it is mandatory), and the NOREA template by other organisations. All interviewed DPOs indicated that PIAs are already used within their organisations. They specifically note that a PIA can also be used to increase privacy awareness within the organisation. This does depend, however, on who carries out the PIA and who is involved in it. When PIAs are carried out by internal or external specialists alone, less of it sticks with the employees and managers directly involved. We see that employees become more privacy-aware when those who work directly with personal data and the associated projects/systems/tools are actively involved in carrying out a PIA (for example, through interviews). Their active involvement also increases the likelihood that the risks and working practices are captured correctly and completely.

Opt-in/opt-out for customers

By offering an opt-in/opt-out option, data subjects can choose which information they provide to the organisation. An opt-out must not prevent the customer from using a service of the organisation. As mentioned earlier, it is advisable to record these opt-ins and opt-outs in a system.

Technological security measures

In the context of (achieving) Privacy by Design, Privacy Enhancing Technologies (PETs) are often mentioned. This is the umbrella term for various techniques in information systems that support the protection of personal data, including anonymisation and pseudonymisation techniques, encryption of personal data, and the use of anonymised data in test environments. See [Koor11] for more information on Privacy Enhancing Technologies.

Data minimisation and retention periods

Only the personal data needed to achieve the purpose may be collected, processed and stored. When personal data are no longer needed for that purpose, or to meet a statutory retention period, they must be deleted. It is important to determine these retention periods (as far as possible) in advance for new processing activities. We advise organisations to take stock of the retention periods of the personal data already stored and in use. Organisations are obliged to document all (intended) retention periods.

Classification of personal data

Based on the privacy risk, with the corresponding security levels and applicable legal requirements, an overview can be created of the degree of protection and/or additional legal requirements per category. In our experience, many organisations can (and/or must) distinguish between 'ordinary personal data', 'special categories of personal data' and 'sensitive personal data'. Special categories of personal data (such as race, ethnicity, health data, etc.) are subject to additional legal requirements (such as an extra legal basis) and stricter security requirements. Personal data not designated as special in the GDPR, but sensitive by nature (such as financial information), need extra protection, because the negative impact on data subjects can be severe in the event of a data breach. Organisations must document in writing the categories of personal data they process, and hand this documentation over at the regulator's request.

A data-breach procedure

A data-breach procedure is (also) important for safeguarding Privacy by Design.

Appointing privacy teams

Ideally, privacy teams are appointed with tasks and responsibilities to safeguard privacy within the organisation. Among other things, these teams check that no privacy-sensitive information is left lying around, and they have a mandate to address colleagues directly, which increases employees' privacy awareness.

3. Rights of data subjects

The GDPR extends the rights of data subjects with the right to data portability, and anchors the 'right to be forgotten'. The right to information is also extended: before processing starts, the controller must inform the data subject about more matters than is currently the case (for example, retention periods, transfers of data, etc.). It is important for organisations that these rights can actually be honoured. Because processing personal data is often necessary for organisations to provide services to their customers (the data subjects), the interviewed DPOs do not expect data subjects to invoke the right to be forgotten very often. Its impact therefore appears small.

It should be noted, however, that this can change quickly once data subjects suspect that their personal data are not being handled carefully. The DPOs point out that:

Customers should have no reason to invoke these rights.

Employee awareness of handling and processing personal data carefully, and of the associated risks, is therefore of the utmost importance.

The rights of data subjects should be taken into account at the start of new projects (Privacy by Design).

Implementing the ability to remove personal data from systems again is an example of this.

It is important to know where which data are stored.

For complex organisations, this process can be supported by a data-mapping tool that records which personal data are stored in which system/project/tool. The opt-in/opt-out framework mentioned earlier can also be very important for tracking which customers have given consent for which purposes, or have selectively withdrawn it. In our experience, it is essential to map unstructured data as well; we often see that the fact that most data is unstructured is forgotten or underestimated, and mapping it properly is a challenge for many organisations. Unstructured data includes, for example, personal data exchanged between a data subject and a company's customer service via email, social media, telephone or letter. Mapping all data is important both for meeting the other obligations of the GDPR and for knowing where data breaches can occur.

Data minimisation is a necessity.

Personal data may only be collected and stored if they are necessary to achieve the purpose. When they are no longer needed, they may no longer be collected or stored, and must be deleted or fully anonymised. Data minimisation is seen as a critical factor in managing the rights that data subjects can exercise: the less personal data is stored, the less complex the processes for honouring these rights, the less there is to protect (and back up), the less data quality there is to safeguard, and the smaller the impact if personal data are leaked (for example, through a hack).

4. Data Protection Officer (DPO)

Appointing a DPO becomes mandatory for some organisations (for example, public bodies and organisations that process special categories of personal data on a large scale). We also advise organisations for which a DPO is not mandatory to appoint one, or at least a Privacy Officer, so that the coordination of privacy is assigned within the organisation. The main difference between a DPO and a Privacy Officer is that the DPO is a fully independent function within an organisation. The interviewed DPOs also indicate that a DPO provides a clearer overview of the measures already taken, and still to be taken, to protect privacy adequately, and of how this can be supervised properly. It is important that the DPO receives sufficient resources and support from management to set up and safeguard privacy properly. Without the involvement of senior management, privacy is not at the top of the agenda and often does not get the attention it deserves. At many organisations we see the DPO working together with the Security Officer and/or CISO, so that privacy and information security are well aligned. At large organisations a privacy team is desirable (with the DPO at its head), in which the organisation's main privacy stakeholders take part (for example, someone from legal, customer service, IT, marketing, HR, and other employees who work extensively with personal data). In this way, privacy can be safeguarded throughout the organisation.

A caveat to the above is that we see many organisations struggling to appoint sufficiently qualified staff, and/or assigning privacy to the wrong employees. It is important that the appointed employees have knowledge of privacy, but also that different areas of expertise are covered (legal, project management, technical, etc.). It is therefore advisable to recruit the necessary staff in good time, and to make the organisation's management aware of the importance of investing in adequate staffing.

5. Mandatory data breach notification

Organizations are not always able to prevent a data breach involving (sensitive) personal data. Nor does the GDPR require this; it is the failure to report a breach that can be fined. The mandatory reporting of data breaches to the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) and – depending on the type of breach and its impact – to the data subjects has been part of the Dutch Wbp since 1 January 2016. In this respect, the Netherlands anticipated the arrival of the GDPR, which also includes this notification obligation. The obligation means that organizations must report a data breach to the Dutch Data Protection Authority within 72 hours of its discovery, in line with the European time limit. The interviewed DPOs indicate that the following is of great importance for the breach notification obligation:

A clear overview of the data infrastructure

When a data breach occurs, such an overview ensures that the organization can provide the Dutch Data Protection Authority (and, where applicable, the data subjects) with clear information, such as which personal data have been leaked and what the impact is on the data subjects. This can avoid or at least limit fines. The overview can also be used to limit the (potential) negative impact for data subjects: because it is clear where the data has (possibly) leaked to, targeted control measures can be taken.

Technology

Technological measures, such as anonymization, encryption and access restrictions, can reduce the risk of data breaches. Tools that can detect and remove redundant personal data from systems are another possible solution. This technology is still largely uncharted territory, as the currently known tools barely support it and are still in full development.
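As a sketch of what such a technological measure can look like in practice, the snippet below pseudonymizes a direct identifier with a keyed hash. This is a minimal illustration under stated assumptions, not a complete anonymization solution; the key value and the record fields are invented for the example.

```python
import hmac
import hashlib

# Illustrative secret key; in practice this would live in a key vault,
# stored separately from the pseudonymized data set.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, the keyed variant prevents re-identification by
    anyone who does not hold the key, while records about the same person
    can still be linked to each other.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Example record containing a privacy-sensitive field.
record = {"email": "jan.jansen@example.com", "order_total": 49.95}
safe_record = {**record, "email": pseudonymize(record["email"])}
```

Note that pseudonymized data still counts as personal data under the GDPR as long as the key exists; true anonymization requires that re-identification is no longer possible.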

Data breach response procedure

Responding to data breaches is part of crisis management and in that sense not new. It involves convening a multidisciplinary team to investigate a data breach, mitigate its possible consequences, and report the breach to the Dutch Data Protection Authority within 72 hours of its discovery. It is important to realize that these 72 hours apply to the entire chain, including cases where the breach occurs at a supplier (such as a cloud provider). This subject should therefore be properly covered in the contractual agreements with processors. Communication about data breaches is also of great importance when handling a breach: towards the Dutch Data Protection Authority and the data subjects (where applicable), but also internally towards employees and any suppliers involved.
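The 72-hour window itself is simple to operationalize, for example in an incident-registration tool. The sketch below, with an illustrative discovery timestamp, computes the latest moment at which the supervisory authority must be notified.

```python
from datetime import datetime, timedelta, timezone

# The notification window starts at discovery of the breach, anywhere
# in the chain (including at a processor such as a cloud provider).
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest moment at which the supervisory authority must be notified."""
    return discovered_at + NOTIFICATION_WINDOW

# Illustrative discovery timestamp (timezone-aware, to avoid ambiguity).
discovered = datetime(2017, 6, 27, 9, 30, tzinfo=timezone.utc)
deadline = notification_deadline(discovered)  # 30 June 2017, 09:30 UTC
```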

Detecting data breaches from within

Proactively searching for potential data breaches within the organization by performing security tests is important, among other things to mitigate the impact of a breach (for example by closing the leak immediately where possible, or even by taking systems offline preventively).

Central privacy helpdesk

Reporting data breaches should be as easy as possible for employees. A central privacy helpdesk can serve this purpose: it can be consulted when a (potential) data breach occurs and indicates where the breach can be reported.

6. Additional points of attention for organizations

The measures above show that the DPOs consider it necessary for organizations to implement the right mix of technology and organizational processes to adequately protect personal data. In addition, the overarching points of attention below influence an organization’s privacy approach.

Collaboration with other organizations

Organizations can learn from each other’s mistakes. It is therefore important that they communicate with each other about the privacy measures they have taken and the challenges they are struggling with. Moreover, most organizations depend heavily on other parties for their privacy compliance, for example parties with which they exchange customer or employee data. For that reason, we also recommend establishing sound processor agreements (which is mandatory in any case) and periodically auditing the processors against them.

Mapping out a privacy strategy

Organizations should take their culture into account when setting up a privacy strategy. When drawing up such a strategy, organizations should ask themselves questions such as: “What is expected of me as an organization? What is my ambition: do I want to be the best in the field of privacy, or is complying with laws and regulations sufficient? How aware are my employees of information security and privacy? What is the impact if I do not protect personal data sufficiently?” Regarding that last question, organizations should take stock of which personal data they process, and how much time and resources they want to spend on protecting it, taking into account the risk of a data breach. “Complying with laws and regulations is important, but meeting the customer’s expectations may be even more important. In the end, it is about protecting the customer,” according to one of the DPOs. Concrete examples are not selling or providing personal data to third parties without this having been communicated, or profiling individuals and sending them offers once a product has been sold online. The nature of the organization (which privacy-sensitive data is processed, and to what extent) thus influences how the organization should deal with privacy. Our experience is that organizations increasingly also want to use privacy as a “unique selling point” to distinguish themselves positively from competitors.

Employee awareness

Finally, all DPOs underline the importance of raising awareness among employees for the adequate protection of personal data. Employees are seen as essential when it comes to discovering a data breach, because in most cases they are the first to encounter one. It is therefore very important that employees can recognize a data breach and know how to deal with it. All (potential) data breaches should be reported, for example to the DPO, so that the DPO can follow up and no breach is overlooked. In addition, our experience is that employees are in some cases (unwittingly) the “cause” of data breaches; think of leaving around or losing USB sticks or papers containing sensitive information. We therefore recommend training all employees within an organization: not only the employees who work extensively with personal data, but also reception, catering, security and cleaning staff, who should be instructed to pay close attention and to recognize suspicious or dangerous situations (such as unlocked computers). Awareness goes beyond data breaches alone; employees should be well informed about the privacy measures relevant to them and about other types of privacy incidents. Raising staff privacy awareness can be achieved in various ways.

Organizing events centered on privacy

Through a privacy campaign (education, information via the intranet, posters), organizations can bring privacy to the attention of their employees. One of the DPOs indicated that a privacy campaign explicitly communicated that they would rather have a hundred (potential) data breaches reported too many than one too few, to encourage employees not to be reticent about reporting a possible breach. It is advisable to combine privacy campaigns with other related awareness campaigns, such as security and privacy, integrity and privacy, customer service and privacy, et cetera.

Training (both mandatory e-learning per role and classroom training)

Privacy awareness among employees is very important when collecting data. To implement privacy at the start of the (internal) chain (the start of a project or the use of a tool), the employees at the start of that chain must also be aware of the risks of storing privacy-sensitive information. Training can meet this need, so that employees can make a well-founded decision on which personal data are necessary for the processing purpose. Such training can use recent privacy topics and scandals to bring privacy even more to the fore. It can also be decided that all employees are required to pass an (internal) privacy test.

Playful campaigns

With a playful campaign, for example by handing out USB sticks, an organization can try to get its own employees to leak “privacy-sensitive” information without them realizing it. This creates a great deal of awareness among employees, since people learn by making mistakes. Other examples are phishing campaigns by e-mail and social engineering by telephone.

Privacy question hour

An hour in which the DPO is available for all kinds of privacy questions.

A data privacy framework with a view to the GDPR

The measures mentioned by the various DPOs, whether already implemented or on their wish list, have been analyzed and summarized in a generic data privacy framework. This framework provides insight into the measures organizations should take to protect personal data. Organizations can use it to take the right privacy measures in preparation for the GDPR.

C-2017-3-Koetsier-t01-klein

Table 1. Data privacy framework. [Click on the image for a larger image]

Conclusion: implement measures based on the identified privacy risks

Organizations face the challenge of taking the right measures in preparation for the changes the GDPR brings. To respond adequately, organizations can use the generic data privacy framework as a frame of reference. They should make it concrete and align it with the nature of their own organization and their ambition level, so that the right level of measures is taken based on the privacy risks the organization runs.

The first steps are assigning privacy responsibilities, creating insight into the personal data processed by the various departments, taking stock of third parties so that processor agreements can be concluded where they do not yet exist, and setting up a data breach process. These activities deliver results in the short term and lay a foundation for the further design of privacy processes.


Cyber Threat Intelligence

Interview by Jeroen de Wit

Nowadays, one of the most booming topics in the cybersecurity market is Cyber Threat Intelligence. Jeroen de Wit, Cyber Defense Manager at KPMG and lead of KPMG Threat Management Services in the Netherlands, notes, however, that the term “intelligence” as used in the market has become polluted. To discuss this topic further, he met with Joep Gommers to exchange ideas and thoughts. Their conversation is recorded in this article: it covers Cyber Threat Intelligence itself and its current challenges in the market, and concludes with best practices for organizations to deal adequately with this highly relevant topic.

Joep Gommers is the founder and CEO of EclecticIQ, an applied cyber intelligence technology provider, enabling enterprise security programs and governments to bootstrap a threat intelligence practice.

Prior to EclecticIQ, Joep served as Head of Global Collection and Global Intelligence Operations at Cyber Threat Intelligence market leader iSIGHT Partners (acquired by FireEye). Having worked around the world for large and small firms, Joep has acquired a deep and broad understanding of market demands and the cyber intelligence industry. His vision of security programs is informed by the power of real threats and an ambition to build products that allow a leap forward in an organization’s cyber security efforts.

Joep, to start: what do you define as Cyber Threat Intelligence?

At its core, intelligence is about reducing uncertainty. When uncertainty involves conflict around business objectives, intelligence serves to decrease business risks. Cyber intelligence reduces uncertainty in dealing with threats such as electronic crime, hacktivism, terrorism and espionage.

Reducing this uncertainty, and therefore managing these cyber risks, requires information that cyber adversaries prefer to conceal. Intelligence analysts need to uncover this concealed information using direct and indirect means of collecting and analyzing available information. Intelligence analysts proceed by establishing facts and then developing precise, reliable, and valid inferences for use in decision making. The resulting conclusions and predictions are extremely useful in operational planning for security operations, incident response, vulnerability management, risk management and board-level decision making.

Cyber Threat Intelligence follows the methods of traditional intelligence to focus on operational, tactical and strategic responses to cyber threats.

What trends are you seeing in the market of Cyber Threat Intelligence?

You might indeed say that the Cyber Threat Intelligence market is nowadays somewhat “polluted”. Organizations use the term to denote any type of data they sell in their feeds. This means that one organization offering solely technical indicators of an attack [e.g. hash values and known bad IP addresses] and another which is offering a strategic analysis of hacktivist group Anonymous or the involvement in cyber incidents by the Chinese government both use the term Cyber Threat Intelligence to denote their information.

However, we also see an increase in organizations that are nowadays able to properly define what type of information they are interested in. Furthermore, analysts such as Gartner are also better able to categorize Threat Information Providers based on the actual content they are delivering. As such, comparing these providers has become easier, similar to being able to compare firewalls or IPS/IDS providers, thereby allowing a better matching of providers to your needs. That is, once you have actually defined your needs, which is something that quite a few organizations are still doing wrong or at least insufficiently.

What would be the typical mistakes, or misjudgments, you see occurring in the field?

In general, this would be organizations realizing that Cyber Threat Intelligence is a hot topic, possibly even a required topic to be successful in Cyber Defense, and trying to get into execution before defining what exactly they are going to handle and how.

Dealing with Cyber Threat Intelligence requires the understanding that there is a difference between (1) merely buying intelligence as a product and including it in your procedures and processes by means of a project and/or in an ad hoc manner, adding value only at that moment, versus (2) implementing a process, team and responsibility within your own organization to integrate intelligence into procedures for the different layers of the organization. By merely doing the former, in the absence of a continuous integrated process, threat intelligence is applied on an ad hoc, spur-of-the-moment basis. A snapshot of sorts.

So how should organizations approach Cyber Threat Intelligence in your view?

From our years of experience we have, among other things, developed a Cyber Threat Intelligence Maturity Model[https://www.eclecticiq.com/resources/white-paper-threat-intelligence-maturity-model] which may provide guidance on the aspects that need to be taken into account. Above all, we have distilled the following seven best practices for organizations. Use them to your advantage!

1. Build for stakeholders

Creating business value from threat intelligence relies on the ability to understand the information needs and requirements of key stakeholders in the organization. These stakeholders are ultimately responsible for the deterrence, defeat and prevention of cyber threats. Start by understanding who the key stakeholders are, how and in what tone they prefer to consume intelligence, and what key intelligence requirements they need answered.

2. Drive urgency of organizational awareness of cyber threats

The potential application of threat intelligence spans across a wide range of operational, tactical and strategic issues that require both immediate action and long-term planning. Stakeholders have to be aware of the scope of threat intelligence, and how it can help them to control their exposure to the changing threat landscape. Successfully implementing a threat management capability requires buy-in by decision makers, and their appetite to invest will be proportional to how well internal stakeholders understand the value of threat intelligence.

3. Achieve organizational buy-in

All stakeholders should be comfortable with the plan for threat intelligence, including a shared vision, timing for a phased roll-out, known constraints and the expected measurable results. The key to any successful project is to cultivate an understanding of how much you want to accomplish, at what pace, in what steps and with what business constraints, whether in timing, resources or other factors. Make promises to the organization you can keep. Whether large or small.

4. Establish a Threat Management practice separately from IT Security

A Threat Management practice implements a threat intelligence process. To successfully plan, implement and operate such a practice requires specific intelligence competencies.

Threat intelligence is adjacent and related to IT Security, but it is a distinct competency with clear lines of demarcation. A separate Threat Management practice ensures the availability of the relevant competencies needed to architect, plan and implement threat intelligence processes and procedures, including the acquisition and analysis of threat intelligence feeds. The IT Security and Threat Management teams should work together as a well-balanced, cross-functional team during the roll-out of any changes to existing or new processes and procedures. Otherwise, they should have separate responsibilities.

5. Strengthen capabilities in Analysis and Production

In threat intelligence, analysis and production represent the key enablers in understanding the cyber threat. Threat intelligence best-practices for analysis and production can be established at several levels of maturity. An organization should strive to advance capabilities through each successive level. We go into more detail in our whitepaper “Applying the Threat Intelligence Maturity Model to your organization”.[https://www.eclecticiq.com/resources/white-paper-threat-intelligence-maturity-model]

6. Bootstrap with threat intelligence platform technology

Threat Intelligence Platform (TIP) technologies have emerged to address common challenges in implementing or improving CTI capabilities. A TIP provides an easy way of bootstrapping the core workflows and processes of a successful threat management practice. When selecting a TIP for your organization, ensure that workflow functionality is available. This ensures that your TIP enables the centralization and consolidation of threat intelligence and the subsequent analysis, production, dissemination and integration of intelligence data into security controls, orchestration and other key processes.

7. Integrate technical indicators into security controls

Organizations commonly use technical indicators associated with intelligence to improve the detection, prevention and response capabilities of security controls. This approach improves response times for threat detection and remediation.
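As a minimal sketch of this kind of integration, the snippet below checks fictitious firewall log entries against a small indicator set. The field names, indicator values and feed format are assumptions for illustration, not any specific product’s API.

```python
# Example indicator set, as it might be extracted from a threat
# intelligence feed (values are documentation examples, not real IOCs).
bad_ips = {"203.0.113.7", "198.51.100.23"}          # known-bad IP addresses
bad_hashes = {"44d88612fea8a8f36de82e1278abb02f"}   # e.g. MD5 of known malware

# Fictitious firewall log entries.
log_entries = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "file_md5": None},
    {"src_ip": "10.0.0.9", "dst_ip": "93.184.216.34", "file_md5": None},
]

def match_indicators(entry: dict) -> list:
    """Return the indicator types that a log entry matches, if any."""
    hits = []
    if entry["dst_ip"] in bad_ips:
        hits.append("known-bad destination IP")
    if entry.get("file_md5") in bad_hashes:
        hits.append("known-bad file hash")
    return hits

# Entries matching at least one indicator become alerts for follow-up.
alerts = [(e, match_indicators(e)) for e in log_entries if match_indicators(e)]
```

In practice this matching happens inside the security controls themselves (firewall blocklists, IDS signatures, SIEM correlation rules), with the TIP pushing updated indicators to them.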

Do you have any closing words on how organizations should go about implementing these best practices?

Something we have seen work on numerous occasions is to set clear and realistic goals, whereby you neither expect to grow from maturity level 1 to level 5 within a year, nor create too much of an imbalance between the different axes of our maturity model.[https://www.eclecticiq.com/resources/white-paper-threat-intelligence-maturity-model]

Take one step at a time, growing your threat intelligence practice in a manner visible to all stakeholders. Do so by having clear, predefined goals and results which will assist the entire organization on this journey, such that they are able to effectively process the intelligence you provide and thereby avoid disappointment in the result.

Cyber Security: A Paradigm Shift in IT Auditing

With the rapid increase of cyber crime, companies are regularly being compromised by hackers. In many cases, the goal is to extract value (money, information, etc.) from the company, or to damage the company and disrupt its business processes. These cyber security incidents not only impact the business, but also the financial auditor. After all, the financial auditor verifies the veracity of the financial figures as presented in the annual report. This article is intended to help both financial auditors and IT auditors take account of relevant cyber security risks and determine their impact on the financial statements. “Cyber in the Audit” provides a framework and guidance for a structured approach and risk-based decision making for assurance.

Introduction

Cyber crime grows stronger every year. This is clearly demonstrated by the increase in cyber security incidents, for example the increasing occurrence of ransomware. In addition, we see a further maturing and professionalization of cyber criminals, as illustrated by the emergence of the cyber-crime-as-a-service business models they use. In 2013 the global costs associated with cyber crime were around 100 billion dollars, increasing to around 400 billion dollars in 2015. Continuing to rise steeply, the cyber crime cost prediction for 2017 is 1 trillion dollars, increasing to a staggering 6 trillion dollars globally in 2021 ([CybV16]). This is serious.

At board level, this trend does not go unnoticed. Accordingly, we see that company boards include cyber security risks in their top five most important business risks ([MTRe16]). After all, most companies are completely dependent on a continuously and properly operating IT environment. This not only applies to the availability of the IT environment, but also to the confidentiality of sensitive data (e.g. intellectual property and data privacy) and the reliability of the (financial) data. A disruption of the confidentiality, integrity or availability of digital data has an increasing impact on the performance and operating income of the business. This is not limited to classic office automation; automated production facilities (the Industrial Internet of Things, such as SCADA environments) and consumer devices (e.g. in healthcare and automotive) need consideration as well. It is estimated that a total of 6.4 billion devices will be connected by the end of 2016 ([Gart15]). That almost equals the number of people on this planet. Because of this “hyper connectivity” trend, the traditional IT environment of companies is stretched further and further into the public internet through automated supply chains and sourcing partners. This makes adequately controlling the IT environment and its data inherently complex.

The Relevance of Cyber Security Risks for Financial Auditors

Cyber security risks not only impact the business, but also impact the financial auditor. After all, the financial auditor verifies the veracity of the financial figures as presented in the annual report. Just like the company itself, the financial auditor strongly relies on the continuity and reliability of automated data processing. Unsurprisingly, this has been part of the Dutch Civil Code (2:393.4) for decades: “At least he (the auditor) shall make mention of his findings about the reliability and continuity of computerized data processing.” ([DCC])

Traditionally, the financial auditor relies on the testing of so-called General IT Controls (GITCs). To thoroughly understand what IT auditors are actually testing, an example is provided in Figure 1 that illustrates the user access management process (ITIL in this example).

C-2016-3-Veen-01-klein

Figure 1. User access management process example (ITIL). [Click on the image for a larger image]

In Figure 1 one can identify the different roles in the access management process which execute IT controls, for example when a person requests access to the data in the IT environment. When conducting an IT audit, an IT auditor tests the controls in this process to determine their effectiveness in design and operation. If these controls are operating effectively, a financial auditor acquires additional reasonable assurance (on top of their own control testing in the financial processes) that the integrity of financial data is ensured.

However, here is the flip side: if we take a look at the approach that a cybercriminal would take to acquire access to the data in the IT environment, this is a very different process (see Figure 2).

C-2016-3-Veen-02-klein

Figure 2. Access management process of a cyber criminal. [Click on the image for a larger image]

It is clear that there are no controls in the process shown above. Without any approval, registration or verification, a cyber criminal can acquire access to all IT applications and (financial) data in the IT environment. In fact, the cyber criminal bypasses all internal control measures implemented in the IT applications and IT infrastructure. In addition, a cyber criminal will cover (erase) their tracks to avoid detection by, for example, audit logging & monitoring activities.

Financial auditors rely on the integrity of the data a cyber criminal can change. What does that mean for the integrity of financial data in this situation?

Acknowledging this risk, the PCAOB issued guidance on cyber security risks last year. Recently, the Netherlands Institute of Chartered Accountants (NBA) published a public management letter underpinning the importance of considering this risk when performing financial statement audits. Furthermore, regulators such as the Dutch Central Bank (DNB) in the financial sector are increasingly focusing on cyber security risks in their sectors.

In 2014, the AICPA (American Institute of Certified Public Accountants) issued CAQ Alert #2014-3 addressing the cyber security topic in the context of the external audit ([CAQ14], [AICP14]). Unfortunately, no framework or practical approach was given. In addition, the AICPA wrongly depicts the “typical access path to systems” and translates this into the incorrect conclusion that the order of focus should be from the application down to the database and operating system, leaving out the network (perimeter). Instead, the auditor should consider the IT objects on the access path from an IT user (employee, hacker, etc.) to the data.

Just one month ago, the AICPA started the “Cybersecurity Initiative” to develop criteria that will give management the ability to consistently describe its cyber risk management program, along with related guidance enabling CPA professionals to provide independent assurance on the effectiveness of the program’s design via a report designed to meet the needs of a variety of potential users ([Tysi16], [Whit16], [AICP16]). At the time of writing, the two criteria documents are still in draft. The proposal comes with an extensive list of cyber security related controls, which is not efficiently tailored for an FSA.

At the same time, the IFAC (International Federation of Accountants) states that the effect of laws and regulations on financial statements varies considerably, and that the risk of fines for non-compliance (NOCLAR) is increasing. Non-compliance with laws and regulations may result in fines, litigation or other consequences for the company that may have a material effect on the financial statements ([IFAC16a], [IFAC16b]).

Such laws and regulations are proposed by the EU and implemented by the EU member states, strengthening Europe’s cyber resilience as well. In 2013 the Commission put forward a proposal for a Directive concerning measures to ensure a high common level of network and information security across the Union. The Directive on the security of network and information systems (the NIS Directive) was adopted by the European Parliament on 6 July 2016. The NIS Directive provides legal measures to boost the overall level of cyber security in the EU ([EuCo16]).

The example as described and the current developments in the audit, legal and regulatory domains all concur in addressing cyber security risks as a major concern. As such, a financial auditor needs to consider relevant cyber security risks when conducting a financial statement audit (FSA). In addition to the traditional testing of GITCs, a financial auditor needs to assess the flip side of GITCs as well: the likelihood that these controls are bypassed altogether.

Now, how do we address cyber security risks in the audit in a practical way? This article describes a practical approach called “Cyber in the Audit” (CitA). The approach describes the different activities to carry out, provides guidance for test activities, and explains how to deal with companies that are already compromised. Finally, we address the impact of cyber security findings on the FSA.

A Practical Approach to Cyber in the Audit

When incorporating Cyber in the Audit activities in the traditional IT audit it is important to align these new activities as much as possible with the existing approach. This section explains the position of CitA in relation to other FSA activities and the CitA process.

Position of Cyber in the Audit

As shown in Figure 3, the IT audit supports the financial audit by testing the automated key controls. Likewise, CitA supports the IT audit, by testing the cyber security measures which prevent/detect the bypassing of the IT application and infrastructure controls.

C-2016-3-Veen-03

Figure 3. Position of Cyber in the Audit.

The CitA testing is an extension of the regular GITC testing and uses the same approach when it comes to control testing. The difference mainly lies in the topics to address and the use of technical deep dives for additional fact finding.

The CitA Process Step by Step

The flow chart in Figure 4 provides a workflow to help identify, assess and process cyber security risks in order to determine the impact on your financial statement audit. In each stage of the process, it can be decided to stop further fact finding if enough assurance on the state of cyber security controls is acquired. Each of the phases is further explained in the following sections.


Figure 4. CitA process flow.

Determine the Relevance of CitA for the Financial Statement Audit

As part of the familiar “Understanding of IT” activities, we need to acquire an understanding of how cyber security risks are determined and controlled in the environment by extending this activity with “Understanding of Cyber”. This should not only have a technical focus (e.g. security implemented in IT systems), but also a focus on processes (e.g. response to a cyber security incident) and governance (e.g. who steers, reports on and is responsible for cyber security risks and measures). In addition, the approach is holistic, addressing topics like Legal & Compliance and Human Factors.

This is valuable input for determining the need for further testing of any Cyber IT Controls and/or for performing “deep dive” activities. In addition, this activity helps to identify weaknesses in the holistic cyber defense of the company, which can be reported through the management letter as concerns for business continuity or regulatory fines (e.g. data breach notification).

An overview of such topics is illustrated in Figure 5, which is based on KPMG’s Cyber Maturity Assessment model. Of course, other models can be chosen as well, such as those from ISF or NIST.


Figure 5. KPMG’s Cyber Maturity Model.

The cyber risk profile can be determined using the information from this analysis. Such a risk profile is a combination of the cyber threats the company is facing and its dependency on adequate cyber defense, which can be determined based on the understanding of cyber, but also on “trigger” questions such as those proposed in the NBA public management letter ([NBA16]).

For cyber threats it is important to consider sector-specific cyber threats, a high(er) chance of insider threats, primary revenue generated online, cyber fines and whether the company has already been breached.

For cyber dependency it is important to take into account topics such as the financial audit’s reliance on IT systems, the most important assets (crown jewels), a high level of automation, an integrated supply chain and regulatory compliance.

When these are combined on two axes, the company is “plotted” based on its cyber threats and dependencies, as shown in Figure 6. Such “plotting” has a close relationship with the sector the company is in. Typically, companies in the financial sector share common threats and dependencies (stealing money / integrity), which differ greatly from, for example, the manufacturing sector (sabotage / availability).


Figure 6. Cyber risk profile plotting.

This results in a relevance rating (for example low, medium, and high) based on the cyber dependencies and cyber threats the company is facing. The rating can also be used for further selection of Cyber IT Controls.
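As an illustration, the plotting and rating logic above can be sketched as a tiny function. The numeric scales and the rule of taking the higher of the two axes are our own assumptions for illustration, not part of the formal CitA methodology:

```python
# Illustrative sketch only: scales and combination rule are assumptions,
# not a prescribed part of the CitA approach.

def cyber_relevance(threats: int, dependency: int) -> str:
    """Combine cyber-threat and cyber-dependency ratings (1 = low .. 3 = high)
    into a single relevance rating for the financial statement audit."""
    # A single high axis already raises relevance, hence max() rather than an average.
    score = max(threats, dependency)
    return {1: "low", 2: "medium", 3: "high"}[score]

# A bank: high threats (stealing money) and high dependency (integrity of IT).
print(cyber_relevance(3, 3))  # high
# A small manufacturer: few online threats, moderately automated production.
print(cyber_relevance(1, 2))  # medium
```

The resulting rating can then drive the selection of Cyber IT Controls in the next phase, with more controls selected as relevance increases.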

Testing Cyber Security Measures

The second phase consists of the actual testing of IT control measures specific to cyber security. Based on the cyber risk profile just determined, one or more Cyber IT Controls (CITCs) can be selected for testing. The topics that the CITCs address are cyber security governance, technical hardening and cyber security operations. These three topics cover the protect, detect and respond measures one would expect to be in place, for example: security monitoring, cyber incident response, security awareness and cloud security. The testing of these controls follows exactly the same process as the testing of GITCs and can be seen as an extension of the default GITC set of controls.

If CITCs prove ineffective, security vulnerabilities may be present in the IT environment. In that case, we need to select deep dive / fact finding topics applicable to the situation. Such deep dives can be, for example, Red Teaming, SAP security assessments, phishing exercises, SIEM reviews and cloud security assessments, linked to the tested CITCs. The outcome of deep dives further clarifies the impact of CITC deficiencies in terms of actual technical impact: even where CITCs are ineffective, the technical implementation may turn out not to contain security vulnerabilities after all.

Breach Investigation

In the last phase, it is important to check whether the company is aware of any cyber security breach that occurred during the financial year. If the company is unable to provide evidence for this, tooling can help to make this determination.

If the company is already aware that its IT environment has been hacked (or that a hack is ongoing), or if this is discovered during the fact finding activities, the following steps are a guide to help determine the impact:

  1. Determine the threat actor. What party/group/person is conducting the hack? Is this a state sponsored advanced persistent threat (APT) or just a “script kiddie”?
    This gives an indication of the magnitude and persistence of the actor.
  2. Determine the actor’s motivation. What is the goal of this hacker (group)? Are they looking to steal money, copy intellectual property / sensitive data (e.g. stock exchange data) or sabotage the production?
    This gives an indication of a possible impairment and points towards, for example, intellectual property (IP), stolen cash/fraud and/or operational sabotage.
  3. Determine the actions taken until now. What had the actor been doing before being discovered? Are there existing logging and monitoring capabilities to determine what actions this actor has already performed? Can the logging be trusted, i.e. has it not been tampered with by this actor?
    This gives an indication of the damage incurred so far, and it can be a source for answering the first two steps.

Consider involving digital forensic experts in the above-mentioned process, to make sure that the correct actions and analyses are performed in such a way that the results are still of value in court.
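To make the three steps concrete, the facts gathered can be captured in a simple record. The structure and field names below are purely illustrative assumptions, not a prescribed reporting format:

```python
from dataclasses import dataclass, field

# Hypothetical record for the three fact-finding steps; field names are
# illustrative only, not part of any formal breach-investigation standard.

@dataclass
class BreachAssessment:
    threat_actor: str                # step 1: who (e.g. "state-sponsored APT", "script kiddie")
    motivation: str                  # step 2: why (stolen cash, IP theft, sabotage)
    observed_actions: list = field(default_factory=list)  # step 3: what, taken from logs
    logs_trustworthy: bool = False   # can the logging itself be relied upon?

    def damage_known(self) -> bool:
        """Damage so far can only be scoped if untampered logs recorded actions."""
        return self.logs_trustworthy and len(self.observed_actions) > 0

incident = BreachAssessment("state-sponsored APT", "IP theft",
                            ["lateral movement", "database export"], True)
print(incident.damage_known())  # True
```

If `damage_known()` is false, the auditor cannot bound the damage from the logs alone, which is exactly the situation in which forensic experts and additional tooling become necessary.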

Determine the Impact of the CitA Findings on the FSA

The results of the CITC testing, deep dives and breach investigation are aggregated to determine potential FSA impact areas. This is fed into the financial audit process in order to determine the financial audit approach and choices regarding substantive testing, can-do/did-do analysis, etc. Table 1 can be used to determine how CitA findings relate to FSA impact categories.


Table 1. CitA Impact category and guidance.

Conclusion

With the ability to bypass all (effective) IT control measures, hackers pose a serious risk to existing accounting and internal control. With the increasing automation of our business processes and digital data becoming the single source of truth, financial auditors need to take these risks into account in relation to the financial statements and annual reporting. Hence, IT auditors need to change their approach and include fact finding in the technical cyber security domain of their auditees.

The “Cyber in the Audit” approach explains the steps to do this, taking into account the relevance of cyber security risks for a company, the existing cyber defense capabilities and their operating effectiveness, and possible breaches in the company’s IT environment. In addition, a mapping of FSA impact categories to CitA findings provides guidance for the financial auditor in translating these findings.

Without wanting to add to the “cyber FUD” (fear, uncertainty and doubt) movement, it is crucial to understand what impact cyber security risks and incidents can have in our hyper connected digital world. Do not fear cyber security – embrace it.

References

[AICP14] AICPA, CAQ Alert #2014-3, March 21, 2014, https://www.aicpa.org/interestareas/centerforauditquality/newsandpublications/caqalerts/2014/downloadabledocuments/caqalert_2014_03.pdf

[AICP16] AICPA, AICPA Cybersecurity Initiative, 2016, http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPACybersecurityInitiative.aspx

[CAQ14] Center for Audit Quality, CAQ Alert #2014-03 – Cybersecurity and the External Audit, March 21, 2014, http://www.thecaq.org/caq-alert-2014-03-cybersecurity-and-external-audit?sfvrsn=2

[CybV16] Cybersecurity Ventures, Cybersecurity Market Report, Q3 2016, http://cybersecurityventures.com/cybersecurity-market-report/

[DCC] Dutch Civil Code, Book 2, Title 2.9, Section 2.9.9 Audit, http://www.dutchcivillaw.com/legislation/dcctitle2299aa.htm#sec299

[EuCo16] European Commission, The Directive on security of network and information systems (NIS Directive), 2016, https://ec.europa.eu/digital-single-market/en/network-and-information-security-nis-directive

[Gart15] Gartner, Gartner Says 6.4 Billion Connected “Things” Will Be in Use in 2016, Up 30 Percent From 2015 (press release), November 10, 2015, http://www.gartner.com/newsroom/id/3165317

[IFAC16a] IFAC, IAASB Amends Standards to Enhance Auditor Focus on Non-Compliance with Laws and Regulations (press release), October 5, 2016, http://www.ifac.org/news-events/2016-10/iaasb-amends-standards-enhance-auditor-focus-non-compliance-laws-and-regulations

[IFAC16b] IFAC, ISA 250 (Revised), Consideration of Laws and Regulations in an Audit of Financial Statements, 2016, http://www.ifac.org/publications-resources/isa-250-revised-consideration-laws-and-regulations-audit-financial-statements

[MTRe16] MT Rendement, Vijf belangrijke bedrijfsrisico’s in beeld, August 24, 2016, https://www.rendement.nl/nieuws/id18328-vijf-belangrijke-bedrijfsrisicos-in-beeld.html

[NBA16] NBA, Van hype naar aanpak – Publieke managementletter over cybersecurity, May 2016, https://www.nba.nl/Documents/Publicaties-downloads/2016/NBA_PML_Cyber_Security_(Mrt16).pdf

[Tysi16] K. Tysiac, New path proposed for CPAs in cyber risk management, Journal of Accountancy, September 19, 2016, http://www.journalofaccountancy.com/news/2016/sep/cyber-risk-management-201615199.html

[Whit16] T. Whitehouse, CAQ: Audit’s role in cyber-security exams, September 15, 2016, https://www.complianceweek.com/blogs/accounting-auditing-update/caq-stumps-for-auditor-role-in-cyber-security-exams

What’s Trending in Cyber Security?

To know when it will start raining, we check the weather app. To find out if there are any traffic jams, we check Google Maps. To find out which companies are buzzing, we look at real-time stock quotes. But where should we go to get a real-time overview of cyber security news, threats and incidents from around the world?

This question is what drove us to develop the KPMG Cyber Trends Index. We explain our vision on cyber intelligence, highlight the currently existing gaps in the availability of comprehensive intelligence and discuss various ways in which we have seen the Cyber Trends Index being used in industry, where the users vary from technical security analysts to C-level executives.

The World of Cyber Security

Although cyber threats are real and their impact can be devastating, the media frequently portray a gloomy picture of cyber security, creating a culture of excessive fear. Yet the reality is that cyber criminals are not unbeatable masterminds, and while they can inflict serious harm on your business, you can take action to defend yourself against them ([Barl14]). You will never achieve 100% security, but by treating cyber security as a “business as usual” risk, your organization can be prepared to combat cybercrime. With relevant, comprehensive and timely cyber security intelligence, you will be able to achieve the goal of safeguarding the organization and making risk-based investments.

For many, however, such intelligence is not always easy to obtain. The world of cyber security tends to be cryptic due to its specialist character and technical jargon. In addition, it is difficult to distinguish between primary, pressing issues and secondary, less severe occurrences ([Herm14]). Nevertheless, this cannot be an excuse to devolve the issue to specialist professionals. The board, management and employees must become more aware of cyber security risk, and their input and effort are needed to sustain the future of an organization. For that, they must know what is cooking in the cyber kitchen – in understandable language, graphics and interpretation.

“War is ninety percent information”

Napoleon Bonaparte

The famous French general did not even live in the information age, and yet he recognized that most of his military success came from having the right information ([Hadn10]). When you are battling for a cyber-secure environment, obtaining real-time information can be of similar importance to your business. As management, you want to know what is happening around you. A breach at the neighbors could indicate that you may be next – while an upcoming technology could solve one of your long-lasting security problems or indicate another new threat. As a security specialist, you want to stay up-to-date on new vulnerabilities or events that you want to act on.

In our vision, cyber intelligence and the insight that it brings are at the heart of Cyber Security 2.0. The vital need of every organization for real-time information about threats, incidents and trends is what drove us to develop the KPMG Cyber Trends Index (hereafter: CTI), and to make it freely available at https://cyber.kpmg.com.

The CTI provides this real-time overview of buzzing cyber news, trends, threats and incidents from around the world. One look at the Cyber Trends Index and you will be up-to-date on the latest information security developments, incidents and emerging threats. By providing actionable intelligence, the CTI allows you to take action on the latest threats, and make optimal use of new technologies and trends in the market. Having real-time threat information can make the difference in taking preventive actions when a new threat is emerging.

Using the Cyber Trends Index

We have seen various ways in which the Cyber Trends Index is being used in industry, where the users vary from technical security analysts to C-level executives.

At a large IT Service Provider, the CTI is used as a tool to update management and senior staff on developments within the cyber security space. A weekly to biweekly look is taken, primarily at the graphical representation of trends and threats. Because of the high degree of news correlation and the visual trend information, this suits the organization’s need for abstract yet informative intelligence. The trend lines and points of view are the most prominently used features – giving an overview and interpretation of recent news.

“I visit the CTI pages frequently and recommend it in my constituency. Although I consult multiple resources each day, this dashboard stands out for its user friendliness, speed and accessibility for non-security professionals thanks to the simple yet adequate classification.”

Chief Security Officer of large IT Service Provider

In several Security Operations Centers (SOCs), the Cyber Trends Index has been spotted on one of the screens, next to news sources like CNN or Tweakers (Tweakers.net is an online news outlet focused on technology). They take frequent looks at the CTI, and mainly focus on the news feeds. Here, the timeliness of the news is the most used feature of the site. In addition, the possibility to read the full news article enables the SOC analyst to immediately dive into the details of any newly published vulnerability.

Within KPMG, the Cyber Trends Index is used to create overviews of recent developments in various industries. In order to be able to provide our clients with an up-to-date overview of risks and incidents that are specific to their sector, the CTI sector filtering feature is used.

While we recognize that the tool serves various audiences with different purposes, we do note that the desired level of abstraction can differ from one organization or person to another. Obviously, customized threat intelligence with a personal dashboard provides a closer fit to individual needs than a publicly available website can offer. Nonetheless, for many organizations the CTI already provides a reasonable degree of relevance. Only once the value of abstract – but real-time – threat intelligence has surfaced should an investment in custom tooling be considered.


Cyber Security 2.0

As we noted earlier, Cyber Security 2.0 is the world of intelligence-based cyber risk management. It is the world in which adequate management of cyber security risks not only helps in being in control, but also induces strategic advantage. In this world, technology is a key factor – and innovation in technology grows faster than one can imagine. The Netherlands has even become one of Europe’s leading technology hubs ([Egus15]). To harness this power, KPMG in the Netherlands is creating ecosystems with start-ups and young tech companies in order to move the industry forward with intelligence-based solutions.

The CTI has been developed together with the Dutch technology firm Owlin, a young company aimed at developing real-time data analysis solutions. The combination of KPMG’s cyber security insights and Owlin’s proprietary algorithms yields a tool that continuously tracks and scores news patterns from public sources (such as news sites, press releases, reports, magazines, forums, etc.) and presents them in an understandable manner.


Technology also induces an always-on mentality, where everyone wants to be up-to-date anywhere and anytime. The newly released Cyber News and Trends app (iOS/Android), the mobile version of the Cyber Trends Index, contains an additional notification feature used by those who want a real-time warning in case of a disruptive event. The Cyber Security 2.0 world, where threat intelligence is available cross-platform and provided by dedicated algorithms, is taking shape.

Conclusion

Technology will change as the (cyber) world keeps on developing: what is protected today might not be secure tomorrow. Therefore, it is essential to stay up-to-date with the latest cyber news, threats and incidents from around the world and learn how to embrace and act upon them in your daily operations. With this step into threat intelligence tooling, we are unlocking the potential of big data analysis on news articles for comprehensive cyber insights. By means of the openly accessible Cyber Trends Index, we contribute to making the world of cyber security more tangible and easier to understand – and we provide you a way to stay ahead of the game.

References

[Barl14] S. Barlock, T. Buffomantie and F. Rica, Cyber security: it’s not just about technology. KPMG, 2014.

[Egus15] C. Egusa and S. Cohen, The Netherlands: A Look At The World’s High-Tech Startup Capital, TechCrunch, 2015.

[Hadn10] C. Hadnagy, Social Engineering and Nonverbal Behavior Set, Wiley, 2010.

[Herm14] J. Hermans, Cyber security, a theme for the boardroom, KPMG, 2014.

SAP Landscape Security: Three Common Misconceptions

SAP is an attractive target for cyber attacks for both malicious insiders and external actors, aiming to attack critical business processes that are facilitated by SAP. Threats to SAP systems go well beyond rogue privileged SAP users, but extend to insiders within the company, corporate spies, cyber criminals and hostile intelligence services or nation states. Given such high-profile threats and a wealth of identified SAP vulnerabilities, it is clear that an integral and thorough approach is desirable to adequately secure your SAP landscape. However, there are still many misconceptions around SAP Landscape protection.

Introduction

The most critical industrial, financial and core infrastructure systems in the world are controlled by SAP systems. SAP integrates and connects all aspects of your business processes and thereby often stores high value data such as finance, sales and personnel related data.

Traditionally, SAP security was mainly focused on authorization management and the segregation of duties amongst business users. Therefore, many SAP professionals refer to the term “security” as the process of creating and managing roles and profiles to restrict user activities over business information. Of course, these controls are important to the overall level of security within an SAP landscape, but this picture of SAP security is very limited. There are several other security threats that are often not properly addressed, also considering that SAP keeps on adding new functionality such as their future application platform HANA ([Scho15]). Additionally, security vulnerabilities in the technological components of existing SAP infrastructures and inadequate configurations are often not identified by our customers.

As outlined above, these high-profile threats and the wealth of identified SAP vulnerabilities make clear that an integral and thorough approach is needed to adequately secure your SAP landscape. However, many misconceptions around SAP landscape protection persist. We will discuss the three most common ones.

Common Misconceptions in Protecting the SAP Landscape

Misconception 1: Protection is about systems aka “Securing the systems that actually store the critical data is sufficient”

Many organizations perceive security to be about systems. They reason that if the systems that actually store the data are secured, the risk of unauthorized data modification or confidential information leakage is minimal. A good example of this behavior can be observed in the scope of many financial statement audits: cyber security firms are usually asked to only assess the adequacy and security of (SAP) systems and related databases that actually process and store financially relevant data. However, to quote SAP itself: “The pace of business is as fast as it has ever been, and it is only continuing to accelerate. To compete and win in today’s market reality, companies must fundamentally transform their business models, processes and the IT operations that support both” ([SAP16b]). Therefore, SAP either is or will be at the heart of an organization’s business and its most critical processes. As such, many non-SAP IT systems, employees and third parties need to have access, creating a myriad of opportunities to misuse any of these logical access paths. Have you ever thought about how many interfaces and connections a single SAP instance could have?

It is not about systems; it is about protecting the chain. What is the use of protecting the system that actually holds the data while other, connected systems that can access that data are insecure? In our experience, it is especially the security of the SAP Solution Manager that is often overlooked, as this particular SAP system can act as a gateway to any other connected SAP system, including the most critical ones. During our technical assessments, we often see that the Solution Manager can be easily compromised, thereby directly impairing the security of the systems that actually store the data.

A root cause of this problem is isolated thinking. Traditionally, an important distinction can be observed in administrating and securing SAP systems: the distinction between the SAP Basis team and the SAP Security team. In a typical organization that runs SAP, the Basis team is responsible for keeping the systems and interfaces running, whereas the Security team deals with access authorizations and Governance, Risk & Compliance (GRC) in the application itself. As a result, neither of the two teams takes responsibility for the security of the operating system, network or database, nor does either focus on securing the chain. Someone needs to take responsibility for connecting all the separate worlds involved in securing the chain and get the specialists thinking about the actual risks in the landscape.

Misconception 2: Security is about compliance, or: “Patch frequently and implement the SAP security baseline to protect against all attacks”

Mid-2014, SAP finally released its security baseline template – a document it should have developed long before, just like any other large software company or IT vendor. The document is comprehensive and covers almost all aspects of securing an SAP landscape; the current version (1.9) has 149 pages ([SAP16a]). However, we have not seen a single organization that could implement the entire baseline and have its SAP infrastructure comply with the described controls. The same holds true for the security update process. Again, SAP has released a comprehensive document describing how to apply patches and individual security notes. But if you need 93 slides to explain a patching process, you cannot expect all organizations to fully implement such a process, let alone operate it effectively.

Apart from this, no organization can keep up with the pace at which new (security) patches are released, averaging about forty per month ([SAP15]). SAP recognized this as well: if you are not installing each and every individual note, it recommends at least installing the support packages frequently, as these combine multiple critical (security) patches. However, since component release 610 in 2001, SAP has released 11 other major components and a staggering 287 support package levels. Installing a support package or upgrading to a newer component release often requires thorough testing and days of downtime, which can have a major impact on business continuity. SAP’s patch release process simply overwhelms the support teams, allowing critical security holes to remain or be created.

But even if you are able to keep up with the patch frequency, and even if your SAP landscape complies with the baseline, you will still not be protected against one of the most common and prevalent attack types: impersonation ([Micr14]). During an impersonation attack, (remote) adversaries trick end users into clicking on a link or installing infected software. Once the computer of the end user is infected, the adversary can remotely control the system, often without being noticed. If the infected end user has high privileges in the SAP system (many organizations have implemented Single Sign-On for SAP; otherwise the adversary just needs to install a keylogger and wait for the victim to log in to the SAP GUI), the adversary can modify or obtain unauthorized access to critical data. Even if the infected end user only has limited privileges, a successful attack can still be launched. With information gathered online, the adversary can identify which users within an organization have specific access rights in the SAP system. For example, to bypass the segregation of duties and modify a bank account number, both the requester and the approver need to be identified. Spear phishing attacks are then typically used to impersonate both the requester and the approver and have the malicious request approved without the victims’ consent. Attack paths such as these are extremely prevalent in practice: because of their valuable data, SAP systems are subject to highly targeted attacks, against which no tenable compliance rules can be created.

Misconception 3: Security and vulnerability monitoring is the holy grail aka “Let’s collect all logs and find attackers”

Some organizations have recognized that threats at the infrastructure and application levels are becoming more critical and may provide attackers with unauthorized access to their most valuable data and business processes. Therefore, they consider it imperative to develop a broad, enterprise-wide view on SAP security, including security and threat monitoring of their SAP landscape. While we encourage this development, we have seen many organizations struggle with implementing a good monitoring strategy and corresponding threat management. For example, the strategy “collect all logs first and then determine which abuse cases to monitor” hardly ever works. Far too often, organizations end up with many false positive alerts, queries to the SIEM system that take ages, and standard, vendor-provided use cases that do not detect any sophisticated or targeted attacks. You still end up with your data stolen or your systems out of business if you are hit by a carefully planned targeted attack. Monitoring should be implemented based on behavior and scenarios: do not try to detect the use of exploits in your SAP environment, but focus on attack vectors that revolve around impersonation. Many of the prevalent attacks are not sophisticated at all; attackers focus on exploiting design weaknesses, as these are considered far more reliable (and cheaper) attack vectors than exploiting vulnerabilities ([Abra16]).
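As a sketch of such a scenario-based rule, consider the impersonation scenario from the previous misconception: the same user or the same workstation both requesting and approving a bank account change. The event format and field names below are assumptions for illustration only, not the log format of any particular SIEM or SAP component:

```python
# Hypothetical scenario-based detection rule: flag bank-account changes where
# one user or one workstation performed both the request and the approval,
# i.e. the segregation-of-duties bypass described above. Event fields are
# illustrative assumptions, not a real SAP or SIEM log schema.

def flag_sod_bypass(events):
    """Return the change IDs whose request and approval came from a single
    user or a single workstation."""
    by_change = {}
    for e in events:
        by_change.setdefault(e["change_id"], []).append(e)
    alerts = []
    for change_id, evs in by_change.items():
        users = {e["user"] for e in evs}
        hosts = {e["host"] for e in evs}
        if len(evs) >= 2 and (len(users) == 1 or len(hosts) == 1):
            alerts.append(change_id)
    return alerts

events = [
    {"change_id": "C1", "action": "request", "user": "alice", "host": "PC-7"},
    {"change_id": "C1", "action": "approve", "user": "bob",   "host": "PC-7"},  # same workstation
    {"change_id": "C2", "action": "request", "user": "carol", "host": "PC-2"},
    {"change_id": "C2", "action": "approve", "user": "dave",  "host": "PC-9"},
]
print(flag_sod_bypass(events))  # ['C1']
```

A rule like this targets attacker behavior rather than exploit signatures: it fires on the compromised-workstation pattern regardless of which vulnerability or phishing lure got the attacker there.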

And even if you have good monitoring in place, it still acts as a signaling function: only very few monitoring tools will prevent attacks. It is therefore the balance between preventive, detective and responsive measures that organizations should seek. Why spend millions on a detection system if you do not have a communication plan ready?

Conclusion: Improving the Digital Resilience of Your SAP Landscape

The three misconceptions above share a common element: SAP security is a complicated process that cannot be covered by focusing on individual aspects or on compliance alone. Although these individual aspects are an attractive way of limiting scope and obtaining a sense of control, on their own they only yield a false sense of security. Considering the critical nature of the SAP landscape of any organization that has one, these approaches are simply not sufficient.

Securing the SAP landscape can only be done by taking an attacker’s perspective. What, exactly, is the organization trying to defend itself against? What type of data is so sensitive that it may not be leaked, and which business processes must be protected from fraud? Which internal and external actors present these threats, and what methods do they have at their disposal? With these questions in hand, one may draft a scenario-based approach in which an SAP landscape is assessed from the attacker’s perspective, answering the question whether the cyber incidents the organization is most afraid of can actually happen in practice.

References

[Abra16] J. Abraham, How to Dramatically Improve Corporate IT Security without Spending Millions, Praetorian, July 2016.

[Micr14] Microsoft, Mitigating Pass-the-Hash and Other Credential Theft, version 2, 2014.

[SAP15] SAP, Security Patch Process. Implementing SAP Security Notes: Tools and Best Practices, November 2015.

[SAP16a] SAP, Security Baseline template version 1.9, August 2016.

[SAP16b] SAP, The Digital Oil and Gas Company, 2016, go.sap.com/solution/industry/oil-gas.html

[Scho15] T. Schouten and J. Stölting, Security Challenges Associated With SAP HANA, Compact 2015/4.

A Journey into the Clouds

Software as a Service (SaaS) solutions have been around for years, and organizations are starting to move to business software through the SaaS model on a larger scale. Based on a business case at one of the largest global organizations, we provide insight into the impact of adopting SaaS on the procurement process. More specifically, we present an approach to how various functions can and should work together to ensure SaaS contracting is done in a compliant and secure manner.

The Old Days of Contracting IT Solutions No Longer Suffice

Once upon a time, the world of IT contracting followed a clear process. A business analyst was assigned to identify the business needs and requirements, and the IT department would function as a bridge between the business and IT vendors. IT vendors were more than pleased to build whatever was needed; the sky was the limit. The business was in a strong position to negotiate, often playing off potential vendors against each other as they stood in line to win the project. Custom-made solutions or customized standard solutions were the standard. This approach to contracting also came at a price, as custom solutions tend to be more expensive than standard solutions (assuming a solution is delivered to the business at all). Pricing of the solution seemed straightforward, since the license fee was negotiated by the procurement team during the contract phase and agreed for a longer period. Scaling down the license fee, however, appeared to be more challenging as time went by.

Naturally, the above description is somewhat exaggerated. We do however believe that there is truth in this approach and that this has resulted in increased business attention for SaaS solutions.

Business Users Start to Believe in SaaS Solutions

The company in our business case, a globally operating organization comparable to companies across various industries, quickly recognized the benefits of using SaaS. SaaS is one of the cloud computing service models (alongside IaaS and PaaS), in which various third-party vendors develop applications and make them available to customers over the Internet. SaaS solutions are typically offered through a subscription-based model in a multi-tenant environment.

There can be different reasons for turning to SaaS; companies themselves often mention cost savings, while users primarily recognize faster innovation, greater business agility and improved collaboration. What users often perceive as a benefit, for example frequent and unnoticeable automatic software updates, becomes a headache for organizations that traditionally used to dictate the timing and scope of updates to their software. Another example would be the collaboration capabilities of certain SaaS solutions, which can provide additional insights into companies by analyzing all the information and data that SaaS users can store and share within applications. While being an excellent information sharing and analysis platform for end users, collaboration SaaS increases the risk of information being “over-shared” – either by end users themselves, or by the SaaS provider that analyzes the data in the first place.

The rise of cloud computing in the consumer domain (think about such players as MS Office 365, Salesforce, Workday) has raised user expectation about the types of services that IT departments deliver and the speed of delivery. An employee’s legitimate desire to start using these tools to improve the quality of their work will often be backed up by a solid business case explaining how the company can benefit from the new SaaS service. Yet, many organizations are still unable to keep up with these expectations, citing various reasons ranging from loss of control over organizational data to different legal jurisdictions for data origin, storage and processing. As a result, deployment of SaaS services is being delayed or rejected, and individual employees and departments choose to bring cloud services into the organization by themselves, circumventing the IT department. This situation is often described by a well-known term in the IT world – Shadow IT.

In order to gain an understanding of the degree of cloud Shadow IT used at the company, meaning the exact number of cloud applications being used by company employees and the associated business risks, a cloud discovery scan was performed by using software from a Cloud Access Security Broker (CASB). The analysis revealed that the actual number of cloud services in use was 5 to 10 times higher than the company estimated. The results served as an additional driver to start developing high-level Information Security requirements and guidance in adopting cloud solutions.
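The core of such a discovery scan can be sketched in a few lines. The simplified example below counts requests to known cloud services in an exported web proxy log; in practice a CASB matches traffic against a catalog of thousands of services with risk ratings. The domain catalog and the CSV log format are illustrative assumptions.

```python
# Hypothetical sketch of a cloud discovery scan over exported web proxy logs.
# A real CASB uses a catalog of thousands of known services; the small
# domain catalog below is illustrative only.
import csv
from collections import Counter
from urllib.parse import urlparse

# Illustrative catalog: domain -> cloud service name
KNOWN_CLOUD_SERVICES = {
    "dropbox.com": "Dropbox",
    "docs.google.com": "Google Docs",
    "wetransfer.com": "WeTransfer",
}

def discover_cloud_services(proxy_log_path):
    """Count requests per known cloud service found in a proxy log.

    Expects a CSV file with a 'url' column, as exported from a web proxy.
    """
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = urlparse(row["url"]).netloc.lower()
            for domain, service in KNOWN_CLOUD_SERVICES.items():
                # Match the domain itself and any of its subdomains
                if host == domain or host.endswith("." + domain):
                    usage[service] += 1
    return usage
```

Comparing the resulting list of services against the set of IT-approved applications gives a first estimate of the cloud shadow IT footprint.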

Shadow IT is Increasing the Need for SaaS Orchestration

In the cloud world, shadow IT is any software that employees acquire directly from cloud service providers, circumventing the internal IT department and not following approved processes to deploy IT services. This can potentially reduce the organization’s level of security and result in the following risks for the business:

  • Data Risks. Valuable business data may reside in cloud environments that are not secured and controlled by the organization’s data governance standards. This can result in sensitive data being shared with unwanted parties or accessed, modified or removed by unauthorized users;
  • Compliance Risks. Business or privacy-sensitive data may be transferred or stored in locations with different laws and regulations, which can result in regulatory and security non-compliance incidents;
  • Assurance Risks. There are many different assurance standards regarding cloud computing. However, there is currently no unified standard, and many cloud service providers do not share enough information and do not allow customers to conduct audits. This creates challenges in obtaining assurance on data and processes.

To mitigate the risk of cloud-based shadow IT, organizations need to stay aware of the full scope of cloud services in use by their employees, which nowadays can be done with the help of next generation firewalls and cloud access security brokers (CASBs).

Business Drives the Need for a Redesign of the Procurement Process

The company in our business case predicted a significant growth in SaaS usage in the coming years – this trend is also supported by their “Cloud First” strategy. This triggered a new initiative within the company of reviewing the older process of contracting and procuring IT services. The old process that was followed by Procurement for onboarding IT solutions was not optimally designed to allow for an efficient and effective way of contracting cloud solutions, more specifically:

  • The process lacked speed (with the average time to contract being 90 days) and agility to deal with the growing demand of SaaS requests from the business.
  • Information Security, Legal and Regulatory risks were not properly reflected in the procurement process.
  • The process aimed at contracting SaaS solutions on the company’s own terms and conditions (T&Cs), whereas the SaaS concept assumes a standard offering with standard (supplier) terms and conditions and limited opportunities for negotiation. The company noticed that many SaaS vendors reject the company’s contractual clauses and instead propose their standard T&Cs.

Together, these issues have often led to the situation where a substantial amount of valuable time and expense was being spent on low risk and low commercial value contracts. As supported by audit opinions, there was a clear need to improve the company’s capability to introduce SaaS in the business.

While Taking a Conscious Risk Based Decision

To address the growing issue with SaaS contracting and procurement, the Procurement team, together with a multi-functional team of specialists, has developed a new uniform SaaS Contracting process for the company. The key idea was to not only renew the overall process during a procedure writing exercise, but also to support it with a practical implementation. This would translate later in the project into a tool that we refer to in this article as the Cloud Adoption Tool.

The company in our business case wanted to enable the acceptance of the Terms & Conditions (T&Cs) of SaaS Providers, as this accelerates execution of SaaS contracts. They would accept T&Cs when specific contract requirements are known, verified and met. As such, the company needed to know when adding a side letter or using a company contract is the only way to mitigate the residual risk of adopting a SaaS solution.

Renewed Procurement Process for SaaS Solutions

The purpose of the new SaaS procurement process was to provide staff with guidance in selecting an appropriate contracting method for a SaaS solution and to provide the requirements that need to be validated in the terms & conditions of the SaaS vendor, demanded as an attachment to the vendor’s T&Cs, or included in the company-specific contract (if such an option is possible with a cloud vendor). Furthermore, the risks could be deemed too large, in which case a SaaS proposal would be rejected irrespective of the contractual approach.

With our support, the company came up with an improved process that focuses on integration between the business (through the representative for the business request), Information Security (through the inclusion of the data-oriented risk assessments), Legal, Compliance and Procurement. Key here for Procurement would be to rely on and reuse as many existing assessments performed by the different functions as possible, instead of designing a completely new assessment. In other words: Procurement staff should not take over any of the responsibilities of the other functions involved, but rather rely on the work already done by those functions.

Summarized, the process included the following steps:

  • defining the starting triggers and the input for the process;
  • performing the analysis and segmentation;
  • performing the requirements checks based on the segmentation outcomes;
  • performing any required follow-up actions and signing the contract.

Detailed guidance and responsibilities have been assigned for the above steps. Supporting their execution would be the key purpose of the Cloud Adoption Tool.

Cloud Adoption Tool

The Cloud Adoption Tool is aimed at consolidating various requirements from different risk assessments within the company, such as a Business Impact Assessment and Legal assessments that will determine a certain profile (or segmentation) for a particular SaaS case. The segmentation will then suggest the contracting approach when moving the company’s information assets to the cloud and a concrete set of contractual requirements to be demanded from a cloud provider prior to signing the contract. These requirements are to be checked against a supplier’s T&Cs, or to be included in a side letter or the company’s own specific contract.

It is important to highlight the following key features:

  • The Tool identifies information security, legal and regulatory risks, as well as details of the SaaS solution in the early stages of the procurement process.
  • The Tool supports an enhanced segmentation model to help determine the contract profile:
    • Operational contract, when the company accepts the supplier’s T&Cs.
    • Tactical contract, when the company accepts the supplier’s T&Cs and requires the signing of a side letter in addition.
    • Strategic contract, when the company wants to sign a company-specific contract.
  • The Tool is based on Subject Matter Expert (SME) support and sign-off. Each SME has provided input to the Tool and supports its aims. This necessity should not be underestimated. With the approach developed, SMEs that would normally be consulted in a procurement process have now given their consent for pre-defined scenarios, assuming the requirements are met in the supplier T&Cs.
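As an illustration of how such a segmentation model might work, the hypothetical sketch below maps risk-assessment outcomes to the three contract profiles. The inputs, scoring scale and thresholds are our own illustrative assumptions, not the company's actual model.

```python
# Hypothetical sketch of segmentation logic behind a tool like the
# Cloud Adoption Tool. Risk inputs (1 = low, 3 = high) and thresholds
# are illustrative assumptions for this article.

def contract_profile(data_sensitivity, business_criticality, regulated_data):
    """Map risk-assessment outcomes to a contract profile.

    Returns 'operational', 'tactical' or 'strategic'.
    """
    if regulated_data or (data_sensitivity == 3 and business_criticality == 3):
        return "strategic"   # company-specific contract required
    if data_sensitivity >= 2 or business_criticality >= 2:
        return "tactical"    # supplier T&Cs plus a signed side letter
    return "operational"     # supplier T&Cs accepted as-is
```

A low-risk collaboration app would land in the operational profile, while any solution processing regulated data would immediately require a company-specific contract.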

Figure 1 represents, at a high level, the SaaS procurement process and results of the Tool.

Figure 1. Simplified representation of the Cloud Adoption Tool.

In one of the largest global companies, which serves as our business case, changing such a fundamental process and way of thinking requires more than a tool. Key elements during the release of the Tool include, amongst others:

  • engaged and informed key stakeholders and support obtained from senior executives within the company;
  • prepared guidance and manuals;
  • issued communications;
  • training provided to the Procurement staff on the SaaS contracting process;
  • an initial trial period of six months;
  • SMEs engaged to review the design, implementation and “run & maintain” phases of the revised process.

In a World That Continuously Matures and Evolves

The world has not stopped turning. SaaS vendors recognize the increased demand from potential professional clients for insight into how their solution is provided. This has resulted in a growth in (specific) certification programs and security products, as well as actual assurance products such as SOC 1 and SOC 2 reports. It no longer suffices for SaaS vendors to “be in control”; they are required to actually “show control” as well. First movers, such as our business case, might cause frustration with the SaaS vendors. Typically, the smaller players are not used to difficult questions around security and compliance, whereas the larger players are starting to employ security officers with strong commercial skills to sell the security of the solution. We do believe that the SaaS market is slowly evolving as well and is moving to a state of maturity when it comes to security and compliance. However, we should be aware of SaaS vendors that try to benefit through dodgy certifications and “audit reports”. But this is not limited to the SaaS market and has been around in security for a long time (e.g. through cheap ISO 27k certificates).

Where the SaaS vendor evolves, the IT department cannot stay behind. IT teams can experience this as a struggle: how do you stay relevant in a model where the business can directly procure an IT solution? Especially where SaaS solutions are used in a professional environment, e.g. to support a business process, the IT department needs to step in and ensure the SaaS solution is properly embedded in its portfolio. Eventually, users will go to the IT Helpdesk if they cannot log in to the SaaS application, or when an interface with a “legacy” system no longer functions. And when the user query concerns shadow IT, the user might not even realize this; to them it is all IT. It is important that in the fast-moving SaaS world, the IT department is capable of quickly adding a SaaS solution to its portfolio. A clear view on architectural principles and minimum requirements is necessary for this. In fact, this should also be part of the overall contracting model: it is not only about security or compliance risks, it is also about the ability to support and operationalize the SaaS solution. This is actually one of the next steps identified in the business case we used as the basis for this article, and it truly combines all relevant functions and teams involved in procuring, implementing and supporting an IT solution. Which happens to be SaaS …

Where Did This Journey Take Us and Where to Go Next?

The straightforward process supported by the Cloud Adoption Tool provides a simple and consistent way of procuring SaaS solutions, in which relevant risks are identified early in the procurement stage and trigger an appropriate contracting approach that reflects the degree of risk.

After running the process and the Cloud Adoption Tool for around nine months, the company is currently evaluating the overall process to continue building on its strengths and address weaknesses. Initially designed to be a simple and straightforward tool that can align different parties within the company (such as Procurement, IT, business partners, Legal, and various subject matter experts) on steps and requirements for SaaS onboarding, it appears to have achieved this goal. During the evaluation workshop various stakeholders noted that the Tool:

  • helps to understand business needs in procuring a particular SaaS solution (scope of use, demand, requirements);
  • simplifies and speeds up the contracting process by enabling preparation of a custom, fit for purpose contract to source SaaS or entering into a contract on the supplier’s terms;
  • increases compliance by identifying the actions required prior to signing the contract and verifying that all key stakeholders are involved in the process;
  • streamlines communication between stakeholders that use the same process and forms from the Tool.

Table 1 provides a summary of problems resolved with introduction of the process and tooling at the company.

Table 1. Cloud contracting process resolved problems.

The SaaS contracting process has so far been focused on risks from various angles. Next steps could include the integration of other specialist areas or functions. Where the Tool in its current form provides requirements to check the Terms & Conditions of the SaaS vendor, in a next phase the Tool could also provide requirements to check the actual solution:

  • IT Architecture to provide requirements for the design of the SaaS solution;
  • the Service and Support function to provide requirements for bringing the SaaS solution into its support processes;
  • business representatives to provide requirements for actual functionalities that need to be provided by the SaaS solution.

Overall, the approach developed together with the company in our business case has proven to be relevant and to add value, while taking a risk-based approach. What has been learned from this case should serve as food for thought for any organization that deals with an increasing push for SaaS solutions while working with traditional (risk) functions.

What Are You Doing to Keep My Data Safe?

Consumers are becoming increasingly savvy and aware of the sensitivity of their data, underscored by numerous high profile data breaches over the past 5–10 years. With this increased awareness has come an implicit expectation that companies with whom they do business and whose services they consume are good stewards of this data and provide adequate protection. At the same time, in many industries there is a growing sense of cyber fatigue (appetite for seemingly perpetual growth in cybersecurity spending with little demonstrable direct return on investment). This article will explore the dichotomy between intrinsic consumer expectations and concerns around protection of their data and corporate investment trends around cybersecurity, with the objective of outlining a risk-based approach to prioritization of cyber risk mitigation.

Introduction

Data is being created at a volume unparalleled in human history. The explosion of new technology and digital media such as smart phones and tablets has led to an exponential rise in the amount of data generated. In fact, IBM has estimated that by 2020 Web sites, smart phones, and the sensors that in part comprise the Internet of Things will generate 44 zettabytes of data ([NYT16]). Because the collection and use of data has become so pervasive, many consumers simply assume that it will be protected from loss, damage or unauthorized exposure (often, even at a level higher than they themselves would protect the data).

This is one of the reasons that data breaches get as much publicity as they do – beyond the sensationalistic news stories, data breaches represent an implicit (and sometimes explicit) “breach” of trust between the business and its customers. Once lost, this trust can be difficult to rebuild and often requires a number of years, many millions of dollars, potentially senior leadership changes, and increased regulatory scrutiny for a period of time.

At the same time, there has been a substantial downgrade in security spending by companies across industries over the past several years: 49% of business respondents in KPMG’s 2016 Consumer Loss Barometer said they did not use capital funds to invest in Cybersecurity during the last 12 months ([KPMG16]).

Because cybersecurity is a risk-based discipline, and because it is very difficult for many security leaders to effectively tie spending back to either a return on investment or a tangible risk mitigation, there is a growing sense of unrest and frustration at the executive level that money spent to reduce cyber risk may be a misplaced investment. As will be demonstrated in this article, though, while there may be opportunities for companies to be more thoughtful about the way they leverage limited cybersecurity resources (spending and people), those that scale back investment in cybersecurity infrastructure, resources, and governance may do so at their own peril.

Evolving Customer Demands and Expectations

Defining the Consumer

  • Connected – Twitter, Facebook, Pinterest, & Snapchat
  • Conscious – socially, ethically, & environmentally aware
  • Empowered – many outlets to express their opinion
  • Individual – expect a personalized experience
  • Vulnerable – more exposed to risk
  • Informed – unlimited information at their fingertips & are constantly looking for more

In today’s connected society, consumers are socially-empowered, tech-savvy, connected, and mobile. Consumers have either adapted their lifestyles to modern technology or were simply born into the digitally connected world. Nowhere is this more apparent than in the realms of social media, mobile and the Internet of Things.

Social Media

Today’s consumer uses social media sites as a means to interact with other people, share pictures and files, consume news, listen to music, and research and purchase products. This interaction with other people lulls consumers into a false sense of security. “Everyone is connected,” so it is a reasonable thing to do.

That said, approximately 64% of consumers reported that they are concerned that their social media platform will be hacked, and consumers also reported that they would switch their social media provider if their social media account was hacked provided that an option existed ([KPMG16]). Yet, 43% of consumers reported accepting friend requests from others who they do not know ([Forr16]). This behavior clearly demonstrates that the consumer does not fully understand the risks associated with this technology.

Mobile

In today’s world there is also a need for consumers to be mobile, which has resulted in the expectation of public Wi-Fi connections in public areas such as airports, restaurants, and hotels. Approximately 68% of consumers are concerned that personal information may be stolen while using a public Wi-Fi network ([KPMG16]). Consumers expect the provider of the public connection to provide a secure, password-protected connection. Nevertheless, with the need to be connected, consumers will connect to unsecured, public Wi-Fi connections. After all, others are connected so it must be okay.

Internet of Things (IoT)

As the IoT market evolves and competition increases, the IoT consumer will be more concerned with cybersecurity. With the rapid speed-to-market of some of these products, consumers need to understand and trust that the IoT companies are securing not only the devices but the infrastructure and that the data that is being collected and analyzed is protected. Consumer trust should be a result of the IoT provider’s ability to secure its ecosystem rather than the result of brand dominance. Consumers reported that 61% would use more IoT devices if they had greater confidence in the IoT ecosystem’s security ([KPMG16]).

Figure 1. IoT core components.

Forgive and Forget?

Over 60% of consumers are concerned that their use of social media, mobile apps, or interconnected products will be hacked, exposing their information. Nevertheless, consumers may be willing to forgive and forget provided companies cover losses promptly, communicate the breach transparently, and demonstrate that proactive steps are being taken to prevent future security breaches ([KPMG16]). According to KPMG ([KPMG16]), while most consumers acutely feel the sting of a data breach, not all consumers are deterred by the notion of the inevitable breaches they face, whether while doing personal banking, using their mobile phone, or shopping:

  • Banking: In the event that a customer’s personal bank disclosed a data loss from a cyber breach and then remediated the problem, two thirds (67%) of banking customers say the time and effort needed to switch banking providers is a significant contributor to their willingness to stay.
  • Mobile: In the event a breach reveals a carrier is sharing data/encryption technologies with the U.S. government about half (49%) would not switch carriers. Additionally, 82% of consumers would not pay moderately more to switch to a provider that guaranteed it would not collect their PII, even if their current provider suffered a breach.
  • Retail: In the event a big box retailer is hacked, compromising personal information, but soon thereafter addresses the security flaws, eight out of 10 surveyed (81%) would still feel comfortable shopping at that store.

This insight and similar research should directly factor into cybersecurity risk modeling and risk-driven decision making by Information Security and business executives.

The Evolution of Corporate Cybersecurity and Onset of Cyber Fatigue

Board members and senior executives are under a constant barrage of information from internal parties (IT and information security leadership), peers, and news sources about the complexity of cybersecurity threats. While this focus is in many cases well-founded, there has undoubtedly been a pendulum shift over the past several years as audit committees, board members, and CEOs seek to avoid becoming the next major breach (and perform appropriate due diligence to help alleviate concerns around personal liability). In fact,

  • 88% of Boards say that their strategic risk register includes a cybersecurity risk category ([GOVU16]).
  • 74% of Audit Committees plan to devote more or significantly more agenda time to cybersecurity, including data privacy and protection of intellectual property. 55% of Audit Committees think more agenda time should be devoted to cybersecurity ([KPMG15a]).
  • 29% of CEOs list cybersecurity as the issue that has the biggest impact on their company today ([KPMG15b]).

Despite the attention cybersecurity risk is receiving right now, many IT and information security leaders still struggle to effectively communicate cybersecurity risks in clear and concise business terms, in a way that is easily digestible by executive management. For example, it is not an uncommon practice for a Chief Information Officer (CIO) to present to the audit committee and provide assurance around the company’s security posture as conveyed through the fact that a new Intrusion Prevention System (IPS) has been purchased and installed, or that the number of systems missing patches has dropped by X%. This disconnect in the ability to effectively assess and articulate cybersecurity risk has resulted in many cases in a growing sense of cyber fatigue.

Though by no means a comprehensive list, several common indicators of cyber fatigue include:

  • Consistent, year-over-year double-digit, compound annual growth in cyber budgets over the last five years with limited results
  • Ever-increasing depth and breadth of executive and board briefings on cyber issues
  • Continual net addition of cyber-related technologies – with few, if any, being retired
  • Frustration with a lack of clear correlation between cybersecurity spending and risk reduction

What Does This Dichotomy Mean for Businesses and Consumers?

The simple fact that such a high percentage of consumers recognize loss or exposure of their sensitive information as an area of concern supports the fact that this topic remains top of mind, even where there may be a disconnect between concern and behavior. At the same time, though spending on cyber initiatives by companies has grown steadily over the past few years, a growing sense of cyber fatigue may result in budgets being slashed indiscriminately, which increases the likelihood of unauthorized exposure of or access to this information by employees, third parties, or external threat actors. While the majority – 73 percent ([Expe15]) – of companies acknowledge that they are likely to experience a data breach, this is not an excuse for neglect or, worse, abandonment.

What Should Business Leaders Do to Stay Ahead of the Curve?

Cybersecurity requires ongoing vigilance and a continual refinement of business operations. Table 1 describes some common mistakes.

Table 1. Common security pitfalls.

Ultimately, information security and business leaders need to focus on a practical, risk-based approach to information security that helps to ensure limited resources (both people and spending) are allocated as efficiently and effectively as possible. There are seven key things that companies should strongly consider when developing or transforming their cybersecurity program into a risk-based, customer-centric model to avoid common mistakes such as those above:

  1. Listen to the voice of the customer. Begin with a customer-centric perspective, which can help to both build trust and rapport with your customer base, and help ensure more balanced security spending.
  2. Understand your data. A risk-based approach to security investment relies on a clear understanding of what data is collected, processed, stored and transmitted internally and to third parties and customers. Furthermore, it is important to maintain this inventory on an ongoing basis based on changes to the business, technical and regulatory environment.
  3. Make measured investments in cyber capabilities based on risk. Many organizations try to apply “one size fits all” solutions, and as a result, often drastically overinvest or underinvest in the areas of highest risk.
  4. Regularly measure the effectiveness of your security investments. Investments in security, as with any other business discipline, must be quantified, tracked, measured and reported to stakeholders.
  5. Develop/align the right cyber risk management model. Thoughtful consideration around a tailored cyber risk management model can substantially streamline and facilitate management of cyber risks.
  6. Continually update your model to reflect emerging threats. The cyber threat landscape is continually changing, with threat actors continually refining their toolkits. It is important that all companies evaluate and develop controls and procedures to proactively address threats specific to their business model, geography, regulatory environment, and technology portfolio.
  7. Build and promote a risk-aligned security organization. Security should be firmly embedded within the organizational culture, and elevated as a strategic area of focus at the executive and board level.

Additional detail about these recommended areas of focus is outlined below:

1. Listen to the voice of the customer

As the KPMG Consumer Loss Barometer has shown, you would be hard pressed to find a customer that would not express some level of concern, anxiety, or anger at the news of their sensitive personal information being exposed. However, in some cases (based on the type of data collected from customers, how the data is used, and whether it is shared with any third‑party partners) the impact and exposure of an incident or breach may be lower for some customers and industry types than others.

As a business leader, you should seek to understand what the true customer impact would likely be in the event of a cyber incident. This can be done through analyst research, expert thought leadership, customer surveys, and industry peer impact for analogous incidents. This analysis should be used to drive a rough ‘order of magnitude’ estimate of how much risk can be tolerated, based on factors such as:

  • An evaluation of how consumers behave based on the way the loss is reported, the loss itself, and the follow-up (credit monitoring, etc.)
  • A comparison to the cost of current cyber program
  • Consideration of overlaps in technology
  • A review across the suppliers of the ecosystem.

Armed with that information, companies can drive more effective decision making, as risk treatment decisions are not made in a vacuum but with appropriate context. Ultimately, cybersecurity risk should be treated like any other business risk, whereby risks are identified, assessed (both within an IT and an organizational context), and treated to a level aligned with organizational risk tolerance.

2. Understand your data

A clear understanding of what customer data is collected, stored, processed, shared, and retained/destroyed is fundamental to the ability to effectively protect that information. This life cycle view helps to ensure end-to-end visibility into the processes and resources that touch sensitive data and ultimately allow for granular application of controls based on determination of identified risks (see Figure 2).

Figure 2. Information Lifecycle Management Stages.

Some key considerations within each life cycle stage are shown in Table 2.

Table 2. Key considerations for each information lifecycle stage.

Collection and maintenance of a holistic inventory of customer data instances and flows (not only in general terms, but down to the detailed system and process level) is no small feat. However, it is a required facet of an effective information categorization and classification framework and is necessary to help ensure comprehensive application of controls in the most efficient and effective way possible. Recommended steps to build and maintain such an inventory are outlined in Figure 3.

Figure 3. Information categorization and classification approach fundamentals.
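To make the idea of a granular, system-level inventory concrete, the sketch below models one entry of such an inventory as a simple record and shows how controls can be targeted from it. The field names, values, and the filtering rule are illustrative assumptions on my part, not a schema prescribed by the article.

```python
from dataclasses import dataclass, field

@dataclass
class DataInventoryEntry:
    """One instance of customer data, tracked across its life cycle."""
    data_element: str        # e.g. "customer e-mail address"
    classification: str      # e.g. "public", "internal", "confidential"
    system: str              # system that stores or processes the data
    lifecycle_stages: list   # applicable stages: collect/store/process/share/destroy
    shared_with: list = field(default_factory=list)  # third parties receiving it
    retention_days: int = 0  # retention period before destruction

def confidential_shared_externally(inventory):
    """Flag entries needing the tightest controls: confidential data shared with third parties."""
    return [e for e in inventory
            if e.classification == "confidential" and e.shared_with]

inventory = [
    DataInventoryEntry("e-mail address", "internal", "CRM", ["collect", "store"]),
    DataInventoryEntry("payment card number", "confidential", "billing",
                       ["collect", "process", "share"], ["payment processor"], 365),
]
print([e.data_element for e in confidential_shared_externally(inventory)])
# ['payment card number']
```

A real inventory would live in a governed repository rather than in code, but the same life-cycle attributes drive which controls apply where.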

3. Make measured investments in cyber capabilities based on risk

It is critical that companies quantify cybersecurity risks. This should be accomplished using a “value at risk” calculation that incorporates breach likelihood and its corresponding business impact. These risks must be viewed through the lens of a cyber threat to business objectives: How does a cyber threat actor interrupt or prevent the achievement of core business goals, such as capitalizing on megatrends, adopting new digital channels, or expanding overseas? Ultimately, this quantification is focused on helping security organizations to better articulate return on security investment (ROSI).

Figure 4. Value at risk calculation.
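As a rough numerical sketch of this calculation, the snippet below multiplies breach likelihood by business impact to get value at risk, then derives a simple ROSI from the risk reduction a control achieves. All figures are invented for illustration and do not come from the article; real models weigh many more factors.

```python
def value_at_risk(breach_likelihood: float, business_impact: float) -> float:
    """Annualized value at risk: probability of a breach times its business impact."""
    return breach_likelihood * business_impact

def rosi(risk_before: float, risk_after: float, solution_cost: float) -> float:
    """Return on security investment: risk reduction net of cost, over cost."""
    return (risk_before - risk_after - solution_cost) / solution_cost

# Hypothetical numbers: a 30% chance of a $4M breach, reduced to 10% by a
# $500k control investment.
before = value_at_risk(0.30, 4_000_000)
after = value_at_risk(0.10, 4_000_000)
print(round(rosi(before, after, 500_000), 2))  # 0.6
```

A positive ROSI (here 0.6, i.e. 60 cents of net risk reduction per dollar spent) supports the investment; a negative one suggests the risk may be cheaper to accept or transfer.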

Companies should consider which assets are most critical to enabling core business objectives and evaluate the cyber threat landscape for risks to these key, crown-jewel assets. The inverse relationship also bears close scrutiny as it illuminates both common, expected risks – those that are observable and manageable – as well as those that occur less frequently – high impact events with growing uncertainty – that test a company’s resilience.

Once risk is quantified, consider linking decision-making to the amount of risk that the enterprise is willing to assume. For those whose brand reputation is fragile and unable to sustain a sizable interruption, decisions will reflect a risk view that places value firmly in a manageable zone of routine, where losses are minimal and predictable. Other companies may be able to assume more elevated risk profiles.

After the company has quantified risk and made decisions about its risk tolerance, it should pursue programs that accommodate these perspectives, modifying existing initiatives while undertaking new ones in an ongoing effort to mitigate vulnerabilities. For example, a company seeking to expand via acquisition may need to focus on building quickly extensible IT services, including security capabilities designed to be consumed across a number of different platforms, mitigating the risk incurred by a new division’s people and technology. Conversely, a company planning a series of divestitures should focus security efforts on identifying sensitive data assets and on the capability to restrict access quickly following the separation.

Take a true, enterprise risk view of cybersecurity (prioritization within the context of other initiatives).

4. Regularly measure the effectiveness of your security investments

Most companies do not fully understand the cost of cybersecurity. It is not that they are unwilling to determine this cost, but rather that the process is fraught with complexities, making it impractical for many to complete the process with sufficient precision. As a result, they are unable to produce an operating model that mitigates risk while optimizing cost.

As with any investment, business leadership should challenge information security leaders to build a business case that clearly articulates and captures the value associated with proposed cyber initiatives. It is important to implement the same structure and rigor within this domain as you would apply to any other functional area within the organization. However, ultimately security is a risk-focused discipline, which means that the true value of information security is its ability to limit risk exposure to a level aligned with organizational risk tolerance.

The true and total security cost includes those elements that are easy to tally, such as hardware and software components – as well as those less tangible elements, such as those tied to the company’s third-party contracts (IT hosting, supply chain services), labor, regulatory compliance, and vendor and supplier management, among others. The latter are far more difficult to uncover and tally, particularly in complex sourcing arrangements. For instance, is a patching service level agreement with an outsourcer a component of the security program? What about the cost incurred by vendors to comply with controls required in third-party risk programs?

It is incumbent upon information security leadership to establish a clear and concise set of metrics to track, measure, and monitor both program effectiveness and improvement. The goal of the information security organization should be to establish transparency and visibility into data protection risks for the executive management team, helping to drive actionable intelligence and decision making in an appropriate context. At the end of the day, information security should be one of many tools in the executive arsenal to mitigate risk to an acceptable level.

5. Develop/align the right cyber risk management model

An effective cyber risk management model is at the heart of an effective risk mitigation strategy. Leading practice models incorporate fundamental cybersecurity practices as well as tailored organizational risk tolerance, all in an effort to maximize cyber investment.

Further, a cyber risk management model should not operate in a vacuum, but rather help to present a consolidated view of the risk factors within that domain, to be consumed and evaluated as part of a broader enterprise risk management framework. This will help to ensure both consistency in the way cyber risks are reported, as well as helping to ensure cyber risk is evaluated in an appropriate broader, organizational context. From this, critical risks (and associated initiatives) rise to the forefront and receive an appropriate level of attention, focus, and investment.

Information security leadership should work to ensure management accepts that risks exist – and will continue to exist. In other words, the goal of a cyber risk management model should not be to eliminate the risk of information loss or exposure, but rather to prioritize limited resources to strike the right balance between investment and risk tolerance. This mindset can also help to ensure a proactive, ongoing approach to risk management, with risks managed and recalibrated periodically based on changes to the customer, business, regulatory, and/or technology landscapes.

Finally, consider your assets in the broader context of your business and its true cost of security services to protect them, allocating resources intelligently – efficiently – based on that analysis, keeping in mind that the allocation will change as your business evolves and grows.

6. Continually update your model to reflect emerging threats

It is critical that information security leaders and business executives stay abreast of emerging threats and trends. Cybersecurity is an elusive target that mandates continual vigilance. At the same time, rest assured that, like fraud, cybersecurity is addressable and manageable. Doing so requires shifting your mindset from “fix, fix, fix” – an entirely reactive process that will never adequately protect your assets – to a systematic, business-focused approach that receives ongoing funding for new capabilities as needs arise. Such a shift moves the focus away from technology spending and repositions cybersecurity as innovation spending, a more practical characterization that facilitates corporate growth and lets security evolve fluidly as business models dictate.

7. Build and promote a risk-aligned security organization

In addition to the systemic changes around identifying, measuring, and managing cyber risks, another important but often overlooked aspect of effective cyber risk management is building and continually developing a risk-aligned culture within the security function, as well as the broader organization. This often entails a transformation that shifts the focus from security projects and activities to risk mitigation initiatives. These transformations are only successful if cybersecurity is elevated as a strategic priority and a top-down focus is established on managing cyber risks through the security program. Any initiative undertaken in the security area needs to be aligned with a risk that is tied to a threat and a crown jewel/business driver. Many organizations take this as an opportunity to perform a skills analysis of their security teams in order to evaluate their readiness to adopt and align with this model.

Conclusion

An evolving, maturing set of customer expectations coupled with an increasingly complex and challenging threat landscape requires a thoughtful, risk-based approach to help ensure an effective and efficient information protection strategy. Rather than dancing to the deafening, consistent drum beat of “fix, fix, fix” and “spend, spend, spend,” the prudent executive will implement a new model that helps maximize the value of security investments – balancing risk acceptance, mitigation, and transfer with the protection of a firm’s assets. It is the difference between a draining, reactive business strategy and an energized, proactive one.

References

[Expe15] Experian, 2015 Second Annual Data Breach Industry Forecast, Experian Information Solutions, Inc., 2015

[Forr16] Forrester, Four Ways Cybercriminals Exploit Social Media: The Weaponization Of Social Media, And How You Can Fight Back, Forrester, 2016.

[GOVU16] GOV.UK, FTSE 350 Cyber Governance Health Check, HM Government, 2016.

[KPMG15a] KPMG, Audit Committee Institute Survey 2015, 2015.

[KPMG15b] KPMG, 2016 KPMG CEO Outlook, 2016.

[KPMG16] KPMG, 2016 Consumer Loss Barometer: Cyber Industry Survey, 2016.

[NYT16] The New York Times, The Data Explosion Makes Storage Tech Exciting, March 2016, http://www.nytimes.com/2016/03/16/technology/the-data-explosion-makes-storage-tech-exciting.html?_r=0

Disclaimer

Some or all of the services described herein may not be permissible for KPMG audit clients and their affiliates. The information contained herein is of a general nature and is not intended to address the circumstances of any particular individual or entity.

Cloud Access Security Monitoring: To Broker or Not To Broker?

When moving to the cloud, enterprises need to manage multiple cloud services that can vary significantly from one to another. This, together with the modern culture of working anywhere, anytime, from any device, introduces multiple security challenges for a company to resolve. The article explores the possibility of deploying Cloud Access Security Brokers (CASBs) to help enterprises stay in control of their information security when using various cloud services.

Introduction

This year many Dutch companies were busy with cloud transformation programs, moving towards the “Cloud First” goal for their future enterprise IT. One of the key challenges enterprises start to face when moving to the cloud is that it is extremely hard to ensure security across a variety of cloud services, each with its own unique settings and security controls, compared to the management of on-premises systems and applications. In addition, the mobility of the modern workforce is higher than ever before: employees can easily access cloud systems and applications off-premises, using personal devices or personal identities not managed by enterprise IT. This new context for enterprise IT – multiple clouds and extreme mobility – makes it hard for companies to keep up with all the security risks it introduces. This article examines the case of Cloud Access Security Brokers (CASBs) – a possible solution for securing multiple cloud services in the operation of a mobile enterprise.

A State of Cloud Security in 2016

A famous Fokke & Sukke cartoon asks “Do you believe in the cloud?” ([RGvT12]). 2016 is finally the year in which many of our clients are not only saying “Yes, we believe there is something”, but are already in the middle of executing their cloud-related programs. Moving to AWS or Azure infrastructure, or shifting to Office 365 for e-mail and collaboration, are no longer unique use cases in the Netherlands. Cloud is transforming from being just one of many projects within IT to serving as the actual context in which enterprise IT exists. A typical company of 2016 already uses IaaS with its VMs running somewhere in the cloud and PaaS for app development and management, and quite possibly its CRM, ERP, e-mail or collaboration software has already been procured as SaaS. The fact that becoming a cloud user only requires a few clicks on the web and completing credit card details accelerates cloud adoption even more, making the move off-premises just a matter of time, money, and an Internet connection.


It is great to be in the middle of this enterprise transformation – when cloud technologies are becoming an essential part of IT programs. Enterprise IT was revolutionized by the Internet, and Cloud (and mobile) technologies continue this move by further liberating the workforce from the office space, and bringing offices to employees homes. Still, without proper awareness and good risk programs, situations endangering the security of organizational information can easily happen.

Consider the following example. Many corporations use cloud storage solutions for their work, for example Microsoft OneDrive or Google Drive. When they want to share documents outside the organization, they invite external users to join their cloud folder, for example in Microsoft OneDrive. An invitation is usually sent to the work e-mail address of the external user with a link to join the cloud folder. Often, due to certain settings in Microsoft, a user who already has a private Microsoft account can access the shared business folder with that account. All he needs is the link; there are no other enforced authentication mechanisms. And here lies a risk for the business: links are easily shared, and how do you find and proactively remove all those private accounts joining a corporate cloud environment?

To address these questions and concerns, you typically see organizations start combining the words “cloud”, “security”, and “risk management”. The problem lies in the number of different cloud services a company needs to manage to ensure the security of its critical assets. Ideally, the shift to these multiple cloud services would be performed in a risk-neutral way, meaning it stays risk-comparable with the previous IT set-up. However, it is hard to ensure security for a variety of cloud services, each with its own unique settings and security controls, compared to the management of on-premises systems. Most of the cloud security risk workshops we facilitated for our clients were aimed at addressing security issues for particular cloud solutions like Office 365, Salesforce, Google for Work, Evernote, etc. While a good exercise in general, such workshops only address a particular cloud service. The strategic question is: what should a sustainable approach to the security of multiple, if not all, cloud solutions in the operation of an enterprise look like?

CASB – A Silver Bullet for Enterprise Cloud Security?

International organizations like ENISA, CSA, NIST and ISO have produced multiple articles and best practices in an attempt to standardize the approach to the security of cloud solutions. Yet key cloud speakers at the 2016 RSA Conference admitted that finding a unified way to ensure the security of cloud services is still under investigation and construction ([Fult16]). One of the key initiatives since 2015 is the Cloud Security Open API – an initiative driven by the Cloud Security Alliance to ensure that in future all cloud services and enterprise security monitoring tools can “talk” using the same APIs ([CSA15]). This will allow standardization of the security of the cloud stack and eliminate the headache of designing unique security controls for different cloud services.

While the Cloud Security Open API remains a work in progress, another approach to cloud security – and, more importantly, an already existing one – was mentioned multiple times during the RSA Conference. Many presentations talked about Cloud Access Security Brokers, or CASBs, in a way that made them sound almost like a silver bullet for cloud security. The term “CASB” was introduced by Gartner several years ago as one of the key ways to control the business risks when moving to cloud ([Gart16]). The CASB market has grown significantly since then and is now blooming with multiple providers offering cloud security services, such as SkyHigh Networks, CloudLock, Elastica, Netskope, Adallom. Some of them have been acquired by IT giants such as Microsoft (Adallom), BlueCoat (Elastica) or Cisco (CloudLock), to name but a few.

If you look at the market acceptance of CASB solutions ([TMR16], [MM15]), North America and Asia-Pacific are the main regions embracing CASBs, compared to lower adoption rates within Europe. With the forecasted growth of the CASB market from USD 3.34 billion in 2015 to USD 7.51 billion by 2020 ([MM15]), the question is why Europe, and the Netherlands in particular, seems to have less interest in acquiring CASBs as a means to control cloud services. In the remainder of this article I will explore the key benefits, drivers and prerequisites for adopting a CASB solution, to highlight the potential of CASBs for enterprise cloud security.

Key Security Features of CASBs

I have created Figure 1 in an attempt to illustrate the full scope of the CASB offering – why, what and how they deliver their services. The CASB business case starts with the technological context in which modern corporations operate. Think about the ways in which employees can access cloud resources nowadays. They can do so on the enterprise network (on-premises), or on any other network (off-premises). They can log in from enterprise-managed devices, such as corporate laptops and mobile devices with MDM installed, or from their private laptops and smartphones. Finally, an employee can use his work or private identity to log in to the cloud service (as in the example at the beginning of this article).

Figure 1. Organizational assets versus CASBs capabilities.

CASBs can help companies enhance security for all these scenarios. CASB software provides multiple security features, ranging from discovering cloud services used by employees, highlighting key risks of such usage, protecting data stored and processed in cloud, providing end user behavior analytics and performing some form of malware and threat prevention. In short, CASBs are monitoring in real time what is going on with the enterprise cloud – its ins and outs. CASBs can deliver on their promises due to their ability to integrate with already existing security tools, by analyzing traffic as a reverse or forward proxy, and by connecting directly to cloud services via their APIs (for more architecture details, refer to Gartner publications [Gart15a] and [Gart15b]).

To summarize the key security features that CASBs can deliver:

  • Cloud visibility. CASBs help enterprises discover all cloud applications used by the enterprise’s employees and the associated business risks. This addresses the issue of “Shadow IT”, or unsanctioned apps within a company. The cloud discovery analysis will show, for example, how many different storage solutions, such as Google Drive, OneDrive, Box, Dropbox, Evernote, etc., are in use by an enterprise’s employees, and what the risk rating of each of those services is. Note that many enterprise on-premises tools, such as Secure Web Gateways (SWGs), can already show where their traffic goes. The advantage of CASBs is that they can also monitor traffic from users who are off-premises, and that CASBs maintain a large database of cloud services assessed on their security maturity that a company can rely on.
  • User behavior analytics. CASBs can provide real-time monitoring of user activities, including high-privileged actions, and alert on or block anomalous behavior (for example, an employee downloading a large volume of data, the same user account being used from different locations within a short period of time, or a user using his work identity to connect to cloud services for private use). While many big SaaS providers also offer DLP-like functionalities, the advantage of CASBs is that rules for user behavior analytics can be set up once and applied across multiple cloud platforms.
  • Data security. CASBs can act as Data Loss Prevention tools with the help of data-centric security policies such as alerting, blocking, encrypting or tokenizing data leaving for the cloud. With respect to encrypting/tokenizing data, a client can choose whether to encrypt all data (less recommended) or only specific files and fields.
  • Malware detection. CASBs can help companies identify and remediate malware across cloud solutions that offer API integration, such as Amazon S3, Office 365 and Google for Work, and by integrating with on-premises or online anti-malware solutions and anti-virus engines. CASBs can also block certain devices and users from accessing cloud solutions, if required, to stop malware spreading.
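To give a feel for the kind of rule behind the user behavior analytics feature above, here is a minimal sketch that flags the same account appearing from different locations within a short time window. The event format, field names and 30-minute threshold are my own assumptions for illustration; a real CASB applies far richer heuristics (geolocation distance, device fingerprints, baselined behavior).

```python
from datetime import datetime, timedelta

def impossible_travel_alerts(events, window=timedelta(minutes=30)):
    """Flag a user seen from two different locations within `window`.

    `events` is an iterable of (timestamp, user, location) tuples.
    Returns a list of (user, previous_location, new_location) alerts.
    """
    alerts = []
    last_seen = {}  # user -> (timestamp, location) of most recent event
    for ts, user, location in sorted(events):
        prev = last_seen.get(user)
        if prev and prev[1] != location and ts - prev[0] <= window:
            alerts.append((user, prev[1], location))
        last_seen[user] = (ts, location)
    return alerts

events = [
    (datetime(2016, 9, 1, 9, 0), "alice", "Amsterdam"),
    (datetime(2016, 9, 1, 9, 10), "alice", "Singapore"),  # 10 minutes later
    (datetime(2016, 9, 1, 9, 5), "bob", "London"),
]
print(impossible_travel_alerts(events))  # [('alice', 'Amsterdam', 'Singapore')]
```

The value of a CASB is that one such rule, once defined, is evaluated against login streams from every connected cloud service rather than per application.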

Practical Take-Aways

KPMG has been working with CASBs in recent years, for example as part of Cloud Security Readiness Assessment projects. Below are some of my key observations from using CASB tools at our clients, outlining what a company that wants to adopt a CASB should consider:

  • Do not just purchase a well-known CASB; clarify the use cases first. Understanding the use cases for which a company plans to deploy a CASB is very important. Is the client considering monitoring user behavior in a specific SaaS or controlling overall traffic? Is just a compliance check needed, with reports generated showing where the corporate data goes? What about access to the cloud from (unmanaged) mobile devices – does the client also want to know about this? The answers can largely affect the choice of CASB provider.
  • Do not rely on default CASB settings; set up the right security policies. CASBs rely on rules or policies to generate security alerts, classify them and push that information to the built-in dashboards or to a client’s SIEM. An example would be sending an alert whenever a file with credit card data is uploaded to the cloud, or whenever a user from a certain country tries to access a cloud service. Many security operations departments lack the staff to keep up with all the alerts, and adding alerts from CASBs only aggravates the situation. Well-tuned, strict policies ensure that the additional alerts bring actionable information and not just more noise.
  • Integrate with the enterprise IAM for maximum benefit. To get the most from CASB functionality, such as the ability to alert on access to cloud services from unusual or prohibited locations, or to prevent access to cloud services from unmanaged devices, it is essential that a company connects its enterprise Identity and Access Management system (either on-premises IAM or IDaaS) to the CASB. Done correctly, this will reduce the risk of unauthorized access to cloud services. For more information on using IAM for cloud solutions, please refer to [Stur11] and [Muru16].
  • Connect to cloud/on-premises SIEMs. Having one source of alerts has proven to be a better and easier way for your employees to monitor and react to cloud anomalies. Many companies already use dashboards and other on-premises security information and event management (SIEM) systems, for example Splunk. Many CASB vendors allow integration with these tools.
  • Streamline users to specific cloud providers. Finally, once a company understands where its traffic comes from and goes to, and its user behavior patterns, it should build upon this knowledge by promoting specific cloud tools (for CRM, collaboration, storage, etc.) to minimize the number of different cloud applications being used for the same purpose. Banning cloud providers, for example Slack for project management, will not help much, as there will always be alternatives available (e.g. Teamwork or Trello) that users can easily switch to.
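The credit-card policy mentioned in the second take-away can be sketched as a toy DLP rule: scan uploaded content for card-number-like patterns and report the offending files. The regular expression is a deliberately crude assumption for illustration; a production rule would add Luhn validation, issuer prefixes, and context checks to cut false positives.

```python
import re

# Roughly 13-16 digits, optionally separated by spaces or hyphens.
# This is an illustrative pattern, not a production-grade detector.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def policy_alerts(uploads):
    """Return the names of uploaded files that trigger the card-data rule.

    `uploads` is a list of (filename, text_content) pairs.
    """
    return [name for name, content in uploads if CARD_RE.search(content)]

uploads = [
    ("notes.txt", "meeting at 10"),
    ("export.csv", "card: 4111 1111 1111 1111"),
]
print(policy_alerts(uploads))  # ['export.csv']
```

In a CASB, a matching upload would raise an alert in the dashboard or the connected SIEM, or be blocked or encrypted outright, depending on the action configured for the policy.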

Conclusion

Even if CASBs are to be called a silver bullet for cloud security, any bullet still requires someone to shoot it. Organizations are responsible for the proper selection and integration of a potential CASB in their IT landscape. By taking into account the above considerations, enterprises that plan to deploy CASBs to increase their cloud security can start their brokerage journey with a set of concrete decisions to make. This will ensure that the right CASB provider is chosen to fit the enterprise’s needs, and that the CASB is “tuned” for the maximum security benefit of the enterprise.

References

[CSA15] CSA, Cloud Security Open API: The Future of Cloud Security, 2015, https://blog.cloudsecurityalliance.org/2015/06/29/cloud-security-open-api-the-future-of-cloud-security/

[Fult16] S.M. Fulton, RSA 2016: There Is No Cloud Security Stack Yet, The New Stack, March 2016, http://thenewstack.io/rsa-2016-no-cloud-security-stack-yet/       

[Gart15a] Gartner, Market Guide for Cloud Access Security Brokers, 2015, https://www.gartner.com/doc/3155127/market-guide-cloud-access-security

[Gart15b] Gartner, Select the Right CASB Deployment for Your SaaS Security Strategy, 2015, https://www.gartner.com/doc/3004618/select-right-casb-deployment-saas

[Gart16] Gartner, Cloud Access Security Brokers (CASBs) (definition), http://www.gartner.com/it-glossary/cloud-access-security-brokers-casbs/

[MM15] MarketsandMarkets, Cloud Access Security Brokers Market by Solution & Service, December 2015, http://www.marketsandmarkets.com/Market-Reports/cloud-access-security-brokers-market-66648604.html

[Muru16] S. Murugesan, I. Bojanova, E. Sturrus and O. Kulikova, Identity and Access Management, Encyclopedia of Cloud Computing (Chapter 33), Wiley, 2016, http://onlinelibrary.wiley.com/doi/10.1002/9781118821930.ch33/summary

[RGvT12] Reid, Geleijnse & Van Tol, Fokke & Sukke cartoon, 2012.

[Stur11] E. Sturrus, J. Steevens and W. Guensberg, Toegang tot de wolken, Compact 2011/2, https://www.compact.nl/articles/toegang-tot-de-wolken/

[TMR16] Transparency Market Research, Global Cloud Access Security Brokers Market Revenue, by Geography, TMR Analysis, March 2016, http://www.transparencymarketresearch.com/cloud-access-security-brokers-market.html

How Cyber Security Conquers the To-Do List of the Board

Awareness. You must have stumbled across this word hundreds of times when reading about cyber security. The main point of many articles is that it is essential that today’s executives and decision makers in the private and public domain realize what is at stake, because this awareness is the start of a safer world.

Well, I’ve got news for you. Lack of awareness is no longer the issue. Media reports on cyber incidents have been all over the place in recent years and these have served their purpose. As a consequence, almost every executive now seems convinced that organizations need to invest in prevention, detection and response measures in the cyber domain. Cyber security has clearly secured a prominent place on the board agenda. Moreover, it is satisfying to note that many executives fully realize that acting out of fear of the consequences of an incident may not be the best way to react in this domain. A much better strategy is to build an effective risk-based approach, thereby taking a more proactive attitude.

So much for the good news. It is now time for the next step: moving the issue from the board agenda to their to-do list. In other words: it is time to take action.

This is perhaps the hardest part of the cyber security challenge. We know what is at stake and we have formed a vision on the issue. But the litmus test is: do we know what action to take? The pressure is rising, not only because threats are very real but also because oversight bodies, the audit community, policy makers and other stakeholders place higher demands than in the recent past. This applies not only to private firms, but also to government bodies and sector initiatives.

I see promising initiatives all around me that prove our ability to act. Not only are organizations deploying state-of-the-art tools and programs, they are also starting to focus on what the end user can do to prevent cyber security incidents from happening in the first place. One example is gamification, which makes it more tangible for users to know how to act and how to spot dangers. Gamification also helps to move cyber security from “fear” to “fun”. At KPMG we recently launched an interesting platform with our partner G4S in this respect.

Another promising field is that we are starting to find new ways to solve the classic paradox between security and ease of use. I am convinced that we should put more focus on ease of use for the end user when it comes to protection measures. End users will simply not be willing to adapt to security processes or routines if they feel it is too much of a hassle to comply with these measures. For instance, there is considerable potential in using face recognition and other easy-to-use approaches to overcome the hurdle of the endless number of user IDs and passwords.

Another area is not in the field of the individual user but is a more centralized issue. I’m talking about threat intelligence. Smart combinations of data from different sources offer us valuable insights for better anticipation of emerging threats. Getting the data is often not the issue. Spotting the valuable information in this ocean of data and translating it into actionable results is the real challenge. Again, we are witnessing real progress in this area lately.

To conclude, I am pretty optimistic about the progress we are making. Of course we will never reach the Valhalla of a world without cyber incidents. But we are well on our way to turning this into a controllable issue, managed by cyber-resilient organizations. I hope the articles in this edition of Compact contribute to this.

Playing in a Mobile World

The story of Pokémon Go makes an excellent case for the current reality in IT: we are living in a mobile world. Mobile technology is becoming one of the key competitive advantages an organization can have in their IT landscape. Using mobile technology, organizations can obtain new kinds of engagement from both their employees as well as their customers. To gain the maximum possible from this new technology however, an organization must be well-prepared from a cyber security perspective.

Introduction

In the early summer it was not an unusual sight to see people staring more intensely at their phones all over the world. Not because of the usual and clumsy activity of checking e-mail or messages while on the go, which we have become sort of used to by now, but because of the hunt they were on – a hunt for Pokémon. It was hard to miss the Pokémon Go hype: by installing an app on your iOS or Android smartphone a world of wild virtual monsters called Pokémon was overlaid on top of the real world. The hype was not restricted to school-going children and teenagers – two generations of young professionals who were brought up with Pokémon and are now our clients and colleagues were also taking part. And that is a good thing, as these are the generations that will help us make sense of an increasingly mobile world.

Pokémon Go’s hype is an excellent case to discuss the new mobile world we find ourselves in. A world in which organizations can reap dramatic benefits in areas such as customer and employee engagement, productivity and the opening up of new ways of working. But these benefits can only be achieved when the organization also successfully addresses new and related issues of mobile cyber security and privacy. In this article we will take a brief glance at addressing these issues based on the Pokémon Go case study.

It might be difficult to see how a mobile world is significantly different from the digital world that we have known for a few decades now. After all, are the concepts of a mobile world not just the same old concepts of the digital world in a new disguise? Looking at the buzz around Pokémon Go, this is clearly not the case. With your smartphone camera ready, every building, park, museum and other landmark becomes riddled with little fantasy creatures to collect. It is up to the player to collect as many of these creatures as possible – to ‘catch ’em all’. And – for a period of a few weeks – this concept took off extremely well. So well, in fact, that some places such as museums and hospitals had to actively keep Pokémon hunters out to keep order. You have probably seen the groups of players yourself. Within a matter of days after launch, Pokémon Go surpassed the popularity of both Twitter and Tinder, both ‘established’ brands in mobile. Clearly, Pokémon Go’s unique formula of combining the 20-year-old concept of Pokémon with new possibilities in augmented reality struck the right chord. It clearly demonstrated a concept that was not possible 10 years ago, allowing enthusiasts to enter a virtual world within their own (real) world, integrating the gaming experience into their daily lives. It showed that mobile technologies have the potential to affect every part of our daily lives, wherever we are and whenever we want. Everything is mobile, mobile is everything.

Mobile is Everything

Not coincidentally, the concept mobile is everything was also the slogan of the most recent Mobile World Congress (MWC) in Barcelona. The Mobile World Congress is the premier event in the world for mobile technology, as key mobile technology vendors and service providers come together to showcase their products and future vision. The slogan mobile is everything proved an excellent summary of the direction technology is taking in two ways: first, as stated before, mobile is everything because mobile allows the digital world to apply to every part of our lives. Not just through our smartphones and tablets, but also through wearable smart devices like smartwatches, health trackers and digital assistants; through Internet-of-Things devices that make your home and workplace smart and more adaptive, controlled through your smartphone or tablet; through improvements in virtual and augmented reality technology that allow us to reimagine existing places and landmarks in different ways. The second meaning of mobile is everything is more subtle but has an even greater impact on our services: although mobile technology is often seen as different from ‘traditional technology’, this is an arbitrary and nonsensical distinction. What we used to call mobile technology is rapidly replacing traditional technology (e.g. your Windows laptop now has a touch screen and runs the same ‘apps’ as a Windows phone does), and traditional technology is adapting to what is traditionally considered a mobile use case (e.g. SAP is focusing their strategy on an app-first enterprise). What we call mobile technology is just tomorrow’s technology with a different name.

Besides creating a new gaming experience, Pokémon Go also introduced a new business model for its creator. In Pokémon Go, advertisers can purchase virtual “lures” that will attract rare Pokémon to a location of their choosing, allowing organizations to draw crowds of players to their shops, offices or venues. How convenient it is to catch an exotic virtual creature while running errands at the same time! Examples like these show that the mobile is everything vision is not a far-fetched, speculative vision of the future. The future is already here (and people are making serious money out of it). At the peak of its hype, Pokémon Go was known to be used as an argument for such grand things as the sale of houses or insurance policies. And it worked – because it was a way to engage a crowd that was not targeted before.

Engaging Your Organization

Engagement through mobile technology also applies to employee engagement: digital services provided in the workplace are known to engage newer generations in their work more than ever. Newer generations of employees – including the ones who were thrilled when Pokémon Go was released – have grown up with modern technologies and expect their employer to provide (at least) the same speedy, convenient and shiny technology at work as they have become used to at home (or anywhere else for that matter). Offering employees a compelling mobile work environment, and thereby creating a match between the work environment and modern lifestyles, has become a must for ensuring employee satisfaction and high employee engagement.

A High-Tech Braking System

It sounds exciting. Mobile technology will enable a faster-moving, smarter world, with opportunities for higher employee and customer engagement, more efficient work and new business models. It is a fast-moving train that everybody should hop on as soon as they can. Or should they?

Before jumping on a high-speed train, one must first know that its components are working well. To move fast, the train should have a good engine, but it should also have an excellent brake system. After all, without well-working brakes, the train cannot safely achieve its maximum speed. In this analogy, mobile technology is the new, potent, high-tech engine. But its brake system is cyber security and privacy: the necessary precautions that allow us to go fast in the first place.

When Pokémon Go was launched, it did not launch globally. Due to the immense popularity and the infrastructure required to support it, it took two weeks for the app to launch in various country-specific app stores across the world, including the ones in the Netherlands. This did not stop hundreds of thousands of avid Pokémon fans from installing the app in unofficial ways, however; with some tech-savviness one can easily install application packages downloaded from random internet sources on Android devices. Needless to say, it was a matter of hours before cyber criminals started spreading malware-equipped versions of Pokémon Go across the internet. Due to severe shortcomings in the availability of security patches for Android devices, many pieces of malware were (and still are) able to obtain complete control of infected devices. To this day, thousands upon thousands of Android devices remain compromised in this way and under the control of hackers, including devices that have access to sensitive enterprise data.
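One basic defence against repackaged applications from random sources is comparing a download against a checksum published on an official, authenticated channel. The sketch below illustrates the idea with hypothetical byte strings standing in for package contents; it is not a substitute for platform signature verification, which checks the developer's signing certificate as well.

```python
import hashlib

# Hypothetical known-good digest; in practice a vendor would publish the
# SHA-256 of the official release on an authenticated channel.
KNOWN_GOOD_SHA256 = hashlib.sha256(b"official-apk-bytes").hexdigest()

def is_untampered(package_bytes: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded package against a known-good SHA-256 digest.

    A mismatch means the file differs from the official release,
    e.g. a repackaged copy carrying malware.
    """
    return hashlib.sha256(package_bytes).hexdigest() == expected_sha256

print(is_untampered(b"official-apk-bytes", KNOWN_GOOD_SHA256))      # → True
print(is_untampered(b"malware-injected-bytes", KNOWN_GOOD_SHA256))  # → False
```

Official app stores perform this kind of integrity and signature checking automatically, which is precisely what sideloading from an arbitrary website bypasses.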

There were more problems for Pokémon Go’s developers. Pokémon Go used Google to provide authentication services for the app, so that users could set up an account that kept track of the Pokémon they had collected. On the iOS version of the app, the developers requested the user to give them complete access to their Google account. This opened up the player’s Google search history, personal e-mail and location history to the Pokémon Go developers – a clear privacy breach, which took a few days to resolve. By then, the reputational damage to the Pokémon Go developers had already been done.
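The underlying principle here is least privilege: an OAuth 2.0 client should request only the scopes it actually needs. The sketch below builds an authorization URL requesting basic identity scopes only; the client ID and redirect URI are placeholders, not the real Pokémon Go configuration.

```python
from urllib.parse import urlencode

# Google's OAuth 2.0 authorization endpoint.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_auth_url(client_id: str, redirect_uri: str, scopes: list) -> str:
    """Build an authorization URL that requests only the named scopes."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        # Narrow scopes limit what the app can access; requesting full
        # account access is what caused the privacy issue described above.
        "scope": " ".join(scopes),
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

# Request only basic identity – not mail, search history or location.
url = build_auth_url("example-client-id",
                     "https://example.com/callback",
                     ["openid", "email"])
print(url)
```

A game that only needs to know who the player is has no business requesting scopes for mail or location history; reviewing the requested scope list is a cheap check in any secure development lifecycle.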

These issues could have been prevented had the developers employed a more effective mobile strategy: one that includes a secure deployment and distribution program. Such a distribution program would provide measures to mitigate the hijacking of the application by malware. And it would comprise a secure software development lifecycle to prevent (severe) programming errors from causing security and privacy issues in the first place, and to test for them. Clearly, Pokémon Go was a fast-moving train with faulty brakes.

To prevent our own clients from having faulty brakes, our cyber security team is focusing on improving their mobile strategies. Having a formal mobile strategy is a rarity in practice; mobile strategy is typically folded into a broader technology strategy and tends to focus on cost saving and enhancing sales channels. Employee productivity, cyber security and privacy issues are not a common component of these strategies. Helping our clients to integrate these issues with their own business goals is a key component of our professional services. Developing a mobile strategy is supported by assessing privacy and security requirements, monitoring the continuous changes to these requirements in a mobile world, and implementing technology purchasing plans, cloud strategies and secure development lifecycles.

An End to the Hype?

It is both ironic and exemplary that the hype surrounding Pokémon Go is fading. Statistics show that Pokémon Go has lost millions of users since the peak of the hype, and that numbers are continually dropping. The drop in active players is most likely due to game mechanic issues rather than severe shortcomings in a mobile strategy, and it is not out of the question that the hype will flare up again as the developers add new gameplay mechanics. Regardless of this fading hype, even with ‘just’ tens of millions of users left, Pokémon Go is still a resounding success.

Although the hype for Pokémon Go has ended, this does not stop the application from being a prime example of the new mobile world. Filled with tremendous possibilities in new technologies, new business models, and new ways of engaging users, Pokémon Go shows that every organization should concern itself with a mobile strategy to stay relevant. And it also shows that cyber security and privacy must be an integral part of that mobile strategy, as there are widespread consequences when they are missing. KPMG is already playing in this mobile world – and we are calling on all of you to join too!