
Quantifying Cyber Risk

Risk Management / October 08, 2019

A Q&A with Peter Armstrong of Munich Re and Julie Eichenseer of Guidewire: Cyence Risk Analytics
At the NetDiligence® Cyber Risk Summit in Philadelphia, Julie Eichenseer of Guidewire: Cyence Risk Analytics and Peter Armstrong of Munich Re participated in a panel on cyber risk quantification, discussing how current approaches to evaluating cyber risk can better help the insurance industry and its clients improve their cybersecurity posture. We spoke with them about some of the topics they covered.

Cyber risk is complex and riddled with variables—what are some of the challenges facing a risk quantification assignment?
PA: One of the biggest issues is the lack of a common language: “techie speak.” The language of the geek world and that of the business world don’t often intersect, and that can be problematic. Without a compelling understanding of the financial impact of cyber risk, organizations tend to treat investment in cyber defense as discretionary spend, without fully understanding the opportunity that meaningful quantification of the risk brings to the organization. The challenge, of course, is getting to the right numbers. The second issue is that it’s difficult to generate the number that represents the incremental exposure consequent upon cyber risk, because there are so many variables at the front end of the risk shaping the scale of the impact of a cyber event. That’s why it becomes so critical for people to understand the assumptions they’re making and what specifically they want to measure and quantify. We need people to start to recognize that incidents will happen and to focus on quantifying the severity of the event.

JE: Accumulation paths are often non-obvious, and the ways in which they are exploited are constantly changing, which makes the ability to collect and curate data at scale more important than it is in traditional lines of coverage. Data collection and curation is a non-trivial problem: standards for terms are barely emerging, and the data is rarely collected consistently, let alone structured in the ways traditional actuarial approaches would require. We find the data collected places too much emphasis on technical assessments, when a large proportion of loss is driven by people and process failures. Insurance applications are true/false litmus tests that don’t provide a nuanced picture of a company’s people, processes, or technology. This is where we can leverage artificial intelligence and machine learning to identify more granular data at scale for better results, so that as the risk changes, you can shift with it.

What are some basic problems you see with the current approach to quantifying cyber risk?
PA: Principal among these is the fundamental discipline of managing cyber risk within an ERM framework. For every other critical enterprise risk, we first quantify the exposure, and then, within the context of affordability, risk appetite, and tolerance, we make informed decisions on deploying risk capital across three active interventions: mitigate the risk; retain and fund the risk; transfer the risk. Only then can we measure and manage the reduction of residual risk outcomes. For cyber, we are not doing this well, if at all. A second part of the fundamental challenge is the nature of cyber risk itself, which needs to be viewed through two lenses: one is discrete cyber risks, the second is incremental cyber exposure. For instance, if I am a retailer seeking to get closer to my customers with a digital strategy underpinned by an online presence, and my developers leave the principal credit records exposed in the web app, then I’ve got a discrete cyber risk: this is where much of the direct cyber insurance is targeted. What really moves the needle in an organization’s risk portfolio are those incremental risks where cyber vulnerabilities enable, accelerate, or amplify the critical enterprise risks we are already worried about. The context, therefore, is how to quantify the degree to which the portfolio as a whole is impacted by cyber risks. Generally, this is poorly addressed, so the key element here is what we must quantify.
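To make the mitigate/retain/transfer arithmetic concrete, here is a minimal, purely illustrative sketch in Python. All figures and the 30% mitigation effect are hypothetical assumptions, not Munich Re's methodology; it only shows how a quantified exposure might be decomposed across the three interventions.

```python
# Illustrative only: every figure here is a hypothetical assumption,
# not an actual ERM model or any insurer's methodology.

gross_exposure = 50_000_000   # quantified annual cyber exposure ($)
risk_tolerance = 5_000_000    # board-approved retention appetite ($)
mitigation_effect = 0.30      # assumed exposure reduction from defense spend

# Intervention 1: mitigate - controls reduce the gross exposure.
post_mitigation = gross_exposure * (1 - mitigation_effect)

# Intervention 2: retain - fund losses up to the board's risk tolerance.
retained = min(post_mitigation, risk_tolerance)

# Intervention 3: transfer - insure the exposure above the retention.
transferred = post_mitigation - retained

print(f"Post-mitigation exposure: ${post_mitigation:,.0f}")
print(f"Retained and funded:      ${retained:,.0f}")
print(f"Transferred to insurers:  ${transferred:,.0f}")
```

The point of the exercise is the one Armstrong makes: only once the exposure is quantified can the residual risk left after these interventions be measured and managed.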

What is the goal or purpose behind risk quantification (RQ)?

PA: The goal has to be to normalize cyber risk, to put boards back in control, and to enable them to make the same kind of informed risk capital deployment decisions for cyber and cyber-enabled risks that they make for their existing critical enterprise risks. We also want to enable financially derived decisions based on universally trusted metrics, which over time can lead to the evolution of more robust quantification models upon which individual insured risk capital decisions can be made, but also upon which investment and alternative cyber risk capital decisions can be made.

JE: For insurance, it really depends on your role in the ecosystem. For carriers, it helps with differentiating risk, enhanced balance sheet protection, and increased underwriting efficiency (no/low-touch underwriting or reduced time to complete analysis). For brokers, it helps them better understand how to measure client risk, how to mitigate that risk, and how to ensure it fits within the insurer’s risk tolerance. For regulators and rating agencies, risk quantification helps monitor domiciled firms day-to-day without the need to physically be in their offices.

When looking at probability versus severity, is one harder to predict than the other?
PA: We have to assume that the probability values are converging on one, and in those circumstances it has to be about severity and an ability to quantify its anatomy, if you like: the things that determine whether this inevitable event is a big one or a small one. This means being able to quantify the balance between the effectiveness of the threat landscape and the effectiveness of the defense surface. The sad truth here, and one of the drivers of the convergence on one, is that the effectiveness of the defense surface is almost always less than the effectiveness of the threat landscape. This means that our quantification values are almost always multipliers relative to the existing risk provision, which leads to a simple conclusion: pretty much everyone is underestimating the impact of cyber risk on the risk exposure in their portfolios. It is also important to reflect that the law of large numbers (at the heart of our underwriting) is under pressure because of the almost infinite variety of conditions at the front end of the risk that determine both probability and, more significantly, whether the event will be a big one or a small one. In some respects it is worse than this suggests: these events occur on a continuum of human activity, from malevolent intent on the one hand to simple stupidity on the other, which of course is why there is no such thing as absolute cyber security.

JE: Both are hard, because we’re predicting the future. Not only that, but we’re doing so in a complex and rapidly evolving domain where people are often making critical operational decisions without relevant cybersecurity expertise, and where some of the best cyber experts in the world are adversaries with various motives, none of which are good for business or society. The frequency or likelihood of something happening is much harder to predict than what it’s going to cost. Of course, severity is hard too, but consider an extreme example: in a world where there has been only one historical event, that event still gives you some idea of how much it could cost if it happened again. But in that same world, how do you predict how likely it is to happen again? Or what if it happened again but with a slightly different outcome, which would change severity? Having a team of experts who understand, based on significant research, how risk can be systemic is key. While we recognize there is potential for CAT events, and there have been such events, predictive models are fit to historical events. In a line of coverage like cyber, there is an increased level of expert judgment. So as you examine who you are partnering with for RQ, vetting the experts and their thought process should be a key component.
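The frequency/severity split the panelists describe is the standard actuarial decomposition. The sketch below is a toy model with made-up Poisson and lognormal parameters, not a Cyence or Munich Re model; it simply shows how annual aggregate losses fall out of a frequency assumption combined with a severity assumption, and how the tail percentiles that drive capital decisions depend on both.

```python
import numpy as np

# Toy frequency/severity simulation; all parameters are assumed for
# illustration. Real cyber models are far richer and data-driven.
rng = np.random.default_rng(42)

n_years = 100_000               # simulated policy years
freq_lambda = 0.8               # expected incidents per year (assumed)
sev_mu, sev_sigma = 12.0, 2.0   # lognormal severity parameters (assumed)

# Frequency: number of incidents in each simulated year.
counts = rng.poisson(freq_lambda, size=n_years)

# Severity: loss per incident, summed into an annual aggregate.
annual_loss = np.array([
    rng.lognormal(sev_mu, sev_sigma, size=c).sum() for c in counts
])

print(f"Mean annual loss:  ${annual_loss.mean():,.0f}")
print(f"99th percentile:   ${np.percentile(annual_loss, 99):,.0f}")
print(f"99.5th percentile: ${np.percentile(annual_loss, 99.5):,.0f}")
```

With only one historical event, as in Eichenseer's example, the severity parameters can at least be anchored to an observed loss, while the frequency parameter rests almost entirely on expert judgment, which is her point about vetting the experts behind any RQ model.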

How might a cyber insurance carrier benefit from RQ?
PA: Understanding the quality of a risk in a richer and more comprehensive manner is always going to benefit a carrier in obvious ways around pricing, risk portfolio diversification, and so on. However, it also opens the opportunity to establish a de facto standard for cyber risk quantification over time as models are refined, rather in the same way that better-quality, de facto adopted numbers evolved to support the formation of cat bonds and cat bond trading opportunities. This demands better-quality (auditable), evidence-based numbers at the heart of the quantification methods and models. Over time we should see such alternative cyber risk capital instruments evolving, and a global sequence of risk quantification assignments will eventually support and reinforce the development of the models and the accepted risk quantification numbers needed for this evolution. In truth, this is not a “nice to have.” We face an impending capital capacity cliff driven by the need to address silent cyber by converting non-affirmative covers into affirmative covers and affirmative exclusions. This is essential for carriers and insureds alike if we are to avoid challenging litigation like the Mondelez property claim.

JE: RQ tools provide more granularity and consistency. They give some basis for efficiency and precision in account prioritization and prospecting. Many carriers can’t get to all the submissions that come in to them; with RQ, you can allocate your time more strategically. RQ also helps a company adhere to its risk tolerance: the company can construct a book of business that fits within its desired risk tolerance and create pricing guidance using RQ tools that help benchmark a technical price. As an underwriter, you are always asking yourself, “What am I missing? What terms and conditions should I reconsider?” Carriers can create a better feedback loop between ERM and underwriting to ensure the drivers of tail loss are well understood in the underwriting process. RQ gives you more information to do that.
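As a concrete illustration of benchmarking a technical price, the build-up below grosses an RQ-model expected loss up for expense and risk loadings. The loadings and the expected-loss figure are assumptions for illustration only, not any carrier's actual pricing guidance or a Guidewire formula.

```python
# Hypothetical technical-price build-up; every number here is an
# assumption for illustration, not actual pricing guidance.

expected_annual_loss = 120_000   # expected loss from an RQ model ($)
risk_load = 0.15                 # load for capital/volatility (assumed)
expense_ratio = 0.30             # acquisition + admin expenses (assumed)

# Gross up the risk-loaded expected loss so that expenses come out
# of premium at the assumed expense ratio.
technical_price = expected_annual_loss * (1 + risk_load) / (1 - expense_ratio)

print(f"Technical price benchmark: ${technical_price:,.0f}")  # ~$197,143
```

An underwriter can then compare the quoted premium against this benchmark as one input to the ERM-underwriting feedback loop Eichenseer describes.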

Are there any trends that particularly concern you from a risk perspective?

PA: One emerging trend we could point to: until the last couple of years, we would not have thought an attack could bring down infrastructure, but now, with the explosion of networked IoT botnets and unsecured, interconnected IoT devices, we’ve seen DNS servers taken down on an international scale. There is a clear and present danger to worry about. The second trend is concern about vulnerabilities in industrial control safety systems, such as the Triconex vulnerability that was the basis of a targeted attack on industrial safety systems last year in the Middle East. Thankfully, it wasn’t a catastrophic event. Another trend we are concerned about is cryptojacking taking on the mantle of a significant revenue generator for the criminal community. Sadly, there are plenty more: hijacking of the software supply chain (malware insertion), of which NotPetya was an example; pervasive operating system vulnerability exploits like EternalBlue (WannaCry and NotPetya), Eternal Keep, and so on.

Where do you see cyber risk quantification going from here?
PA: The whole process is in its infancy, and the practitioners and organizations taking on cyber risk quantification are trying to raise the level of awareness. If we had been having this conversation 18 months ago, few people were even trying to articulate these questions; now the recognition of why we need to ask them is increasingly real.

JE: There is no question that the risk landscape is changing in meaningful ways, and we are seeing real incidents: Facebook, Equifax, Marriott, NotPetya, regulatory fines, silent cyber… There has been a significant increase in the number of really smart people focused on the problem, and more and more information is being collected and calibrated against a rapidly growing event set. More and more use cases for how to operationalize RQ insights have been developed and are being implemented. Risk takers have become more sophisticated at leveraging their data and thinking more scientifically about how to use it to make better decisions faster, to thrive in this complex but growing market. While there has been progress, there is a real need to adapt tools for the SMB segment, for businesses that lack the expertise and experience to manage and mitigate the costs of this increasing exposure. We will see considerable change and progress in RQ tools as firms adapt their data collection and models to keep pace with the risk.

In Summary…
We want to thank Mr. Armstrong and Ms. Eichenseer for their thoughtful analysis and guidance on this extremely complex issue. Our carrier partners and their policyholders are keenly interested in cyber RQ and in getting a basic sense of the frequency/severity quantum. The goal is to safeguard intangible assets residing in thousands of decentralized databases and storage locations, often controlled by third and fourth parties, all of which are combating daily stealth robotic probes from countless morphing threat vectors. To say this is a challenge would be wildly understating the task at hand. Whew! We will continue to seek out expertise like theirs as the cyber RQ practice develops.

