Given the influx of major supply chain attacks and cyber war events, insurers must take on the exceedingly challenging task of quantifying aggregation risk and modelling catastrophic cyber risk. We talked to Rory Egan, Senior Cyber Actuary at Munich Re, about current models and scenario generation.
NetD: How should cyber insurers go about understanding aggregation risk?
RE: They should start by attempting to identify “sources of aggregation” which, if exploited or disrupted, have the potential to negatively impact many organizations, endpoint devices or individual persons. Sources of aggregation include widely relied-upon technologies, services, companies, and the paths of interconnectivity between them. From there, the aggregated loss potential in a cyber insurance portfolio from various plausible but extreme cyber events can be further investigated.
There’s a potentially limitless number of different aggregation scenarios that could be imagined, and these constantly evolve as technology and threat actor capabilities progress. Therefore, a repeatable and structured “scenario generating process” can be helpful to create a representation of the universe of knowable potential scenarios, and to minimize “unknown unknowns.” The scenarios generated in such a process can then be ranked by importance from the point of view of the insurer, in order to determine areas of focus for its risk management and modelling efforts.
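For readers who want to see what such a process could look like in practice, here is a minimal, hypothetical sketch of one way to structure it: enumerate scenarios against a catalogue of aggregation sources and rank them by a simple importance score. The scenario names, scoring fields and weights below are illustrative assumptions on our part, not Munich Re’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str                    # short description of the scenario
    source_of_aggregation: str   # the shared technology, service, or dependency
    plausibility: float          # 0-1 judgement of how credible the scenario is
    footprint: float             # 0-1 share of the portfolio exposed to the source
    severity: float              # 0-1 potential loss severity per affected insured

def importance(s: Scenario) -> float:
    """Illustrative ranking score: plausibility x exposure x severity."""
    return s.plausibility * s.footprint * s.severity

# Hypothetical catalogue of generated scenarios (all values are placeholders).
catalogue = [
    Scenario("Mass ransomware via popular IT management tool", "software supply chain", 0.6, 0.5, 0.7),
    Scenario("Multi-day outage at a major cloud provider", "cloud infrastructure", 0.4, 0.6, 0.5),
    Scenario("Destructive payload delivered through a software update", "software supply chain", 0.3, 0.7, 0.9),
]

# Rank scenarios to decide where to focus modelling and risk management effort.
for s in sorted(catalogue, key=importance, reverse=True):
    print(f"{s.name}: importance score {importance(s):.2f}")
```

In a real exercise the catalogue would be far larger and the scoring far richer, but the point is the same: a repeatable structure makes the ranking, and its blind spots, explicit.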
NetD: What does the aggregation loss potential really look like for cyber? Is this a completely overblown topic, or is it too large to even be insured, or are we kidding ourselves if we think we can begin to understand this?
RE: If we look at the major cyber aggregation events we’ve seen thus far, such as WannaCry, NotPetya, and more recently, SolarWinds and the Microsoft Exchange server hack, it’s easy to reimagine more extreme (but still plausible) variants of these scenarios. For example, imagine if the espionage campaign carried out via SolarWinds was instead used to deliver a data-destroying payload. The cyber business interruption losses that would be suffered, if covered, would represent a catastrophic event for the cyber insurance market.
So I don’t think it is overblown as a topic, but we are indeed working in a highly dynamic area and dealing with a lot of uncertainty. Regarding how bad it could get: This is the focus area of the cyber risk management and modelling community. There is a case for greater co-operation between leading cyber (re)insurers, risk model vendors, researchers and major technology companies, to continually improve our collective understanding as an industry. Making progress together will be a crucial factor for the continued growth and sustainability of the cyber insurance market.
NetD: What about models? How mature are they and to what extent can cyber insurers rely on them?
RE: Models are an important part of aggregation risk management, but not the only part. We need to recognize that there are certain sources of risk that could be ruinous for insurers and that we will struggle to quantify or model with the necessary level of robustness. So far, the market has identified “infrastructure failure” (e.g., failure of power and telecommunication networks including the Internet) and “cyber war” (e.g., an escalation towards repeated retaliatory massive cyber-attacks between nation-states) as areas of “unmanageable aggregation risk,” which are dealt with by excluding the risk altogether, rather than by using models.
For the “in-scope” aggregation risk, models are used with a certain level of confidence, for the specific use case of determining the magnitude of the worst-case loss in a portfolio from a given set of scenarios.
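As a rough illustration of that use case (a hypothetical sketch with made-up figures, not any vendor’s actual model), the assumed per-policy losses under each scenario can be summed across the portfolio and the largest scenario total taken as the worst-case loss:

```python
# Hypothetical portfolio: policy limits and, per scenario, an assumed share of
# the limit that would be lost if that scenario occurred. Figures are invented.
portfolio = [
    {"policy": "A", "limit": 5_000_000,  "loss_share": {"supply_chain_wiper": 0.8, "cloud_outage": 0.2}},
    {"policy": "B", "limit": 2_000_000,  "loss_share": {"supply_chain_wiper": 0.1, "cloud_outage": 0.6}},
    {"policy": "C", "limit": 10_000_000, "loss_share": {"supply_chain_wiper": 0.5, "cloud_outage": 0.0}},
]

scenarios = ["supply_chain_wiper", "cloud_outage"]

# Aggregate the modelled loss per scenario across the whole portfolio.
scenario_losses = {
    s: sum(p["limit"] * p["loss_share"].get(s, 0.0) for p in portfolio)
    for s in scenarios
}

worst_case = max(scenario_losses, key=scenario_losses.get)
print(scenario_losses)
print(f"Worst-case modelled scenario: {worst_case} "
      f"with aggregate loss {scenario_losses[worst_case]:,.0f}")
```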
But where there is not yet adequate trust in models is for the use case of “pricing” the catastrophe risk, due to the difficulty of 1) determining the occurrence probabilities of the modelled scenarios, and 2) understanding the extent of “non-modelled risk” not captured by the chosen scenarios. This lack of trust in probabilistic aggregation models is evidenced by the lack of a meaningfully sized retro or capital market for cyber catastrophe risk, so far.
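To see why those occurrence probabilities are the crux, consider a toy pricing calculation (purely illustrative numbers of our own, not a market model): the expected annual catastrophe loss, and hence any premium loading derived from it, moves in direct proportion to probability assumptions that nobody can yet estimate robustly.

```python
# Toy illustration: expected annual catastrophe loss = sum over scenarios of
# (annual occurrence probability x modelled portfolio loss). All numbers invented.
scenario_losses = {"supply_chain_wiper": 9_000_000, "cloud_outage": 2_200_000}

def expected_annual_loss(probabilities: dict[str, float]) -> float:
    return sum(probabilities[s] * loss for s, loss in scenario_losses.items())

# Two equally defensible probability assumptions give very different answers,
# which is one reason probabilistic cat pricing is not yet fully trusted.
low  = {"supply_chain_wiper": 0.01, "cloud_outage": 0.02}
high = {"supply_chain_wiper": 0.05, "cloud_outage": 0.10}

print(f"Expected annual loss (low probabilities):  {expected_annual_loss(low):,.0f}")
print(f"Expected annual loss (high probabilities): {expected_annual_loss(high):,.0f}")
```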
But the models developed both internally by market leaders and available externally from model vendors have improved measurably over the past few years. Continued investment of resources, innovation and collaboration is needed to get to the “next level.” One example is the pioneering Munich Re partnership with Google Cloud and Allianz, which will allow for an improved understanding of cloud-based aggregation risk.
In Summary…
We want to thank Mr. Egan for his terrific insights here. This topic can be truly mind-boggling. Cyber risk aggregation forecasting, in my opinion, is one of the most difficult aspects of managing cyber risk for any insurer or reinsurer. Rory concisely summarized exactly why this is the case: there is a “limitless number of different aggregation scenarios that could be imagined, and these constantly evolve as technology and threat actor capabilities progress.” Rarely does a large-scale scenario that was predicted at a granular level actually come to pass. But we still need to try, because even if we don’t get it right, we learn from each modelling exercise and from each real-life event.
Finally, Rory’s comment about the collective need for “greater co-operation between leading cyber (re)insurers, risk model vendors, researchers and major technology companies, to continually improve our collective understanding” is spot on. This collaboration will be paramount going forward, and it is our hope that NetDiligence can play a quarterbacking role here to help bring together these diverse industry experts for the collective good.
Click here to see Rory speak at recent NetDiligence events.