At the Systemic Risk and Aggregation panel, hosted by NetDiligence on June 30, 2020, moderator Richard DePiero (Sompo) laid out the problem: large loss scenarios – their likelihood, severity, and impact on portfolios – introduce a level of uncertainty that is unacceptable from a risk management control perspective. How, then, can aggregation modeling bring cyber risk uncertainty to a manageable level, in both cyber and non-cyber lines and across both affirmative and non-affirmative policies? Panelists Emma Watkins (Lloyd’s), Kelly Castriotta (Allianz), Danielle Smith (RMS), and Yakir Golan (Kovrr) took a deep dive in a panel that was both highly technical and ultimately enlightening.
Watkins set the stage with definitions and a brief overview of how Lloyd’s has tackled aggregation since 2012. Systemic risks are those that significantly impact multiple insureds or multiple syndicates, occur simultaneously across a region or industry, and have the potential to adversely affect insurers through a high volume of claims or stress on liquidity and capital. To address this risk in the cyber space, Lloyd’s built scenarios to “stress test” the market, initially using historical scenarios provided by syndicates to track risk, measure market growth, and identify specific exposures. Eventually, Lloyd’s shifted to defining its own scenarios in a top-down fashion, enabling the collection of a more consistent data set for analysis. In 2020, Lloyd’s pushed out three updated scenarios: Business Blackout, Cloud Cascade, and Ransomware Contagion. Workshopped with the help of vendors, these scenarios were developed to be extreme but plausible, prescriptive enough to be aggregated, and suitable for all syndicates.
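To make the idea of a prescriptive, top-down scenario more concrete, here is a minimal Python sketch of how such a scenario might be applied across a book of policies and rolled up into a single capped loss figure. The scenario names are the ones Lloyd’s published; everything else (the policy fields, exposure flags, and damage ratios) is an invented assumption for illustration, not Lloyd’s actual parameters or methodology.

```python
# Toy "top-down" scenario aggregation. Scenario names come from the panel;
# damage ratios, policy fields, and limits below are invented assumptions.

from dataclasses import dataclass

@dataclass
class Policy:
    insured: str
    limit: float           # policy limit in USD
    cloud_dependent: bool   # simplistic exposure flag

# Hypothetical scenario definitions: each maps to an assumed ground-up
# damage ratio (fraction of limit consumed) and a crude footprint rule.
SCENARIOS = {
    "Business Blackout":    {"damage_ratio": 0.40, "requires_cloud": False},
    "Cloud Cascade":        {"damage_ratio": 0.60, "requires_cloud": True},
    "Ransomware Contagion": {"damage_ratio": 0.30, "requires_cloud": False},
}

def scenario_loss(portfolio: list[Policy], name: str) -> float:
    """Aggregate capped loss for one prescribed scenario across a portfolio."""
    spec = SCENARIOS[name]
    total = 0.0
    for p in portfolio:
        if spec["requires_cloud"] and not p.cloud_dependent:
            continue  # scenario footprint does not touch this insured
        total += min(p.limit, spec["damage_ratio"] * p.limit)
    return total

portfolio = [
    Policy("Acme Retail", 5_000_000, cloud_dependent=True),
    Policy("Globex Manufacturing", 10_000_000, cloud_dependent=False),
]

for name in SCENARIOS:
    print(f"{name}: {scenario_loss(portfolio, name):,.0f}")
```

Real prescriptive scenarios specify far more detail, such as event duration, affected providers, and coverage triggers, but the aggregation logic (a defined footprint applied uniformly to every syndicate’s exposures so the results can be compared and summed) is the point.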
As modelers, both Smith and Golan weighed in on the advisability of using bottom-up modeling instead. Smith argued that a bottom-up approach provides greater accuracy, noting that it allows modelers to use “what’s actually there” to discover the perils and aggregate upward through an industry. Golan agreed that bottom-up modeling describes risk with greater precision, but noted that where data is not available, a top-down approach can help validate or improve a model. Smith’s description of RMS’s model for scaling IT loss demonstrated this principle, distinguishing attack vectors that spread through ongoing human activity, which are far easier to contain, from those, like worms, that propagate on their own. Similarly, Kovrr’s “virality factor” looks at the angles of attack that provide automatic and massive propagation.
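As a rough illustration of that bottom-up logic, the sketch below simulates aggregate loss from a single event whose reach is scaled by an assumed “virality” parameter: a worm-like vector with high virality touches far more of the portfolio than one that depends on ongoing human activity. This is not RMS’s or Kovrr’s actual model; every parameter name and number here is a placeholder.

```python
# Toy "bottom-up" event simulation with an assumed virality parameter.
# Not RMS's or Kovrr's model; all values are invented for illustration.

import random

def simulate_event_loss(n_insureds: int,
                        base_infection_prob: float,
                        virality: float,
                        mean_loss: float,
                        n_trials: int = 10_000,
                        seed: int = 42) -> float:
    """Average aggregate loss per simulated event.

    virality scales the chance that any one insured is hit: a worm-like
    vector (high virality) spreads automatically and reaches far more of
    the portfolio than a vector that needs ongoing human activity.
    """
    rng = random.Random(seed)
    p_hit = min(1.0, base_infection_prob * virality)
    total = 0.0
    for _ in range(n_trials):
        hits = sum(rng.random() < p_hit for _ in range(n_insureds))
        total += hits * mean_loss
    return total / n_trials

# Human-driven vector vs. worm-like vector across the same 500-insured book.
print(simulate_event_loss(500, 0.02, virality=1.0, mean_loss=250_000))
print(simulate_event_loss(500, 0.02, virality=10.0, mean_loss=250_000))
```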
Castriotta noted that challenges remain in applying scenarios and that the process is highly detailed and occasionally tedious. Examining policy wording, setting accumulation frameworks, and clearly defining what is measured and how it is cataloged are all necessary for good underwriting. Affirmative stand-alone cyber lines tend to show the best and broadest understanding. Furthermore, more advanced underwriting tends to ask for more and better data from clients, which in turn enhances the application of scenarios. She also noted that risk can be modeled in stages, with different models proving more effective in different lines.
Data collection was further addressed in a discussion of AI and machine learning. Smith and Golan expressed optimism about the potential of AI for research, data collection, data cleansing, and data augmentation in industries that simply have less data to provide.
A final roundup of future trends stressed that everyone’s goal is to quantify risk so that it can be allocated and accurately priced. As modeling evolves, so too will the discussion around it. Certainly, the NetDiligence community will look forward to hearing from these experts again.