Visesh Gosrani, Chair of the Institute and Faculty of Actuaries Cyber Risk Working Party, explains what a group of actuaries currently find helpful from cyber model vendors and what vendors can do to make their models indispensable.

Earlier this year, before we all entered self-isolation, actuaries from the Cyber Risk Investigation Working Party held a roundtable at which they discussed their views on the solutions from model providers they had seen to date. The purpose of the roundtable was to provide guidance to the model providers, as it was felt that a closer, more constructive working relationship between the two would lead to better outcomes.

It was clear that none of the participants saw a future without specialist cyber models. The models have increased the level of understanding of cyber risk and are essential to giving insurers a view of the risks they take on. When comparing newer cyber-model vendors with established catastrophe-model vendors, it was felt that the newer vendors provided richer data on insureds and aggregation paths. It was also felt that the cyber security insights were likely to be better in the newer vendors' models, but it was accepted that collaborations between new and established vendors could close this gap. However, the group all had, and continue to have, teething troubles in their relationships with vendors, which were hindering the usefulness of the models. This article distils the key themes from the discussion and explains why the requests are important to the users.

The users wanted clearer analysis of what caused results to vary at different points in time

One feature missing from the newer vendors' models, and which could be improved in the established vendors' models, was the ability to perform an analysis of change between runs. This is important because it enables the user to better understand why results have changed. Examples of useful splits are changes due to model methodology, model parameterisation, additional exposure volume, and the riskiness of the portfolio companies. The participants considered this particularly important when using the model for regulatory reporting of Realistic Disaster Scenarios (RDS) and explaining the differences between RDS submissions to management.

The users wanted to be able to have a deeper dialogue and understand what action was taken in respect of their feedback

Users had all seen their understanding increase over time and, with it, questions arising in areas they had previously not questioned or been aware of. They recognised that they were asking numerous questions in order to ensure that they could respond appropriately to management about each model and the results it was producing.

Given the fast-evolving nature of the models, the participants understood that documentation might take time to catch up to the standard they required. They all agreed, however, that a better level of base documentation than was currently available should be possible. When asked, the vendors were able to explain some of the concepts and assumptions that would be covered in that base documentation.

The lack of documentation was felt to be a bigger issue where the provider was taking its own approach to an RDS. The need for documentation here was driven by having to rationalise the differences between 1) results from the approach prescribed for those without a vendor model, and 2) results from a vendor solution that did not require assumptions in all areas and thus could fit a portfolio more closely.

The users also expressed concern about the difficulties they had experienced in validating the outputs from the newer cyber-model vendors. This was all the more important because they recognised the difficulties the vendors faced in structuring and parameterising their models. When using both the risk selection and portfolio management functions within modelling software, all participants had experienced results that conflicted with expectations, but had struggled to obtain the assistance needed to better understand the outputs. Better dialogue here would enable trust to be built in the models and would also facilitate their validation and improvement.

Finally, users felt it was difficult to know how to ensure that feedback was acknowledged and acted on. There was a strong desire to create an open, two-way relationship. Participants understood that the model providers could not promise features, but an explanation of how a request might be dealt with, or more discussion of why an alternative solution might be appropriate, would foster improved working relationships.

Explaining the rationale and likelihood of deterministic scenarios in greater detail would help users understand the modelling

Given the difficulty of understanding the range of possible cyber events, the participants expressed a preference for focusing on improving the estimation of deterministic event outcomes as a step towards a better understanding of probabilistic outcomes. Participants believed the approaches taken by the different vendors varied significantly in terms of the data collected and the models used. This was considered positive, as having a range of views contributed to improved understanding. Engaging further on the information provided by vendors' models would increase those models' credibility.

Users agreed that, for the majority of organisations, cyber risk exposure does not change significantly over a one-year period, although it was understood that the threat environment might change over this time. As a result, users felt that accepting less frequent model and company data updates in exchange for an analysis of change between model runs (which by nature requires the ability to rerun old model versions with older data) could be an acceptable trade-off.

Users need to be able to validate the models

The users struggled to see a future in which they could embed models whose results they were unable to validate. Given past experience, they considered that the status quo could continue for some time. They also thought that the changes needed to meet these requirements would take significant effort on the part of the vendors. They recognised that there is no clear-cut solution.

Conclusion

The feedback had two overarching themes. The first concerned the use of the models and the efforts users were making to better understand the messages from the model outputs. The second was the feedback loop to the model vendors, needed to ensure that the models continue to become more aligned with user requirements.

All the users were prepared to engage at greater length with the vendors and would welcome further discussion and engagement on the points above as well as other, more individual, points that arose.