Analysing Cyber Value-at-Risk
As both the volume of cyber-attacks and the harm they cause continue to rise, it is becoming critical that organisations can demonstrate reasonable efforts to reduce cyber-risk. However, the risk responses and controls typically viewed as necessary, or even essential, by the professional and expert community are generally not underpinned by any framework that supports rigorous reasoning about, or qualification and quantification of, the benefits of deploying them. As a result, the real value of compliance, or of varying degrees of compliance, with risk-control standards cannot be reasoned about or measured in a scientific, unambiguous or verifiable way.
Academics within Oxford’s Computer Science Department have partnered with AXIS Insurance Company (previously Novae Group plc) to tackle this challenge and probe the effectiveness of current defences against cyber-attacks, and of the standards set by international bodies that businesses use to measure the sufficiency of their cyber-security efforts. In their first project, they proposed a model that defines the associations between risk controls on one side and, on the other, assets, Cyber Value-at-Risk (CVaR) and the different types of cyber-harm that may occur in a typical organisation. The research also included an initial validation of the model and its core analysis through interviews and focus groups with industry professionals in cybersecurity and cyber-insurance.
[See: white paper]
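The white paper's formal structures are not reproduced here, but the core idea of associating each risk control with the assets it protects and the harm types it mitigates can be sketched in a few lines. All names and data below are hypothetical illustrations, not taken from the model itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskControl:
    # Hypothetical example structure: one control is associated with
    # the assets it protects and the harm types it mitigates.
    name: str
    protects_assets: tuple
    mitigates_harms: tuple

def uncovered_harms(possible_harms, deployed_controls):
    """Return the harm types that no deployed control is associated with."""
    covered = {h for c in deployed_controls for h in c.mitigates_harms}
    return sorted(set(possible_harms) - covered)

# Illustrative controls and harm types (invented for this sketch).
firewall = RiskControl("network firewall",
                       ("customer database", "web servers"),
                       ("data breach", "service disruption"))
backups = RiskControl("offsite backups",
                      ("customer database",),
                      ("data loss",))

gaps = uncovered_harms(
    ["data breach", "data loss", "reputational damage", "service disruption"],
    [firewall, backups],
)
print(gaps)  # → ['reputational damage']
```

Even this toy version shows the kind of query such associations enable: given a set of deployed controls, which harm types remain unaddressed.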
In the second, ongoing project, titled "Analysing Cyber Value-at-Risk", the aim is to develop the model further and provide a means to calculate CVaR by:
- conducting research and experimentation to understand the following from an organisation’s perspective:
  - Is our approach to integrating distinct datasets relating to assets, risk controls, harms and risk likelihood viable, and does it have utility? Can we predict the consequences of adopting risk controls in terms of risk management and potential exposure to harm? Can this yield useful advice and recommendations on risk-control practice?
  - To which aspects of the model are the results particularly sensitive, where might we have an acute need for accurate data, and precisely what form does that data need to take? If we need to facilitate its collection, how might or should that be done?
  - Is the data that companies currently collect sufficient to estimate the probability of losses accurately, or should a new claim form be designed to support better prediction?; and
- developing a prototype analytical system that implements the modelling approach, including the structures and formalisation from the first project, and allows data to be entered into a tool suitable for experimentation.
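The project text does not prescribe how CVaR is calculated, but value-at-risk measures are commonly estimated as a high quantile of a simulated annual-loss distribution. The following is a minimal sketch of that general approach, assuming a Poisson incident frequency and lognormal incident severity with parameters invented purely for illustration:

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw a Poisson-distributed count via Knuth's algorithm (fine for small lam)."""
    threshold = math.exp(-lam)
    count, product = 0, rng.random()
    while product > threshold:
        count += 1
        product *= rng.random()
    return count

def simulate_annual_total(rng, event_rate, sev_mu, sev_sigma):
    """Total loss for one simulated year: Poisson event count, lognormal severities."""
    n_events = sample_poisson(event_rate, rng)
    return sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n_events))

def cyber_value_at_risk(annual_losses, confidence=0.95):
    """Empirical quantile of the simulated annual-loss distribution."""
    ordered = sorted(annual_losses)
    index = min(len(ordered) - 1, int(confidence * len(ordered)))
    return ordered[index]

# Hypothetical parameters, purely for illustration: about two incidents per
# year, with a median severity of exp(10) ≈ 22,000 currency units.
rng = random.Random(42)
annual_totals = [simulate_annual_total(rng, 2.0, 10.0, 1.5) for _ in range(10_000)]
var_95 = cyber_value_at_risk(annual_totals, 0.95)
print(f"95% annual CVaR estimate: {var_95:,.0f}")
```

In the project's setting, the interesting work lies upstream of this final quantile: conditioning the frequency and severity inputs on the organisation's assets, deployed risk controls and observed harms, which is precisely where the integrated datasets discussed above would feed in.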
[See: white paper]