According to Morgan Stanley, the cyber insurance market will soon be worth $8bn-$10bn. This number alone is a good indicator of a current trend: more and more entities seem to understand and accept that computer systems (eventually) get hacked, whether through technical flaws, unpatched systems, the often overlooked human error made by end-users or experts, or, frequently, a combination of all these and other means. Due to ubiquitous computing and ever-growing data collection, these hacks and breaches increasingly affect all companies and, more generally, all our lives.

It is folklore among the security community that it is easier to attack a system than to secure it. One intuitively accessible explanation for this is that while someone trying to secure a system has to get everything right, an attacker only needs to find a single flaw in the whole system. While overall system security seems to have improved drastically in the last two decades, the impact of hacks seems to be ever increasing, too. Enterprises are becoming increasingly aware of this issue and eager to protect themselves, their reputation, and their intellectual property from cyber-attacks.

While cyber insurance might be one of the means by which companies can protect themselves from the damage caused by a cyber-attack, enterprises seem to be left without advice or tools to make decisions, such as how to prioritise investments in order to improve the security posture of their systems, or how to quantify the current risk. In other words, enterprises seem to struggle to find the right balance between costs and benefits, especially as systems, and the risks they are exposed to, seem to constantly change and evolve.

This is exactly where Hermeneut comes to the rescue. It is a European Union Horizon 2020 research project that aims to develop “a holistic risk assessment model and approach to cyber-security cost-benefit analysis”. It also tries to incorporate some aspects that are often underestimated; among them are the above-mentioned human factors, the role of intangible assets, and the strategies attackers deploy to find vulnerabilities and assets at risk.

Our main contribution is a report called the Benefit Harm Index (BHI). The document is written in a language accessible to cyber security experts and economists alike. Where appropriate, it explains the economic context and history, and how the BHI diverges from certain orthodox economic reasoning, drawing inspiration from the radical economics of the Austrian School. By providing a way to quantify and compare the system’s benefit growth against the system’s harm growth, it aims to significantly improve the estimation of the consequences of cyber-attacks.

While this blog post does not intend to present the details of the BHI, it highlights some aspects of the methodology. Written by Paul Galwas and Brian MacAulay, the report combines insights from cyber security with dynamic economic modelling, integrating the important contribution that intangible assets make to growth (as well as some methods borrowed from physics). Quantifying the growth of harm and benefit is achieved by classifying each into certain complexity classes, similar to how computer scientists classify the running times of algorithms with regard to their input. For the BHI, a straightforward way to compare these classes of growth is to map a number or “level” to each class. To calculate the BHI for a single point in time, you simply subtract the “benefit level” from the “harm level”. If the outcome is positive, it indicates that the benefit will eventually be overtaken by the harm; if negative, the opposite is true.
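As a rough illustration of this idea, mapping growth classes to integer levels and subtracting them could look like the sketch below. Note that the class names and level assignments here are our own assumptions for the example, not the actual taxonomy used in the BHI report:

```python
# Hypothetical growth classes, ordered from slowest to fastest growth.
# The names and level numbers are illustrative assumptions only.
GROWTH_LEVELS = {
    "constant": 0,
    "logarithmic": 1,
    "linear": 2,
    "polynomial": 3,
    "exponential": 4,
}

def benefit_harm_index(benefit_class: str, harm_class: str) -> int:
    """BHI at a single point in time: harm level minus benefit level.

    A positive result means harm grows in a faster class than benefit,
    so the benefit will eventually be overtaken by the harm.
    """
    return GROWTH_LEVELS[harm_class] - GROWTH_LEVELS[benefit_class]

print(benefit_harm_index("linear", "exponential"))   # 2: harm outpaces benefit
print(benefit_harm_index("exponential", "linear"))   # -2: benefit outpaces harm
```

The sign of the result, rather than its magnitude, carries the headline message: whichever quantity sits in the faster growth class eventually dominates.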

Our approach is not to calculate such a number for a single point in time but rather for relative trajectories and, conscious of the potential for emergence, to monitor it periodically. To achieve this, we compute the components of the relationship (basically a difference between two numbers, each indicating a complexity class) and project the resulting numbers over time.
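One minimal way to picture this kind of projection (again purely illustrative; the curves, coefficients, and time scale below are our assumptions, not figures from the report): pick representative growth functions for benefit and harm, project both over discrete monitoring periods, and watch for the crossover that a positive index predicts:

```python
import math

# Illustrative growth curves; in the BHI these would follow from the
# complexity classes assigned to the system, not be chosen ad hoc.
def benefit(t):
    # linear benefit growth
    return 100 + 10 * t

def harm(t):
    # exponential harm growth (a faster complexity class)
    return 1 + math.exp(0.5 * t)

# Monitor periodically and report the first period where harm overtakes benefit.
for t in range(0, 21):
    if harm(t) > benefit(t):
        print(f"harm overtakes benefit around t = {t}")
        break
```

The crossover point depends entirely on the chosen constants, which is precisely why periodic monitoring matters: the complexity classes say the crossover will happen, while the trajectory says roughly when.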

The BHI identifies levels of control in terms of identifiable risk and how this can be managed. Risk mitigations imply that the system can be controlled in the presence of threat actors, so that their threat is reduced or effectively removed.

The BHI methodology also proposes a taxonomy for the “vulnerability level” of a system.

Vulnerability levels or classes range from unauthorised access and uncontrolled inputs to stochastic systems, peaking, in the worst case, in a new emergent state of the system. We model the states of a system in terms of a given scope and phase space with a given resolution, and use this as a measure of its intrinsic lack of controllability from the perspective of the defenders, who legitimately operate the system. Threats and vulnerabilities to components in the system vary fundamentally by class, and each vulnerability level requires radically different types of mitigation; the BHI report details examples of these. The vulnerability level of a component may be changed by reconfiguring components in the system, and some levels of vulnerability must be mitigated across systems, e.g. in the ecosystem.
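To make the ordering concrete, such a taxonomy could be encoded as an ordered enumeration. The four class names below are taken from the paragraph above; treating them as an exhaustive, strictly ordered list, and aggregating a system's level by its worst component, are our simplifying assumptions for this sketch:

```python
from enum import IntEnum

class VulnerabilityLevel(IntEnum):
    """Ordered vulnerability classes; higher means less controllable."""
    UNAUTHORISED_ACCESS = 1
    UNCONTROLLED_INPUTS = 2
    STOCHASTIC_SYSTEM = 3
    EMERGENT_STATE = 4  # worst case: a new emergent state of the system

def system_level(component_levels):
    """Assumption: a system is only as controllable as its least
    controllable component, so take the maximum level."""
    return max(component_levels)

levels = [VulnerabilityLevel.UNAUTHORISED_ACCESS,
          VulnerabilityLevel.STOCHASTIC_SYSTEM]
print(system_level(levels).name)  # STOCHASTIC_SYSTEM
```

Encoding the classes as comparable integers mirrors the report's observation that reconfiguring components can move a component between levels: a reconfiguration is simply a change of a component's assigned value.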

While this is no silver bullet for all problems in cyber security, the approach showcases a brilliant building block of the Hermeneut methodology, which aims to foster a culture of risk management for cyber security.

If this has piqued your interest, we recommend catching up on the press release, which will be published by June this year. If you are interested in the technical details, the report on the BHI will be published on our homepage, too.

Stay tuned, and follow us on Twitter and LinkedIn.


Related reading:

Crypto & Game Theory:

One of the few cases of a secure “system” and some interesting thoughts around that:

Estimating revenue by spam: