The Death of the Heat Map


BLOG POST: Kevin Libbos is Head of Business Advisory Nordics at Pulsen Integration. In this blog post he explains why Heat Maps should be avoided.

Is your organization still using Heat Maps to identify and action cyber threats? Well, you’re not alone.

Research indicates that 70% of organizations still use these ambiguous, color-coded, non-quantitative matrices to make critical decisions based entirely on subjective opinion.

Douglas Hubbard, a world-renowned cyber-security expert, says: *"They are a failure, they do not work."*

Dr. Tony Cox doesn't mince his words either: *"Heat Maps are worse than random!"*

I don’t agree that Heat Maps are complete failures or worse than random, as they generally encourage organizations to at least contemplate and identify threats. However, that’s the single, limiting value that I can really observe in them.

Heat Map ambiguity is a byproduct of human perception. What one person perceives as a high threat, the next person may rate as medium or low. You often see this when comparing a businessperson's assessment with an IT person's, as they have different priorities and needs.

So, which is it? Red, yellow or green? And how do you truly collect all participants' risk scores of high, medium or low and accurately assign this threat to one ‘very limiting box’ of high, medium or low?

The problem is obvious: there is no range, only a color-coded box to place your threat in.

There is also a second problem – Heat Maps give no indication of the actual cost of the threat.

Your CEO is truly interested in how much cyber threats will impact budgets and whether the company is within its risk appetite (can the company afford the costs that are likely to be incurred?). And if it cannot, how can the risk be lowered to reduce those likely costs?

**A Picture (or a Number, in This Case) Is Worth a Thousand Words**

There is nothing beautiful about this graph; there are no pretty color codes arranged in perfect symmetry, creating an illusion of accuracy.

Instead of trying to create art, think about how much your CEO will value an end result that includes the expected total loss and the point at which inherent risk converges with risk tolerance (the probability that you will spend more than you can afford).

The graph is from an actual recent customer case. As you can see, the expected total cost of cyber threats for the next year is 185,000 USD (with a 90% confidence interval), and there is a 12% chance that the company will spend beyond its means (more on cyber threats than it can afford).

I promise that, at auction, your CEO would bid more for this graph than for the most beautiful Heat Map!

I prefer to use Hubbard's one-for-one substitution model, which replaces each subjective rating with unambiguous probabilities of likelihood and impact. These feed into computer simulations that properly "add up" the risk and cost of each threat.

Essentially, Hubbard follows Monte Carlo methodology, also known as **multiple probability simulations.**

It is ideal when facing uncertainty in probabilities, which we all know to be the case with cyber threats. Rather than replacing an uncertain variable with a single average number (or color), it samples every possible outcome across the stipulated range, so that repeated simulation runs reveal the distribution and, ultimately, the probability of each outcome.

**The Steps:**

- Identify Critical Systems
- Identify Threats per System
- Assign a Threat Probability
- Assign a Lower & Upper Bound for Cost
- Let the Monte Carlo Simulation Go to Work
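The steps above can be sketched in a few dozen lines of Python. This is a minimal illustration, not the post's actual model: every threat name, probability, cost bound, and the risk-tolerance figure below is made up, and I assume Hubbard's common choice of a lognormal distribution fitted to the 90% confidence interval for costs.

```python
import math
import random

random.seed(0)

# Hypothetical threat register: (name, annual probability of occurrence,
# lower and upper bound of a 90% CI for the cost if it occurs, in USD).
threats = [
    ("Ransomware",  0.30, 50_000, 500_000),
    ("Data breach", 0.15, 20_000, 300_000),
    ("DDoS outage", 0.40,  5_000,  80_000),
]

Z90 = 1.645  # z-score that brackets the middle 90% of a normal distribution

def sample_cost(lower, upper):
    """Draw one cost from a lognormal whose 90% CI is [lower, upper]."""
    mu = (math.log(lower) + math.log(upper)) / 2
    sigma = (math.log(upper) - math.log(lower)) / (2 * Z90)
    return math.exp(random.gauss(mu, sigma))

def simulate_year():
    """One simulated year: each threat may or may not occur."""
    total = 0.0
    for _name, prob, lower, upper in threats:
        if random.random() < prob:          # did the threat occur?
            total += sample_cost(lower, upper)
    return total

runs = [simulate_year() for _ in range(100_000)]
expected_loss = sum(runs) / len(runs)
risk_tolerance = 200_000  # hypothetical: the most the company can afford
p_exceed = sum(r > risk_tolerance for r in runs) / len(runs)

print(f"Expected annual loss: {expected_loss:,.0f} USD")
print(f"Chance of exceeding risk tolerance: {p_exceed:.1%}")
```

The two printed numbers correspond directly to the outputs discussed earlier: an expected total loss and the probability of spending beyond your means.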

Steps 1-2 are no different from constructing the traditional Heat Map, so I will explain steps 3-4.

**Assign a Threat Probability**

There are several variables to consider when assigning probabilities to specific threats:

- Specific Threat History at Your Company
- Specific Threat History Using Industry Data
- Specific Threat Trend (is the threat increasing or decreasing in frequency)
- Diagnostics and/or Pen Testing
- What Do Your Cyber Security Experts Say?

Much of this data is easily and readily available, so you should be able to provide a confident estimate of probability. The beauty of Hubbard's model is its recognition and acceptance that you won't be 100% correct, so it uses a 90% confidence interval in its calculations. Put another way:

‘I am 90% confident of a 30% chance of a ransomware attack at our company within the next year’

**Assign a Lower & Upper Bound for Cost**

This part of the exercise can seem daunting if it’s your first time, but in fact costs can be estimated quite easily and quickly. The first step is to break down and decompose all possible costs for a threat.

Time: User Downtime, Investigation Time, Notification and Monitoring time, Other.

Direct Costs: Information Disclosure, Legal-GDPR, Financial/IP Theft, Reputation and other Interferences.

A decomposition table of these components can be built, with each component's cost summed into the Monte Carlo simulation.
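To make the decomposition concrete, here is a hedged Python sketch of that summation for a single hypothetical incident. The component names follow the categories above, but every cost bound is invented for illustration, and I again assume a lognormal fitted to each 90% confidence interval.

```python
import math
import random

random.seed(1)
Z90 = 1.645  # z-score bracketing the middle 90% of a normal distribution

# Hypothetical decomposition for one threat (e.g. a ransomware incident):
# each cost component carries its own 90% CI, in USD.
components = {
    "User downtime":               (10_000, 120_000),
    "Investigation time":          ( 5_000,  40_000),
    "Notification and monitoring": ( 2_000,  25_000),
    "Legal / GDPR":                ( 5_000, 200_000),
}

def sample_component(lower, upper):
    """Draw from a lognormal whose 90% CI is [lower, upper]."""
    mu = (math.log(lower) + math.log(upper)) / 2
    sigma = (math.log(upper) - math.log(lower)) / (2 * Z90)
    return math.exp(random.gauss(mu, sigma))

# Each simulation run sums one sample per component into a total
# incident cost; many runs give the distribution of that total.
totals = [
    sum(sample_component(lo, hi) for lo, hi in components.values())
    for _ in range(100_000)
]
print(f"Mean incident cost: {sum(totals) / len(totals):,.0f} USD")
```

Summing per-component samples inside each run, rather than summing the bounds themselves, is what lets the simulation capture how unlikely it is for every component to hit its worst case at once.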

Estimating costs is a learning process, and some costs take more time and experience to pin down than others. For example, a common mistake for the inexperienced is to estimate the upper bound of GDPR fines at 4% of revenue. Do not do this!

GDPR fines are broken down into four categories, and a fine equal to 4% of total revenue almost never happens. Your company's reaction to an incident, its transparency and openness with the authorities, and how it resolves the incident all factor into the fine level. Even in the worst-case scenario (level 4), the default line is 750,000 Euros. So, reflect on your maturity in these areas to determine a probable fine level.

Another example is User Downtime. Ask yourself: if a server is down, how fast could we get it up and running again, and what is the longest we could potentially be down?

At first you might say: 'I really have no idea; it is much too complicated to predict.'

Dig a little deeper. What is the longest a server has ever been down? 72 hours, perhaps? And the shortest? 30 minutes, maybe? You might respond: yes, that sounds about right; we have been down for 72 hours before, but it's very rare. We could even be down longer than that in an especially bad, but even rarer, situation.

Again, I come back to the beauty of Monte Carlo and the 90% confidence interval. Because those extremes are rare, the range becomes 90 mins to 72 hours, leaving a 10% chance that downtime could be shorter or longer than expected. Put another way:

‘I am 90% confident that user downtime would be between 90 mins and 72 hours’

This range is then applied to the Monte Carlo simulation for all possible outcomes.
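As a quick sanity check of what that interval means in a simulation, the short Python sketch below (my illustration, not the post's model) fits a lognormal distribution to the stated 90-minute-to-72-hour range and counts how many simulated downtimes fall outside it.

```python
import math
import random

random.seed(2)
Z90 = 1.645  # z-score bracketing the middle 90% of a normal distribution

# The downtime range from the text: 90 minutes to 72 hours.
lower, upper = 90, 72 * 60  # in minutes

# Fit a lognormal so that [lower, upper] is its 90% confidence interval.
mu = (math.log(lower) + math.log(upper)) / 2
sigma = (math.log(upper) - math.log(lower)) / (2 * Z90)

samples = [math.exp(random.gauss(mu, sigma)) for _ in range(100_000)]
outside = sum(s < lower or s > upper for s in samples) / len(samples)
print(f"Fraction of runs outside the stated range: {outside:.1%}")
# By construction this lands close to the 10% the interval leaves open.
```

Roughly one run in ten falls outside the stated range, which is exactly the residual uncertainty the 90% confidence interval is meant to express.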

The moral of this story is that cost calculation might seem daunting at first, but with a little practice and experience the estimates become usefully precise, and that is infinitely more valuable to everyone at your company.

So, kick those Heat Maps to the curb and give your CEO and company some real value. Determine the true probability and cost of cyber threats; it will make the rationale for your future mitigation strategies and projects all the easier!

The time has come to take that beautiful Heat Map down from your office wall!

It is ‘The Death of the Heat Map!’

Yours sincerely,

Kevin Libbos

Head of Business Advisory.
