Uncertainty in security risk analyses

Uncertainty can be defined as the unquantifiable likelihood of an event, which could be either in the past or the future. Certainty, on the other hand, can be defined as having absolute knowledge of an event. For example, to be certain that your computer was not infected by a virus in the past, you would have had to understand, observe and verify every bit of code that was ever run on your machine since it was created. Equally, to be certain that your machine will not be infected in the future, you would have to understand, observe and verify every bit of code that may run on your machine in the future, as well as having complete knowledge of the past. Both are impractical to the point of impossibility; no one has the time or the knowledge to succeed at such a task.

It can be argued, however, that it is easier to be certain about an event that has already happened than about one that hasn't happened yet. This is because we have no knowledge of the future but may have knowledge of the past.

It feels good to be certain; certainty is precise and allows you to build order that can be used for prediction. It is very hard to make decisions under uncertainty, and as humans we are not good at it: we either play the uncertainty down or play it up, but rarely take a balanced approach.

We can drastically reduce the level of uncertainty for a given event if we have close to perfect knowledge of every dynamic that affects it. In general we are fairly certain about events, past or future, of which we have significant experience and knowledge, but we should be very clear about the events we are uncertain of due to a lack of knowledge.

When it comes to security risk analysis I think it is best to be pragmatic, so that we can produce some meaningful and beneficial results. Otherwise we might find ourselves trying to enumerate every single possible threat and vulnerability, however unlikely, and then trying to protect against them all, which in reality is impossible.

To be certain about all the security risks applicable to a system, you would need perfect knowledge of all threats, all vulnerabilities and the system you are performing the risk analysis on. This is impossible, as systems are complex, heterogeneous and dynamic. Misunderstood dependencies, business processes, system boundaries and levels of integration, to name but a few, introduce uncertainty into our knowledge of a system. Software continually changes, and with those changes vulnerabilities can be unknowingly introduced. Existing unknown vulnerabilities are frequently being discovered, and likewise new attack methods are being introduced as vast computing resources become more attainable by adversaries.

It is also important to note that the history of security incidents has no bearing on future incidents: we cannot use historical data to be certain about events that may happen in the future. There is simply no way to predict an unknown vulnerability; you would have to exhaust every possible eventuality, which again is not possible.

We can, however, use historical information to protect against known vulnerabilities and to give us an idea of the likelihood and frequency of attack.
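
To make that concrete, here is a rough sketch of how historical incident counts might be turned into a frequency estimate; the attack type and the numbers are assumptions for the sake of the example.

```python
# Rough sketch: estimate an annual rate of occurrence for a known attack type
# from historical incident counts. The attack type and numbers are made up.
incidents_per_year = {2019: 2, 2020: 1, 2021: 3, 2022: 0, 2023: 2}  # e.g. phishing incidents

observed_years = len(incidents_per_year)
total_incidents = sum(incidents_per_year.values())
annual_rate = total_incidents / observed_years

print(f"{total_incidents} incidents over {observed_years} years")
print(f"Estimated annual rate of occurrence: {annual_rate:.1f} per year")
# This only covers attack types we have already seen and recorded; it tells us
# nothing about vulnerabilities or attack methods that have never been observed.
```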

The risk analysis itself can also be subject to the following uncertainties:

  • Misunderstood system information.
  • Unpredictable user behavior and natural disasters.
  • Measurement below the limit of detection.
  • Inclusion of irrelevant information.
  • Lack of weighting or wrong weighting of results.
  • Measurement bias originating from experience, knowledge and perception of risk.
  • Motive of risk assessment.

Perfect knowledge is simply not possible, so it is crucial to respect the concept of uncertainty when performing a risk analysis, since uncertainty is present at every stage.

Quantitative risk analysis attempts to define certainty as a number. There is no room for uncertainty in quantitative risk analysis, because mathematical formulas are used to generate a range of risks ranked by value. This requires a value to be generated for each attribute measured: values need to be defined for attributes such as loss of productivity and the frequency and likelihood of attack, none of which are inherently expressed by a numerical value. Every measurement used in the analysis is reduced to a number.
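
As an illustration, here is a minimal sketch of one common quantitative approach, annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence); the assets and figures are assumptions for the example, not taken from a real analysis.

```python
# Minimal sketch of a quantitative ranking using ALE = SLE * ARO.
# Every input is an assumed point value, invented for this example.
risks = [
    # (risk, single loss expectancy in £, annual rate of occurrence)
    ("Ransomware outbreak",        250_000, 0.1),
    ("Phishing credential theft",   20_000, 2.0),
    ("Web server defacement",        5_000, 0.5),
]

ranked = sorted(
    ((name, sle * aro) for name, sle, aro in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, ale in ranked:
    print(f"{name}: annualized loss expectancy £{ale:,.0f}")
```

Notice that every input is a single point value, which is exactly where the uncertainty gets hidden.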

The results of a quantitative risk analysis do not express uncertainty very well: system owners are presented with a specific set of ranked risks, each clearly defined by a value. This is problematic, as it does not give a complete picture of the reality of uncertainty.

Therefore it is critical to state and justify the reason behind each value for every single component, attribute and function used during the risk analysis. This allows the level of uncertainty to be implicitly expressed in the narrative of the risk analysis.

Tweaking the numbers can be used to overstate or understate the risk, so it is important to understand the weighting used and the types of functions used to calculate risk. For example, loss of service is likely to be tolerated far more readily than loss of confidentiality, so we could not use the same function to express both attributes.
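
As a sketch of why the same function should not be reused, here are two assumed impact functions, one for availability that tolerates short outages and one for confidentiality that escalates quickly even for small exposures; the shapes and thresholds are purely illustrative assumptions.

```python
# Illustrative only: two different impact functions, because loss of service
# and loss of confidentiality are not tolerated in the same way.

def availability_impact(hours_down: float) -> float:
    """Impact of downtime grows slowly at first - short outages are tolerated."""
    return min(10.0, hours_down ** 1.2 / 5)

def confidentiality_impact(records_exposed: int) -> float:
    """Impact of exposure escalates quickly - even small leaks score highly."""
    return min(10.0, 3.0 + 2.0 * records_exposed ** 0.25) if records_exposed else 0.0

print(f"4-hour outage:        {availability_impact(4):.1f} / 10")
print(f"48-hour outage:       {availability_impact(48):.1f} / 10")
print(f"100 records exposed:  {confidentiality_impact(100):.1f} / 10")
print(f"100k records exposed: {confidentiality_impact(100_000):.1f} / 10")
```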

The concept of uncertainty, on the other hand, does not go against the methods used to carry out a qualitative risk analysis. In fact this method recognizes that there is uncertainty, and the results of the analysis are usually expressed through a non-numeric value. There is no attempt to be precise when hypothesizing on the likelihood or impact of an attack; uncertainty is not hidden in any way. Certainty is expressed through an ordinal scale rather than a precise probability, for example low-medium-high or a scale from 1 to 4.
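
A minimal sketch of such a scale, using an assumed low-medium-high likelihood and impact matrix; in practice an organization would define its own matrix.

```python
# Sketch of a qualitative risk rating on an ordinal low/medium/high scale.
# The matrix itself is an assumption made for illustration.
RISK_MATRIX = {
    # (likelihood, impact) -> risk rating
    ("low", "low"): "low",      ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",   ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",  ("high", "medium"): "high",     ("high", "high"): "high",
}

def rate(likelihood: str, impact: str) -> str:
    """Look up the qualitative risk rating for a likelihood/impact pair."""
    return RISK_MATRIX[(likelihood, impact)]

print(rate("medium", "high"))  # -> high
```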

As in quantitative risk analysis, it is important to state the justification for the scale used and the reason for the values chosen, so that the system owner can understand the certainty or uncertainty being assumed.

Everything we do in life has some form of risk associated with it. As the famous adage goes, "Whatever can go wrong, will go wrong": risks exist whether or not you can quantify when something will happen.

Being uncertain about a security event whose impact on the business is negligible is a somewhat acceptable uncertainty. But if lives or great financial loss are at risk, it is important for us to understand the threats and vulnerabilities as much as possible, to reduce the uncertainty of the risks involved and to figure out how to treat them.

I believe uncertainty can be reduced when determining risks if the window of time we are looking at is defined and short. This is because we are only concerned with a small portion of the future, which can be predicted much more easily than what will happen from now until the end of time.

For example, we can be certain that the risk of an account protected by a strong password being compromised by way of a brute force attack within 7 days is very low. But we would be uncertain of the risk of the same account being compromised within 7 years, because there is more time for things to happen. The account could still be compromised within 7 days by brute force if someone was hiding a quantum computer we didn't know about, but this is highly unlikely.
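
A back-of-the-envelope sketch of that reasoning, with an assumed password make-up and guess rate; the arithmetic only covers brute force at the assumed rate, and says nothing about the other ways the account could be compromised over the longer window.

```python
# Back-of-the-envelope estimate of brute-force exposure over a time window.
# The password make-up and guess rate are assumptions for the example.
charset_size = 94            # printable ASCII characters
password_length = 12         # random 12-character password
guesses_per_second = 1e10    # assumed attacker capability

search_space = charset_size ** password_length
seconds_per_year = 60 * 60 * 24 * 365

for label, window_years in (("7 days", 7 / 365), ("7 years", 7)):
    guesses = guesses_per_second * seconds_per_year * window_years
    fraction_searched = min(1.0, guesses / search_space)
    print(f"{label}: fraction of keyspace searched ~ {fraction_searched:.2e}")
```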

I say this because you cannot be certain when talking about the future in any way, shape or form.

Instead of trying to figure out every single thing that can go wrong and when, my opinion is that it is better to focus on how you are going to react and respond when things do go wrong. Of course it goes without saying that you must protect against all the known knowns; that way uncertainty plays a smaller role.
