People are constantly trying to change, improve, or augment tried-and-true methods in every aspect of project management, and risk management is no different. In this article, I will introduce you to (or perhaps remind you of) the concept of velocity and what it means when we talk about risk.
Most, if not all, of you will be familiar with the concept of probability matrices in risk management: through risk identification followed by qualitative analysis or assessment, we assign probability and impact to the risk events documented in our risk register. On most projects, the risk matrix and its legend – the explanation of how to read and apply it – live within our risk management plan.
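To make that concrete, here is a minimal sketch in Python of a 5x5 probability-impact grid and a legend for reading it. The High/Medium/Low thresholds are my own illustrative assumptions; the real ones belong in your risk management plan.

```python
def risk_score(probability: int, impact: int) -> int:
    """Classic qualitative score: probability times impact, each rated 1-5."""
    return probability * impact

def legend(score: int) -> str:
    """Illustrative legend; actual thresholds come from the risk management plan."""
    if score >= 15:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Print the 5x5 matrix: rows = probability (high to low), columns = impact (1-5).
for p in range(5, 0, -1):
    print("  ".join(f"{risk_score(p, i):2d}/{legend(risk_score(p, i)):<6}" for i in range(1, 6)))
```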
There is a continuous debate about, and numerous articles written on, the fact that this method is highly subjective and shaped by how we gather the information or data for it. Asking stakeholders what they perceive or believe the impact and probability to be has its share of possible flaws and deviations from reality. It is up to the PM and the team at this point to assess the data and take the tolerance levels of the organization and stakeholders into account. Variability in this area needs to be addressed, and this is where we apply precision to the data or use a formula not unlike PERT to derive a slightly weighted average number. This is all part of quality data assessment. Before we start working hard on fallback plans and determining contingency reserves, we want to solidify our information.
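As a rough sketch of what that PERT-like weighting might look like, assuming we collect optimistic, most likely, and pessimistic probability estimates from stakeholders (the function name and the 1–5 scale are illustrative assumptions, not a prescribed method):

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Classic PERT three-point formula: the most likely value carries 4x weight."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Three stakeholder views of one risk's probability on a 1-5 scale.
print(round(pert_estimate(2, 3, 5), 2))  # 3.17: slightly weighted toward the consensus view
```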
Working with precision in risk is not something that should happen without being documented and supported by the Sponsor. You would need to state in the approach section of your risk management plan that you are doing this and why it is necessary. Some projects will require this, while others will not.
There are several ways one can “play” with the data to make it better fit the actual context. Beyond precision, in recent years I’ve seen velocity added to our formula mix. Velocity here refers to the time it takes for a risk to impact our objectives. Low velocity means you have more time to devise a plan and react to the occurrence of that risk, while high-velocity risks can be lethal because there is no time to respond. Velocity depends very much on the risk itself. There is no correlation with known-unknowns versus unknown-unknowns here; risks in either set may have any velocity.
So how does this influence our data?
Using velocity, you modify the original P (probability) x I (impact) formula and introduce a velocity factor that increases the risk score, thereby changing the rank and position of the risk in the overall scheme of things. The new formula with velocity would then read:
(P (probability) + V (velocity)) x I (impact)
As an example: a risk event that would have ranked High Probability (5) and High Impact (5) on a scale of 1–5 would sit at 25. For that particular matrix, it would be the highest-ranking risk possible. Now if we modify this risk event to include a factor of 5 for High Velocity, the new risk score increases to 50: it doubles the overall magnitude of that risk. Using velocity puts more “attention” on some of the risks and can therefore take some of the surprise factor away.
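Here is a minimal sketch of both scores side by side on a made-up register (the risk names and ratings are illustrative assumptions), showing how velocity reshuffles the ranking:

```python
# Hypothetical risk register entries: (name, probability, impact, velocity), each 1-5.
risks = [
    ("Key vendor insolvency", 5, 5, 5),  # high velocity: no time to respond
    ("Scope creep",           5, 5, 1),  # low velocity: impact builds slowly
    ("Regulatory change",     3, 4, 4),
]

def classic_score(p: int, i: int) -> int:
    return p * i                  # traditional P x I

def velocity_score(p: int, i: int, v: int) -> int:
    return (p + v) * i            # velocity-adjusted (P + V) x I

# Rank by the velocity-adjusted score, highest first.
for name, p, i, v in sorted(risks, key=lambda r: velocity_score(r[1], r[2], r[3]), reverse=True):
    print(f"{name}: P x I = {classic_score(p, i)}, (P + V) x I = {velocity_score(p, i, v)}")
```

Note how the two 25-point risks, tied under P x I, separate sharply once velocity is added.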
Adding a velocity rating to our equation does not change the fact that the underlying data still comes from asking our stakeholders and is therefore perception-based and prone to flaws. When one works with risk, one always has to remember that risk is not a reality but a possibility. One needs to work with as factual a set of data as one can in order to anticipate it as well as possible.
Over the years, people have tried to change the basic formula to account for a number of scenarios and more complex situations that can affect our risks. In many cases, this only adds to the complexity of the analysis itself without producing a better model for anticipating risks. Think of attempts to deal with watch-list risks, trigger identification as a means of prediction, and other modifications applied to risk data. More icing on top does not make the cake better.
What I say to all of this data manipulation is that it takes expertise, knowledge, and time that are sometimes not present in a project or organization. In that case, why not stick to the basics and do them well?
If you are a PM in an organization right now doing at least the basic risk management processes and you can keep your projects one step ahead of disaster, I applaud you. I would also say: keep it simple (the KISS principle) and don’t mess around with it. If it works, why fix it?
I am a firm believer that the only other tools that could help us here are a crystal ball (a real one, not a prop) and a time machine, neither of which I own. Do you?