Risk Management

From expected variation to chaos; from Bayes inference to sampled data; and black swans


It's fascinating; it's essential

The fascinating thing about risk management is the meld of the analytical with the subjective: mathematics with human factors. Consequently, here at Square Peg we have never thought of risk management as a matter of chance or luck.

Games of chance, like the roll of the dice, are governed by immutable rules of probability: there is no opportunity for risk management because there is no means to modify the mathematical rules. Everything else requires exercising judgment, doing measurement [or accounting], and forecasting. These are the core skill sets of risk management.

Risk management, as a business process, is a relative latecomer. Until the 'west' imported Arabic numerals in the 10th century, there could be no written calculations of accounting; until the rules of probability were understood by the end of the 17th century, there could be no numerically based forecasting. But now entities everywhere have these tools, not only for business but for the business of projects.

Risk management has always been an essential, if not central, component of project management.

Risk management is the activist intervention of judgment and management, combining the mathematical description of past performance with objective and subjective forecasts of future performance, events, and conditions.

To reason like Bayes

Project managers often face the task of evaluating the plausibility of an event that would affect project performance. Plausibility sits on the “uncertainty-to-risk” spectrum, which runs from “possible —> plausible —> probable —> plannable”.

For the plausible hypothesis, we surmise something we can’t observe directly; we can only observe actual outcomes. For example, we might hypothesize that a coin is not fair. We cannot ‘observe’ an unfair coin [unless it has two heads or two tails]; we can only observe the outcomes of testing the coin for fairness.

Putting it together, in the a priori timeframe we hypothesize a possible event and estimate its plausibility. Then, in the posterior timeframe, we observe actual outcomes. The outcomes may differ from what we hypothesized. We try to draw an inference about why we observe what we do, and we estimate what adjustments need to be made to the a priori estimates so that they are more accurate next time.

An eighteenth-century English mathematician by the name of Thomas Bayes was among the first to think about the plausible-hypothesis problem. In doing so, he more or less invented a different definition of probability, one different from the prevailing conventional definition based on chance. Bayes posited:

... probability is the degree to which the truth of a situation—as determined by observation—varies from our expectation for that situation.
Bayes’ idea is the plausibility definition of probability, albeit in his terms. 
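This a priori-to-posterior reasoning can be sketched numerically. The Python sketch below applies Bayes' rule to the unfair-coin example; the prior beliefs, the two candidate coin biases, and the flip counts are hypothetical illustrations, not figures from the text.

```python
# A minimal sketch of Bayes-style updating for the unfair-coin example.
# All numbers (priors, biases, flip counts) are hypothetical.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities from priors and the likelihood
    each hypothesis assigns to the observed evidence."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Two hypotheses about the coin: fair (P(heads)=0.5) or biased (P(heads)=0.8).
# A priori, we lean strongly toward fairness.
priors = [0.9, 0.1]          # P(fair), P(biased)
p_heads = [0.5, 0.8]

# Posterior timeframe: we observe 8 heads in 10 flips.
heads, flips = 8, 10
likelihoods = [p**heads * (1 - p)**(flips - heads) for p in p_heads]

posterior = bayes_update(priors, likelihoods)
print(posterior)  # belief in the "biased" hypothesis rises
```

Under these assumed numbers, the run of heads moves belief in the "biased" hypothesis from 10% to roughly 43% — the adjustment to the a priori estimate that the observation warrants.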

Once in a million

And then there's the "black swan": the once-in-a-million occurrence that's beyond predictability in any statistical sense. Unpredictable risk is what we call uncertainty, but what of uncertainty beyond three sigma? What economics of risk management would prepare for such an eventuality?

Well, actually, there is a concept of the "one percent doctrine" (made famous by Ron Suskind in a book of the same title):
Even if the possibility is unimaginably low, if the impact is unimaginably high, then consider the event a near certainty and take measures to mitigate.
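The doctrine's arithmetic can be sketched as an expected-loss comparison. The probability, impact, and mitigation figures below are invented for illustration only.

```python
# A sketch of the "one percent doctrine" arithmetic: a tiny probability
# multiplied by a catastrophic impact can yield an expected loss that
# dwarfs the cost of mitigation. All figures are hypothetical.

probability = 0.01            # "unimaginably low" chance of the event
impact = 500_000_000          # catastrophic loss if it occurs
mitigation_cost = 1_000_000   # cost of acting as if the event were certain

expected_loss = probability * impact   # 5,000,000 under these assumptions

# The doctrine: when the expected loss exceeds the mitigation cost,
# treat the event as a near certainty and mitigate.
mitigate = expected_loss > mitigation_cost
print(expected_loss, mitigate)
```

With these assumed figures, the expected loss is five times the mitigation cost, so the doctrine says to mitigate even though the event itself is a hundred-to-one shot.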