'Black swans' and 'perfect storms' become lame excuses for bad risk management (stanford.edu)
34 points by spathak on Dec 19, 2012 | hide | past | favorite | 50 comments


This is one thing which seems to me to be a genuine failing of an efficient market. Over the medium term, ignoring the low-frequency, high-risk event gives you a margin over your competition. Hence you succeed at their expense and they fail, get bought out by you, etc.

As a simple example (the general argument applies to all sectors and all forms of risk), consider a bank ("safe bank") keeping $X in reserve to handle unforeseen events, and another bank ("risky bank") that keeps only $X/2 in reserve.

Risky bank will have an advantage over safe bank at all times except when an event requiring between $X/2 and $X occurs. Should such a rare event occur, risky bank would fail and safe bank would survive.

Once the frequencies of such events drop to low enough levels (once every 5? 10? 20? 40? years), there is no market pressure on the riskier bank to plan for the problem (in fact the opposite: the market will destroy safe bank).

The actual time period is, I think, determined by how long it takes risky bank to out-compete safe bank and so drive it from the market as a significant force.
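A toy Monte Carlo sketch of this argument (all numbers are invented for illustration; `annual_edge` and `shock_prob` are hypothetical parameters, not real banking figures):

```python
import random

def simulate(years, reserve, annual_edge, shock_prob, rng):
    """Run one bank for `years`. Reserves are a fraction of capital (1.0 = $X).

    Capital not held in reserve earns `annual_edge` extra profit per year.
    With probability `shock_prob` per year a shock of random size up to 1.0
    hits; the bank fails if the shock exceeds its reserve.
    """
    profit = 0.0
    for _ in range(years):
        profit += (1.0 - reserve) * annual_edge
        if rng.random() < shock_prob and rng.uniform(0.0, 1.0) > reserve:
            return False, profit  # bank fails
    return True, profit

rng = random.Random(42)
trials = 10_000
risky_failures = 0
for _ in range(trials):
    safe_alive, _ = simulate(10, reserve=1.0, annual_edge=0.05, shock_prob=0.02, rng=rng)
    risky_alive, _ = simulate(10, reserve=0.5, annual_edge=0.05, shock_prob=0.02, rng=rng)
    if safe_alive and not risky_alive:
        risky_failures += 1

# Most decades the risky bank pockets the extra return; only rarely does a
# shock land in the ($X/2, $X] window where it fails and the safe bank survives.
print(f"decades where risky bank failed but safe bank survived: {risky_failures / trials:.1%}")
```

Lowering `shock_prob` stretches the window in which the risky bank out-competes the safe one, which is exactly the point about time horizons above.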


Three banking models:

1. There is not much in the way of market pressure in the "Great Moderation" banking system (weak regulation, deposit guarantees) to drive out risky banks - they are a net gain for investors (ignoring the agency effect of bankers screwing their shareholders through bonuses), since bank failures represent a form of subsidy to the financial system. Some shareholders get some of the stock wiped out, but it is all limited liability; others profit magnificently.

2. The libertarian banking system that we saw in the C19th (little regulation, no systematic deposit guarantees) does seem to have the right market pressure, but it seems to be yet more unstable, since we saw it face regular bank runs followed by grand-scale financial collapse. The 19th century was more economically unstable than the 20th century for this reason, with the UK (then dominant in finance) having 5 major financial crises, one of which (the 1873 panic) led to a depression lasting 2 years longer than the Great Depression. So your frequency's "low enough levels" seems to be around every 20 to 25 years.

3. So deposit guarantees of institutions in exchange for effective regulation that avoids the dangerous effects of excessive leverage, thus outlawing your risky bank, seems to be the way forward. Unfortunately it is in the interests of these financial institutions to subvert regulation, so designing such regulation is hard ("who could have foreseen that the banks' accounts could have mispriced risky assets and moved gigantic liabilities off balance sheet yet again?").

This isn't really a Black Swan issue, it's an agency issue where neither depositors nor regulators understand what banks are doing. But Black Swans do tend to crowd around important, hard to figure out areas of endeavour.


Sorry, I was only using banks as an example. My attempt to state the general case here:

https://news.ycombinator.com/item?id=4942341

I think it is exactly a black swan issue. The markets punish companies which do not compete sufficiently effectively.

Companies which plan for low-probability events are less effective in the medium term than companies which don't.


> This is one thing which seems to me to be a genuine failing of an efficient market.

You seem to assume that all risk is meant to be carried by the banks. As you note, such a thing is possible (up to the risk-bearing capacity of any given bank), but such banks would be very expensive. Consequently, most customers bank with riskier institutions and this places more of the risk back onto them.

So, in actual fact, this is the market doing what it does pretty well: solving a hyper-distributed problem with heterogeneous agents with numerous complex, incompatible preferences.

Sometimes we don't like the outcome. That doesn't mean that the market has "failed"; it just means that we don't like the outcome.


The market is a solution to a problem (or perhaps a set of problems). In the example in the grandparent's post, that solution does not adequately deal with the problem at hand. This makes it a bad solution.

Even if we can't think of a better solution right now, we shouldn't stick our collective heads in the sand when we notice errors in our current approach. Highlighting errors is important, because even if we can't fix them right now, we may be able to in the future. If we don't know about the flaws in our current approach, it is impossible to even attempt to think of ways to improve it.


The market is an emergent phenomenon. It's not a designed institution. Hayek talks about this misattribution as the root cause of a lot of misunderstanding.

People do think of ways to adjust for things about the market that they don't like. Those adjustments are generally imposed from outside by force of law and quite a few of them are later modified or removed because they had seriously unpleasant side-effects (such as the total disappearance of a market).

Being angry at an emergent phenomenon like markets is like being angry at the weather, or upset about evolution. It's pointless.


There are certainly aspects of our market economy that are emergent. Trade is as old as humanity itself, but not all aspects of the economy are so old, nor as immutable.

The modern corporation, for example, is a relatively new invention. Depending on your definition, anywhere between a few centuries and a few decades old. Compared to the time scale on which human evolution has taken place, that's practically nothing. In those decades or centuries, we and our ancestors have chosen particular shapes for our economy. Not all of those choices are final.

Whether the choices that influenced this particular example can be changed, I don't know, nor do I feel qualified to hazard a guess. However, I stand by my original point: it makes no sense to refuse to consider how we might improve the system.


Modern humans have never refused to consider how to improve things.

But it is important to realise when we're licked. We can't solve the TSP in linear time, we can't travel faster than light and it looks like -- in both theory and practice -- markets are better at solving economic problems than planned alternatives.


The problem is not restricted to banking.

The problem applies to any two competitors X and Y, in any field.

If X does not plan for low-probability failure and Y does, then X will not have the additional inefficiency/overhead and so X will out-compete Y.

If the time frame for low-probability failure is long enough, and if being out-competed means the end of your business, then an efficient market means that risks which typically take longer than time T to manifest will not be handled, where T is the time for X to out-compete Y, given its advantage.


What's your point? Customers will choose the mix of cost and risk that makes them comfortable.


I guess that my point is that anyone who thinks that a market will give them long-term stable institutions/companies is wrong.

I think that conclusion is likely to be surprising/controversial to some/many people.


> I guess that my point is that anyone who thinks that a market will give them long-term stable institutions/companies is wrong.

Definitely. It's a complex, dynamic system that requires enormous amounts of failure, misattribution and foolish optimism to work.

The beautiful thing is that it turns these human inevitabilities from negatives into positives.


Once consumers/investors decide that the risks are sufficiently low over the time horizon they care about, they will not mitigate risks over longer time horizons.

This isn't just a failing of an efficient market - it's a "failing" of any system with a time horizon.

Politics has similar incentives (election in 2012, who cares about 2013?), as does software development (who cares about code maintenance after I leave), management, etc.


I agree with your main point, but I do think I should pick you up on the use of 'efficient', which I think is wrong in a technical way.

In an 'efficient' market all future risk is included in the analysis of current value, meaning that if it were an 'efficient' market this would actually not be a problem. However, it is just one of many* ways that markets are not in reality 'efficient' in the economics sense, as the assumptions required to prove them efficient just do not match up to the real world.

* http://www.amazon.com/Debunking-Economics-Revised-Expanded-D...


In an 'efficient' market all future risk is included in the analysis of current value meaning that if it was an 'efficient' market this would actually not be a problem.

The EMH claims that all future time discounted risk known to market participants is included in the analysis of current value.

If you want to show the markets are not efficient, go achieve excess risk-adjusted returns. The sole claim of the EMH is that you can't do that. The EMH doesn't claim that market participants will not take risks, nor does it claim that investors/customers will not apply time discounting to those risks.


You need to avoid lumping all economists into a single heading. It's easy to debunk abstract models by saying "they're not a model of the real world!"

Well, yes. The economists know that.


The book is worth a read. Alarmingly large numbers of economists base their work on these models and on theories built from them.

You might have noticed a little recent global credit crunch which was not predicted by most economists but was predicted and modeled by the author of the book I linked, who is an economist (so I don't lump them all together). But the overall state of economics as a discipline, in terms of understanding the overall economy, is so poor it should be embarrassing to them.

I am genuinely interested if someone has a critical analysis of the Debunking Economics book that points out how and where it is wrong, but when I last looked I couldn't find any serious attacks online (minor nitpicks only). I think it is largely being ignored by those who disagree, so I haven't found their counter-arguments.


Economists don't predict most events, because most events in economics are unpredictable. It's a study of complex, chaotic systems.

If you take this as your starting point for critique (the weathermen didn't predict Weather Event X!!!), you will always win the argument because you're beating up a strawman. No economist has ever seriously claimed specific predictive power.

All an economist can give you is generalised statements of causality, most of which will be unobservable. Steve Keen was not the first to point this out. Quite a few economists from various schools have found flaws in general equilibrium models of macroeconomic phenomena (i.e., using calculus to describe people, markets and countries).


The weathermen can give you probability ranges for various events and their models include the possibility of storms. Many widely used economic models do not have the capability to model crashes at all. Many models don't cover banks, debt or money at all.

I don't expect a model to tell me that the markets will crash tomorrow, but I would have expected widely used models to indicate that we were in a dangerous period in 2005-2007 and that the upwards path was impossible to sustain over a 10 year period.


Economists can give probabilities too; it depends on the type of model used.

Predicting a crash immediately before it happens is not so difficult. Lots of economists were clanging the alarm bells all through the mid-00s. Predicting exactly when and exactly what the trigger would be? Basically impossible. Economists don't do that (it's left to advisors, pundits and newsletter salesmen).


Agreed, exact timing is not identifiable, but the number of warnings before 2007 was really pretty low. Maybe there are some that can be added to this list, and it would be interesting to see if there were any with very different approaches:

http://www.debtdeflation.com/blogs/2009/07/15/no-one-saw-thi...

So there were a few, but they generally weren't in the mainstream of economics (from Krugman to the Chicago School), which generally did a very bad job. Roubini did call it, but none of the others on the list I linked to were people I had heard of before 2008 (though I haven't formally studied economics).

Many of the common models taught and used can never indicate crashes; this sort of thing should just be thrown out. Most of them also don't really include the financial industry (including debt).


She obviously fails to grasp what Taleb calls a Black Swan, or ignores his definition for some cheap publicity: "The attacks of 9/11 were not black swans, she said. The FBI knew that questionable people were taking flying lessons on large aircraft."

They are the perfect example of a Black Swan. To quote Taleb (The Black Swan, p. xxii): "... in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable", which is exactly what she is doing. And, one page later, on 9/11: "had the risk been easily conceivable on September 10, it would not have happened".

He is not advocating abandoning risk management; he is in favour of risk management that doesn't need us to predict the future, as it is hard to reliably estimate the likelihood of very unlikely events.

Concerning the second example of earthquake risk and nuclear power plants: that, again, is a post-hoc rationalization. Everybody knew that earthquakes were a risk factor for nuclear power plants and was planning accordingly. They were just not planning for a tsunami this size, as it was very, very unlikely. So including higher error margins for earthquakes next time is nice, but not enough. Taleb on this: http://www.valuewalk.com/2011/03/nassim-taleb-black-swans/

Consider the aviation industry: after an accident, they find the root cause and eliminate it. It is now something expected, and can be directly dealt with. But they also try to improve the system (e.g. via training) to be more robust against all the root causes they didn't anticipate.


> They are the perfect example of a Black Swan.

I disagree.

The black swan is something that is completely unforeseeable and for which there are no previous partial or complete examples, either of the final outcome or the contributing causes.

Which is her point: 9/11 was foreseeable from the information (the failure was in connecting it) and the fact that a previous, similar plot had been tried in France. The clues were there and there was a previous partial example.


Taleb's definition is subjective. From Wikipedia:

1. The event is a surprise (to the observer).

2. The event has a major effect.

3. After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals.

Thanksgiving is a black swan for the turkey but not for the butcher. Taleb is trying to solve the problem of how not to be a turkey, and prediction is insufficient for that problem, because errors of prediction will let the worst events through anyway. Rather one should evaluate how much worse things will be as an event grows. If problems grow faster than linearly, trouble will strike eventually.
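The rule of thumb above (if problems grow faster than linearly, trouble will strike eventually) can be sketched with two made-up loss functions; both are invented purely for illustration:

```python
# Compare a linear and a superlinear (convex) exposure to shocks of
# growing size; the exact functions are hypothetical examples.
def linear_loss(shock):
    return shock       # damage proportional to the shock

def convex_loss(shock):
    return shock ** 3  # damage grows much faster than the shock

for shock in (1, 2, 4, 8):
    print(f"shock={shock}  linear={linear_loss(shock)}  convex={convex_loss(shock)}")
```

Doubling the shock doubles the linear loss but multiplies the convex loss by eight, so under convex exposure the rare large shocks dominate total risk even when they are very infrequent, which is the turkey's problem in a nutshell.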


I thought I replied to this. I guess I didn't.

Taleb's model seems to miss the distinction between two properties of predictions: discrimination and calibration.

Discrimination is the correctness of the forecast of a single event. Did X happen?

Calibration is the closeness of fit between the predictions made and the distribution of outcomes. Given predictions X1, X2 ... Xn, how closely do the probabilities fit outcomes Y1, Y2 ... Yn?

Even if your calibration is very good, there are always outliers which will upset your model. You didn't discriminate them.
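A minimal sketch of the distinction, using invented forecasts and outcomes (`bucket_calibration` and `discrimination` are hypothetical helper names, not standard library functions):

```python
# Ten probabilistic forecasts of a binary event, and what actually happened.
forecasts = [0.1, 0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9]
outcomes  = [0,   0,   0,   0,   1,   1,   1,   1,   1,   0]

def bucket_calibration(forecasts, outcomes):
    """Mean absolute gap between each forecast value and the observed
    frequency among cases given that forecast (0 = perfectly calibrated)."""
    buckets = {}
    for f, y in zip(forecasts, outcomes):
        buckets.setdefault(f, []).append(y)
    gaps = [abs(f - sum(ys) / len(ys)) for f, ys in buckets.items()]
    return sum(gaps) / len(gaps)

def discrimination(forecasts, outcomes):
    """Mean forecast for events minus mean forecast for non-events
    (higher = forecasts separate events from non-events better)."""
    hits = [f for f, y in zip(forecasts, outcomes) if y == 1]
    misses = [f for f, y in zip(forecasts, outcomes) if y == 0]
    return sum(hits) / len(hits) - sum(misses) / len(misses)

print("calibration gap:", bucket_calibration(forecasts, outcomes))
print("discrimination:", discrimination(forecasts, outcomes))
```

Here the forecaster is well calibrated (0.9 forecasts come true about 80% of the time) yet still fails to discriminate two individual cases: one event got a 0.1 forecast and one non-event got a 0.9, the outliers that "upset your model".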


> The black swan is something that is completely unforeseeable and for which there are no previous partial or complete examples, either of the final outcome or the contributing causes.

Not according to Taleb, who should know as he developed and named the concept. According to him a black swan only has to be unpredictable to the observer.

But the point is that ignoring the possibility of unpredictable (to you) events is a fallacy. So a black swan event is any event an observer assumes is impossible, or so unlikely as to be ignorable, but then actually happens. Whether an event is a black swan, being an observational fallacy, is subjective.

So taking the 9/11 attacks, for some professionals in the intelligence agencies these were not black swan events because they knew they could happen, but for the population at large and to politicians in particular, they very much were black swan events.


> So a black swan event is any event an observer assumes is impossible, or so unlikely as to be ignorable, but then actually happens. Whether an event is a black swan, being an observational fallacy, is subjective.

I see.

I'm not really sure how that helps in any discussion of risk management at all, except to remind risk assessors that a) they should keep looking and b) they'll fail anyhow, just going on the numbers (the set of potential outcomes being unbounded, nobody will ever foresee all eventualities).


The point of drawing attention to black swans is to make it clear that you can't plan your way into a situation where you've avoided all risks. So, you're right. It doesn't help much past that.

This is obvious in many areas - nobody believes we can avoid all fires, for example (or we'd defund fire departments and cancel our fire insurance).

Yet in other areas there tends to be a view that we should be able to perfectly avoid risks.

This is dangerous, because a lot of black swan events that might be impossible to avoid can still be substantially reduced in magnitude if you apply the other main forms of risk management to a greater degree to supplement avoidance.

Even when we don't mean to, it is easy to get thrown into this pattern of reacting to a specific risk that is in our face right now rather than to address broader methods of mitigating classes of risks.

Consider if prior to 9/11, past experience with dozens of hijackings over decades had been used to consider how to reduce the potential impact of a hijacking, and there'd been locks on the cockpit doors.

It seems far more feasible that effort put into investigating risk mitigation, by considering broad responses to known risks (past hijackings, mentally unstable passengers, illness), could have stopped 9/11 this way, than that more effort put into risk avoidance, by pouring money into intelligence agencies, would have led to forever preventing someone from pulling off a 9/11-type plot.

This is why this idea is important. But you're right: It is just a reminder of what ought to be obvious, that we can't avoid all risks because we can't possibly predict them all.


> nobody will ever foresee all eventualities

Not quite. It means that just because you can't imagine how your nuclear reactor might go into catastrophic failure, you still need to have a bullet proof, well rehearsed, properly funded response plan in place.

It means that just because you don't think there can be a simultaneous collapse of all your major banks, you still need to have robust capitalisation requirements, diversification of your financial institutions' risk profiles, a plan on how to deal with the failure of institutions that are 'too big to fail', etc.

It means that heavily optimising for current stable conditions is a mistake, no matter how sure you are they will remain stable indefinitely.


But there's an infinite regress. How will your "bullet proof, well rehearsed, properly funded response plan" "go into catastrophic failure"?

What is the point where additional planning is counter-productive?


> The black swan is something that is completely unforeseeable and for which there are no previous partial or complete examples, either of the final outcome or the contributing causes.

There is no such thing as something completely, entirely 100% unforeseen or unforeseeable. Someone, somewhere in the world will be seeing it coming... just as some intelligence analysts saw 9/11 coming, some economists saw the financial crisis, etc. What we're considering is what the prevailing and strongly established consensus, well outside the fringe areas, says.


This is where we start to wander into problems with classical set logic.

No, I'm not joking. Fuzzy sets are pretty much required for any meaningful discussion of "failed", "foreseeable" etc, if they are to be at all useful concepts: http://chester.id.au/2012/04/09/review-drift-into-failure/

Reduce foreseeability and failure to a binary toggle and you destroy enormous amounts of information with high utility so that some syllogisms still work. Wasteful.

Taleb is a very intelligent man, but AFAICT he does often reinvent existing concepts with much cooler names. "Antifragile" sounds awesome. "Robust" sounds boring.

Take "black swan", for example. Given the technology of the day, black swans simply didn't exist. Iain Banks called these "Outside Context Problems", one might also call them "paradigm-busters".

Anyhow. I should have padded out my original definition with the usual legalese about "reasonably foreseeable".


"reasonably forseeable" is hard to define. Let us, for a moment, use being able to implement countermeasures without seeming like a crazy lunatic as a definition. As a politician in a pre-9/11 world, if you had done what would have been necessary to prevent a 9/11 (not go and arrest the very guys who did it, that would have required perfect foresight), i.e. add bulletproof doors to planes, implement the very invasive searches the TSA does etc., maybe ground commercial aviation until those measures would have been in place, it would be the end of your career. The remote possibility of a terrorist attack has only become a justification for about everything after 9/11. Also, 9/11 was just one of many terrorist threats at that time, it's just in hindsight that we consider this particular one inevitable. Edit: Spelling


Hijackings had happened many times. Including with weapons. There was plenty of knowledge that could have easily justified adding secure, lockable cockpit doors. And that would have been a good, low impact way of mitigating against known risks while also taking a whole host of unknown risks off the table or substantially reducing them, including reducing the chance of 9/11 having the impact it did or even being tried.

I think that is one of the changes that more focus on mitigating risk vs. avoiding it might have produced. E.g. when faced with a heightened risk of terrorist attacks, a rational response would be to not focus so much on identifying potential attackers and stopping a specific plot, but to spend more resources looking into low impact ways to mitigate the consequences of various broad modes of attack actually getting underway.

The specifics of 9/11 were probably near impossible to predict, but another eventual hijacking was a near certainty, and an eventual building collapse for whatever reason should have been considered a near certainty too, as many high-rises have failed over the years. The failure to reduce the potential impact of those known risks is the real failure of 9/11, not the failure to prevent the specific plot, as these were broad, known risks and the actual causes leading up to them would be largely irrelevant for a lot of mitigating actions.


> "reasonably forseeable" is hard to define.

It certainly is. The concept of negligence was reinvented or reintroduced into the Common Law starting in 1932 with Donoghue v Stevenson and the lawyers have been wrestling with it ever since. I hated Torts as a student, it's a damn fiddly area of law. Give me Trusts any day of the week.

Still: if you want to study how intelligent people have mapped out the concept of "reasonably foreseeable", then lawyers -- particularly Scots lawyers who also look to Roman law -- are the people to talk to for inspiration.


> Taleb is a very intelligent man, but AFAICT he does often reinvent existing concepts with much cooler names. "Antifragile" sounds awesome. "Robust" sounds boring.

Antifragile != robust: "The antifragile is beyond the resilient or robust. The resilient resists shocks and stays the same; the antifragile gets better and better." (http://www.randomhouse.com/book/176227/antifragile-things-th...)


I don't think this is a meaningful addition. Resilient systems are not static, they evolve; that's how they become resilient in the first place.


Relabelling is the marketing game; to get your businessy-idea-book in the best-sellers list you need everyone to replace "practice makes perfect" with "10000 hours".


Yep. And in fact marketers didn't invent relabelling (Talleyrand had a witty remark that one of the functions of the State is to rename odious institutions, and I would not be surprised if Confucius had this phenomenon in mind when he got grumpy about the Rectification of Names).


"some economists saw the financial crisis"

At least one Head of Risk at a large bank not only saw the financial crisis coming but was sacked for trying to point this out:

http://www.telegraph.co.uk/finance/4582535/Senior-HBOS-execu...


> The black swan is something that is completely unforeseeable and for which there are no previous partial or complete examples, either of the final outcome or the contributing causes.

As far as someone is concerned. You misunderstand what a Black Swan event is. A Black Swan is at the root an opacity problem, and not one of forecasting. 9/11 certainly wasn't a Black Swan event for the hijackers or the masterminds, but it was for everyone else.


In hindsight you can of course always identify indicators of the coming catastrophe. Even a storm needs some build-up. The problem is we cannot see the indicators in time.

Otherwise they would not be black swans; they would be more like the whale falling out of the sky in The Hitchhiker's Guide to the Galaxy, which formed spontaneously out of random particles.


The engineering approach to risk management the author proposes in place of the Black Swan excuse is something Taleb himself actually suggests in his new book: making systems antifragile.


We call this phenomenon "black elephants."

You have an elephant in the room. After it explodes, everyone will say it was a black swan.


Like the Chinese housing bubble?


I've said it before and I'll say it again. If you want to understand more about this (especially economically), Daniel Kahneman's Thinking, Fast and Slow spells it out really well:

http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/...


One major reason why I disagree with the article: social phenomena are impossible to fully compute, as human beings do not just obey comprehensible laws of physics; there is a whole different dynamic of behavioral and social factors involved that makes statistical modelling a great deal more complex. Unlike the author, I also do not think that this problem can just be resolved with technical means; there will always be uncertainty about human behavior, and thus Black Swans.


Ya, this part irked me the most:

"Traditional financial analysis, she said, is based on evaluating existing statistical data about past events. In her view, analysts can better anticipate market failures – like the financial crisis that began in 2008 – by recognizing precursors and warning signs, and factoring them into a systemic probabilistic analysis."

So, let's say you do provide a systemic probabilistic analysis of the impending education crisis the US is about to hit. Don't you think a government would be gnawing their hands off to get that type of statistical analysis? Personally, I don't think it systematically exists.


http://fooledbyrandomness.com/ForeignAffairs.pdf [pdf warning]

This is a really good geopolitical paper I recently read about black swan stuff. The general gist, to my understanding, is that artificially supporting a regime makes it weaker.


Taleb's emphasis in his latest book is on avoiding the naive attempt to predict the unpredictable. Instead, focus on identifying and making things "Antifragile" so that they are resistant to (or even benefit from) events that would otherwise be destructive. With that in mind, risk management becomes much less about guessing about the future. Instead, the focus is to identify the fragile (that which is highly susceptible to disruption) and take steps to make it antifragile.

http://www.amazon.com/Antifragile-Things-That-Gain-Disorder/...

It is an enjoyable read so far...



