some systems actually benefit from shocks, a kind of opposite of the black swan idea for which he is best known. Taleb’s argument is that nature is brilliant at design. “Evolution doesn’t forecast,” he says, unlike economists and finance professors. It is shocks, changes in climate or in the availability of food, that cause new species to emerge. Nature also builds a fair degree of redundancy into the system to guard against shocks – we have two lungs and two kidneys, for example. Nature doesn’t try to optimise.
His analysis of the crisis boiled down to three points. The first is that there was an increase in hidden risk and exposure to low-probability events. Too much debt was taken on, and debt forces you to be very accurate in your forecasts, since a small mistake can ruin you. The second point was that there were asymmetric incentives, allowing traders to bet against low-probability events with other people’s money; that’s why there are lots of rich traders and poor investors. The third point was that there was a misunderstanding of tail risk, fostered by financial theories such as value at risk, which he described as “the biggest charlatanism in intellectual history.” Having these models was worse than having no model at all – “like a pilot flying over the Himalayas but having a map of Saudi Arabia.”
This is familiar to any engineer who’s torn his hair out trying to make things work reliably in a stochastic (that’s a fancy word for “fucked up”) world. Nature’s messy: there is no way to actually predict how things are going to turn out. You can try (after all, that’s pretty much why we invented statistics), but at the end of the day, you don’t really know until it’s happened. You can make the systems you design and build (everything from a light switch to an ICBM) more reliable, but a perverse feature of that reliability is that the remaining failures have much larger consequences.
You can actually design almost anything to be 99.999% reliable, but “it’s that last 0.001% that’ll kill ya!” You could try “pushing the 9’s” (i.e., making things 99.9999% reliable, and so on), but that gets ludicrously expensive pretty fast. In trying to make things more reliable, you also end up at the mercy of predicting how nature will try to kill you, and nature always has the upper hand. Murphy’s Law says that anything that can go wrong will do so, at the worst possible moment. As any engineer can tell you, the Engineer’s Corollary to Murphy’s Law says: Murphy was an optimist.
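To see why “pushing the 9’s” gets expensive fast, it helps to translate each extra nine into what it actually buys you. Here is a minimal sketch (the mapping from availability to downtime is standard arithmetic; the idea of costing it this way is mine, not from the text):

```python
# Downtime per year implied by each additional "9" of availability.
# Illustrative arithmetic only -- real reliability engineering works with
# failure distributions, not a single availability number.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Expected minutes of downtime per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines in range(2, 7):
    availability = 1.0 - 10.0 ** (-nines)
    print(f"{availability:.6%} available -> "
          f"{downtime_minutes(availability):9.2f} min/year down")
```

Each nine cuts downtime by a factor of ten, but the engineering effort to eliminate the next, rarer class of failure typically grows much faster than tenfold – which is the cost curve the paragraph above is describing.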
Therefore, engineers are faced with two somewhat conflicting choices:
- Become a complete risk-averse freak, or
- Try designing robust systems
Most engineers make pretty successful careers choosing the former, but the more interesting ones always choose the latter. This is essentially what Taleb is on about. One branch of engineering (most notably led by the work of Genichi Taguchi and added to by work on Six Sigma) suggests that you can design more reliable systems by:
- Making individual system components more reliable using Six Sigma & Taguchi techniques, and
- Making the overall system better able to absorb individual component failures without a catastrophic system failure
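The second point is just the textbook arithmetic of series versus parallel reliability: a chain of single points of failure gets weaker as it grows, while redundant components cover for each other. A small sketch, with made-up component reliabilities:

```python
# Series vs. parallel (redundant) reliability -- why a system with
# redundancy can absorb individual component failures.
# The 0.95 component reliability is a made-up illustrative number.

def series(reliabilities):
    """System works only if EVERY component works (chain of single points of failure)."""
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def parallel(reliabilities):
    """System works if AT LEAST ONE redundant component works."""
    fail = 1.0
    for r in reliabilities:
        fail *= (1.0 - r)
    return 1.0 - fail

r = 0.95  # each component is 95% reliable
print(f"10 components in series: {series([r] * 10):.4f}")   # ~0.5987
print(f"2 redundant components:  {parallel([r] * 2):.4f}")  # 0.9975
```

Ten decent components chained in series give you a coin-flip of a system, while two of the same components in parallel are better than either alone – nature’s two-kidneys trick from the opening paragraph.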
This has been done in a number of disciplines, but the best example I can give is from my former profession (ITAR watchers, don’t worry, this stuff is in the public domain): the US Space Shuttle can be (and, in the case of STS Columbia, has been) brought down by individual failures in its heat-absorbent tiles. The competing Soviet/Russian Soyuz launch system family is clunky and rugged to the point of being crude, but it also happens to be the most successful launch system ever. Part of its success is due to the fact that it can absorb individual component failures so well.
That brings us round to the question posed at the beginning. Can we use all this in investment management? The short answer is: probably. We can institute processes like FMEA (Failure Mode and Effects Analysis) to examine every new position in the portfolio, with regular FMEA updates of the portfolio itself. But that process tends to be slow and methodical and may actually slow down investment decision-making. The quicker route, therefore, is to reduce the effect of tail-probability events on the portfolio. The best way to do that is to reduce leverage.
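For the curious, an FMEA screen of a position is not exotic: you list failure modes, score each for severity, likelihood, and detectability, and rank by risk priority number (RPN). A toy sketch – the failure modes and all the scores below are invented for illustration, not a real methodology for portfolios:

```python
# A toy FMEA-style screen for a new portfolio position. Each failure mode
# gets 1-10 scores for severity, occurrence, and (poor) detectability;
# the classic risk priority number is their product. All entries and
# scores here are hypothetical.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("Counterparty default",        9, 3, 6),
    ("Liquidity dries up on exit",  7, 4, 8),
    ("Margin call on leverage",     8, 5, 4),
    ("Model mis-prices tail risk",  9, 4, 9),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Classic FMEA risk priority number: higher means examine it first."""
    return severity * occurrence * detection

ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

Note what floats to the top in this made-up example: the hard-to-detect tail-risk failure mode, which is exactly the category Taleb says the industry misunderstands. The slowness complaint above is real, though – this is committee work, done position by position.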
Leverage can be good, especially if you’re trying to grow a “real economy” business into new areas. It would also be stupid to ignore leverage in an investment when the cost of leverage is very low. But leverage makes the portfolio short volatility: it makes the portfolio dependent on things remaining stable. That in turn means that whenever there are stresses in the financial system, your portfolio leverage is going to kill you. A corollary of this thinking is that if a particular investment’s return looks unattractively low without leverage, you’re probably better off without the investment. Not easy to live by when you live by 2 & 20 (2% of AUM and 20% of returns).
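The “short volatility” point is just one line of arithmetic: equity return equals leverage times asset return, floored at a total wipeout. A sketch with a hypothetical one-period shock (the 20% drop is an assumed number, not from the text):

```python
# How leverage magnifies a tail event: equity remaining after a one-period
# shock to the underlying assets. Pure arithmetic, no market model -- the
# shock size is hypothetical.

def equity_after_shock(leverage: float, asset_return: float) -> float:
    """Equity multiple left after a shock, for a portfolio levered L:1.
    Equity return = leverage * asset return, floored at -100% (wiped out)."""
    return max(0.0, 1.0 + leverage * asset_return)

shock = -0.20  # a hypothetical 20% drop in the underlying assets
for lev in (1, 2, 3, 5):
    print(f"{lev}x leverage: equity multiple after {shock:.0%} shock = "
          f"{equity_after_shock(lev, shock):.2f}")
```

An unlevered portfolio takes a bruise; at 5x leverage the same shock is terminal. That asymmetry – small carry in calm markets, ruin in a tail event – is what “short volatility” means here.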