Investment Strategy: Could you use Taleb’s Anti-Fragility in a Portfolio?

Nassim Taleb of “Black Swan” fame is now talking about “antifragility.”  According to the Buttonwood column, Taleb thinks that:

some systems actually benefit from shocks; a kind of opposite of the black swan idea for which he is best known.  Taleb’s argument is that nature is brilliant at design. “Evolution doesn’t forecast” he says, unlike economists and finance professors.  It is shocks, changes in climate or the availability of food, that cause new species to emerge.  Nature also builds in a fair degree of redundancy into the system to guard against shocks – we have two lungs and two kidneys for example. Nature doesn’t try to optimise.

His analysis of the crisis boiled down to three points. The first is that there was an increase in hidden risk and exposure to low probability events. Too much debt was taken on and debt forces you to be very accurate in your forecasts since a small mistake can ruin you. The second point was that there were asymmetric incentives, allowing traders to bet against low probability events with other people’s money; that’s why there are lots of rich traders and poor investors. The third point was that there was misunderstanding of tail risk, fostered by financial theories such as value at risk, which he described as the biggest charlatanism in intellectual history. Having these models was worse than having no model at all, “like a pilot flying over the Himalayas but having a map of Saudi Arabia.”

This is familiar to any engineer who’s torn his hair out trying to make things work reliably in a stochastic (that’s a fancy word for “fucked up”) world.  Nature’s messy: there is no way to actually predict how things are going to turn out.  You can try (after all, that’s pretty much why we invented statistics), but at the end of the day, you don’t really know until it’s happened.  You can make the systems you design and build (everything from a light switch to an ICBM) more reliable, but a perverse feature of that reliability is that failures have much larger consequences.

You can actually design anything to be 99.999% reliable, but “it’s that last 0.001% that’ll kill ya!”  You could try “pushing the 9s” (i.e., make things 99.9999% reliable, and so on), but that gets ludicrously expensive pretty fast.  In trying to make things more reliable, you also end up at the mercy of predicting how nature will try to kill you, and nature always has the upper hand.  Murphy’s Law says that anything that can go wrong will go wrong, at the worst possible moment.  As any engineer can tell you, the Engineer’s Corollary to Murphy’s Law is: Murphy was an optimist.
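The arithmetic behind “pushing the 9s” is easy to sketch. Here’s a minimal illustration (the component count and per-component reliability are my own assumptions for the example, not figures from the post): even five-nines components, chained in series, erode fast.

```python
# Sketch: why "pushing the 9s" gets expensive fast. Assumes independent
# component failures in series, i.e. any one component failing brings the
# whole system down.

def series_reliability(p_component: float, n_components: int) -> float:
    """Overall reliability when all n components must work."""
    return p_component ** n_components

# A system of 1,000 components, each "five nines" (99.999%) reliable,
# is only about 99.0% reliable overall -- you lose a whole nine.
r = series_reliability(0.99999, 1000)
```

To get the *system* back to five nines, each component would need roughly eight nines, which is where the ludicrous expense comes in.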

Therefore, engineers are faced with two somewhat conflicting choices:

  1. Become a complete risk-averse freak, or
  2. Try designing robust systems

Most engineers make pretty successful careers choosing the former, but the more interesting ones always choose the latter.  This is essentially what Taleb is on about.  One branch of engineering (most notably led by the work of Genichi Taguchi and added to by work on Six Sigma) suggests that you can design more reliable systems by:

  1. Making individual system components more reliable using Six Sigma & Taguchi techniques, and
  2. Making the overall system better able to absorb individual component failures without a catastrophic system failure
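The second point is exactly nature’s “two kidneys” trick mentioned above: a redundant pair fails only if both copies fail. A minimal sketch (the 99% component figure is an illustrative assumption):

```python
# Sketch of redundancy absorbing individual component failures. Assumes
# independent failures: the system survives unless every copy fails.

def redundant_reliability(p_component: float, copies: int) -> float:
    """Reliability of a block with `copies` redundant components."""
    return 1.0 - (1.0 - p_component) ** copies

single = redundant_reliability(0.99, 1)  # 99%: one failure is catastrophic
pair = redundant_reliability(0.99, 2)    # 99.99%: one failure is absorbed
```

Doubling a 99% component buys two extra nines, which is usually far cheaper than making one component 99.99% reliable on its own.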

This has been done in a number of disciplines, but the best example I can give is from my former profession (ITAR watchers, don’t worry, this stuff is in the public domain): the US Space Shuttle could be (and, in the case of STS-107 Columbia, was) brought down by damage to its thermal protection system.  The competing Soviet/Russian Soyuz launch system family is clunky and rugged to the point of being crude, but it also happens to be the most-flown launch system ever.  Part of its success is due to the fact that it absorbs individual system failures so well.

That brings us round to the question posed at the beginning.  Can we use all this in investment management?  The short answer is: probably.  We could institute a process like FMEA (Failure Mode and Effects Analysis) to examine every new position in the portfolio, with regular FMEA updates of the portfolio itself.  But that process tends to be slow and methodical and may actually slow down investment decision-making.  The quicker way is to reduce the effect of tail-probability events on the portfolio, and the best way to do that is to reduce leverage.
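To make the FMEA idea concrete, here is a hedged sketch of what an FMEA-style screen on a new position might look like. The failure modes, field names, and 1–10 scoring scales are illustrative assumptions on my part; classic FMEA ranks failure modes by a Risk Priority Number (severity × occurrence × detection).

```python
# Illustrative FMEA-style screen for a portfolio position. The specific
# failure modes and scores below are invented for the example.

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1-10: damage to the portfolio if it happens
    occurrence: int  # 1-10: how likely it is to happen
    detection: int   # 1-10: how hard it is to spot before it bites

    @property
    def rpn(self) -> int:
        """Risk Priority Number: the classic FMEA ranking metric."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Funding dries up, forced deleveraging", 9, 3, 7),
    FailureMode("Counterparty default", 8, 2, 5),
    FailureMode("Liquidity gap on exit", 6, 4, 6),
]

# Review the highest-RPN modes first.
worst_first = sorted(modes, key=lambda m: m.rpn, reverse=True)
```

The point of the exercise isn’t the precise numbers; it’s forcing the question “how does this position fail, and would we notice in time?” before the position goes on.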

Leverage can be good, especially if you’re trying to grow a “real economy” business into new areas.  It would also be stupid to ignore leverage in an investment when the cost of leverage is very low.  But leverage makes the portfolio short volatility: it makes the portfolio dependent on things remaining stable.  That in turn implies that whenever there are stresses in the financial system, your portfolio leverage is going to kill you.  A converse of this thinking is that if you think a particular investment return is unattractively low without leverage, then you’re probably better off without the investment.  Not easy to live by when you live by 2 & 20 (2% of AUM and 20% of returns).
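The “leverage makes you short volatility” point is just arithmetic on equity. A toy illustration (the leverage ratios and drawdown sizes are my own example numbers): the same asset shock is multiplied by the leverage ratio before it hits your equity.

```python
# Toy model: borrow (L - 1)x your equity, invest L x your equity.
# A drawdown on the assets hits equity L times as hard.

def equity_return(asset_return: float, leverage: float,
                  borrow_cost: float = 0.0) -> float:
    """Return on equity for a portfolio levered `leverage` times."""
    return leverage * asset_return - (leverage - 1.0) * borrow_cost

unlevered = equity_return(-0.10, 1.0)   # 10% asset drop -> -10% on equity
levered_4x = equity_return(-0.10, 4.0)  # same drop at 4x -> -40% on equity
ruined = equity_return(-0.25, 4.0)      # a 25% drop at 4x wipes you out
```

That last line is Taleb’s point about debt forcing accuracy: at 4x, a shock that an unlevered investor could sit through is a total loss.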

3 thoughts on “Investment Strategy: Could you use Taleb’s Anti-Fragility in a Portfolio?”

  1. Shocks come along so infrequently that it would be imprudent to engineer the financial system to benefit from them. There are only a few organisms that evolved this way and likewise we only need a few buyers of long vol. Empirica Capital happens to be one.


    1. The financial system — any financial system — has already “engineered” itself that way. Probably the best way to explain it is with an engineering analogy:

      Pilots always say “ground has kill probability of one” — in other words, if you hit the ground, the ground wins. But ground — the earth — hasn’t “engineered” itself that way. It just is. As the designer of a military aircraft, all I care about is avoiding that kill probability, especially if I can get the other aircraft to take that fall.

      Similarly, as a portfolio manager, all I care about is avoiding the shocks the rest of the system is prone to — and it is so prone to! That part can be engineered.

      Of course, in the unlikely event I was a fiscal/financial policymaker, I would so love to engineer the financial system that way too. I’m pretty sure you can’t engineer all the risk of shocks away, but I’m also pretty sure you could dial down the effect of those shocks.

      Does that make sense?

