# All for one and none for all! Diversification, regulation and the tragedy of the commons.

Some proverbs come in contradictory pairs, for instance “Too many cooks spoil the broth” and “Many hands make light work”. I’d like to present an example that I feel illustrates two of these simultaneously. Everyone knows “Don’t put all your eggs in one basket”, including banks, which diversify by holding many different investments of different types. But while it may be in banks’ best interests to lower their levels of risk through diversification, it may plausibly raise the risk of a system-wide failure: if all banks follow that same advice, the rest of us may be “In for a penny, in for a pound”.

This is the argument given by Beale et al. in their paper Individual versus systemic risk and the Regulator’s Dilemma. Here’s a simple example that illustrates the main idea, adapted from a related comment in the earlier paper Systemic risk: the dynamics of model banking systems.

Some people are playing a dice game. Each of them rolls a single fair die once and receives £1 for rolling a 1, £2 for a 2, and so on, up to £6 for a 6. However, afterwards they each have to pay £1.50 for the privilege of playing this rewarding game. Each of them starts out with no money, and so if they roll a 1, they go bankrupt.

If you play this game, there’s a $\frac{1}{6}$ chance that you’d go bankrupt on your roll. Let’s say ten people play the game: the chance of all of them going bankrupt on their single throw is $(1/6)^{10}$, which is about one in sixty million.
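These numbers are easy to verify; here is a minimal Python check using exact fractions:

```python
from fractions import Fraction

# Each player rolls one fair die, pays £1.50, and goes bankrupt on a 1
# (receiving £1, which is less than the £1.50 fee).
p_bankrupt = Fraction(1, 6)    # chance a single player fails
p_all_fail = p_bankrupt ** 10  # chance all ten fail independently

print(p_bankrupt)   # 1/6
print(p_all_fail)   # 1/60466176 -- about one in sixty million
```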

Instead of this, we could let the players do something more innovative: each of the ten could buy a $\frac{1}{10}^{\text{th}}$ stake in each of the dice. In essence, this means each player still pays £1.50 to play, but receives one tenth of the sum of the numbers showing on all ten dice. On average, they would still expect to make the same amount as in the straightforward version: the expected return is a gain of £3.5-£1.5=£2 per roll in each case. However, each player’s probability that they go bankrupt is now about one in sixty thousand: this is the chance that the total number of spots showing on the ten dice is under 15. That’s ten thousand times less likely than in the first scenario, so any player would rather choose this less risky, diversified option.
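The one-in-sixty-thousand figure can be computed exactly by convolving the distribution of a single die with itself ten times; a short sketch:

```python
from fractions import Fraction

# Build the exact distribution of the sum of ten fair dice by repeated convolution.
dist = {0: Fraction(1)}
for _ in range(10):
    new = {}
    for total, prob in dist.items():
        for face in range(1, 7):
            new[total + face] = new.get(total + face, Fraction(0)) + prob * Fraction(1, 6)
    dist = new

# Pooled players go bankrupt together exactly when the sum is under 15
# (each receives one tenth of the sum but has paid £1.50).
p_pool_fail = sum(prob for total, prob in dist.items() if total < 15)
print(p_pool_fail)             # 1001/60466176
print(round(1 / p_pool_fail))  # ≈ 60406 -- about one in sixty thousand
```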

If, for example, the ten dice total 28, then each player receives £2.80-£1.50=£1.30.

What about the chance that they all go bankrupt? If one player goes bankrupt, the total number of spots showing is under 15; but that total is the same for everyone, so if one goes bankrupt, they all do at the same time. The odds are therefore the same: one in sixty thousand. Under the first system, this outcome was a thousand times less likely, with odds of one in sixty million.

This scenario matters when our ‘players’ are banks or financial institutions, the dice are various assets, and bankruptcy (or near bankruptcy) adversely affects the whole of society. There’s nothing particularly special about the numbers, probabilities or distribution I gave here: the key feature is how much ‘correlation’ there is between each bank’s or player’s finances. In the first case, the fortunes of one bank had no relation to those of the others: we say there was no correlation, or that they were independent. In the second case, there was full correlation: if one bank collapsed then all of them did. It’s in this second situation where society is “in for a penny, in for a pound”, despite the banks spreading their eggs into many baskets.

## The tragedy of the risky commons

This is essentially the tragedy of the commons (or the prisoner’s dilemma), under the assumption that all banks failing simultaneously is an unacceptably worse outcome for society than just one or several failing. It would mean that each bank acting in its own best self-interest leads to a poor outcome for society as a whole (possibly including the banks themselves). If so, then perhaps regulators might have to ensure that banks have sufficiently dissimilar investment portfolios.

However, all this depends on how bad we judge multiple failures to be, compared to a single failure. Losing one bank already creates a lot of turmoil (like Lehman Brothers in 2008), and itself spreads instability through the financial system, like a virus amongst an animal population (although the virus spreads faster in a genetically uniform population). The Systemic risk paper makes this analogy more rigorous by modelling the interlinked financial institutions using the mathematics of disease spread from mathematical biology.

The paper’s title invokes the Regulator’s Dilemma because, to decide whether to act, the regulator must first answer questions along the lines of: how much worse than losing one bank is losing two or three or all banks? If we decide that each additional bank failure ‘only’ doubles the cost to society (so losing all ten costs about five hundred times as much as losing one), then in my completely unrealistic and trivial toy example, it’s still better to give the banks their independence. If the figure is more like ten times worse per additional bank (so losing all ten costs society a billion times as much as losing one), then the regulator should act. (Please don’t use these numbers in the real world!)

I gave only two extreme playing options, whereas a middling strategy is almost certainly more sensible. In reality, banks are unlikely to have exactly the same investments. But the example contains at least a grain of truth in certain circumstances. If you prefer, you could instead picture 100 investors over one year, either each investing in a different stock from the FTSE 100 index (not realistic), or everyone buying the whole index (not so far-fetched). One difference in this situation is that while the dice were independent, the stocks are not: oil shares in BP and Shell might go up or down in value together, whereas two dice rolls are completely unrelated. This makes the risk of everyone failing together higher for the index than it would be for 100 genuinely unrelated stocks.

A related situation is companies taking out insurance against an employee syndicate winning the lottery and all resigning simultaneously (essentially the insurance is equivalent to just picking the same lottery numbers as the syndicate!). Though, as far as I know, no-one insures against single staff members leaving due to lottery wins.

To end with a paraphrase of Oscar Wilde’s Lady Bracknell in The Importance of Being Earnest: to lose one bank may be regarded as a misfortune; to lose all of them looks like carelessness.

## Mathematical Postscript

The cost model I chose was different to that in the paper: they took the cost of $k$ banks failing as $k^s$, where $s$ is a parameter to be chosen. This covers a wide range of behaviour, from linear ($s=1$), where each bank failure adds an inconvenience independent of the others, to $s \gg 1$, where the cost of losing your final bank is steep compared to that of losing your second-to-last.

For mathematical convenience, in the same notation, I took the cost of $k$ banks failing as $s^k$. I’d say the former is the more useful of the two.
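As a quick illustration of how differently the two models behave (taking $s=2$ purely for concreteness), we can tabulate the cost of $k$ simultaneous failures under each:

```python
# Compare the paper's cost model (k ** s) with this post's (s ** k)
# as the number of simultaneous failures k grows, for s = 2.
s = 2
for k in range(1, 11):
    print(k, k ** s, s ** k)
```

The polynomial model grows gently, while the exponential model explodes: by $k=10$ the costs are 100 versus 1024, and the gap widens rapidly for larger $s$.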

The expected total cost in the first situation is $C(s)=\sum_{k=0}^{10} s^k P(X=k)$, where $P(X=k)$ is the probability that exactly $k$ players or banks fail. (Starting the sum at $k=0$ adds only a harmless constant, and makes $C(s)$ exactly the probability generating function of the distribution, which is what makes this choice convenient.) In the dice example the number of failures is binomially distributed, so we simply get $C(s)=(ps+q)^{10}$, where $p = \frac{1}{6}$ is the probability of failure and $q = 1-p= \frac{5}{6}$.
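We can sanity-check the closed form against the explicit sum; a small sketch, with `C` matching the notation above:

```python
from fractions import Fraction
from math import comb

p, q = Fraction(1, 6), Fraction(5, 6)

def C(s):
    """Expected cost: the PGF of Binomial(10, 1/6), evaluated at s."""
    return (p * s + q) ** 10

def C_sum(s):
    """The same quantity as an explicit sum over the binomial distribution."""
    return sum(Fraction(s) ** k * comb(10, k) * p ** k * q ** (10 - k)
               for k in range(11))

assert C(2) == C_sum(2)  # closed form agrees with the sum
print(float(C(2)))       # (7/6)^10 ≈ 4.67
```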

In this simple case, we can easily find the threshold value of $s$ on which the regulator’s decision rests. The expected cost of the diversified situation is $D(s)=\frac{s^{10}}{60,406}$: all ten banks fail together with probability about $\frac{1}{60,406}$, at cost $s^{10}$, and otherwise the cost is negligible. Regulation is needed here if $C(s) < D(s)$.

We can rearrange this inequality to get $(p+\frac{q}{s})^{10} < \frac{1}{60,406}$, or approximately $p+\frac{q}{s} < \frac{1}{3}$, since $60{,}406^{1/10} \approx 3$. Multiplying through by 6 gives $1 + \frac{5}{s} < 2$, so $\frac{5}{s} < 1$, or $s>5$. In the post above, I described the cases $s=2$ and $s=10$, sitting on either side of this threshold.
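Putting the pieces together, a short script using the same toy numbers shows the regulator's decision flipping between $s=5$ and $s=6$, consistent with the approximate threshold $s>5$:

```python
from fractions import Fraction

p, q = Fraction(1, 6), Fraction(5, 6)
N = Fraction(60466176, 1001)  # exact odds against the pooled banks all failing

def C(s):  # expected cost with independent banks (binomial PGF)
    return (p * s + q) ** 10

def D(s):  # expected cost with fully pooled banks: s^10 with probability 1/N
    return Fraction(s) ** 10 / N

# Regulation pays exactly when C(s) < D(s).
for s in (2, 5, 6, 10):
    print(s, float(C(s)), float(D(s)), C(s) < D(s))
```

At $s=2$ and $s=5$ independence is cheaper for society; at $s=6$ and $s=10$ the diversified system's catastrophic tail dominates and the regulator should intervene.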