
 Author: Professor Riccardo Rebonato

Published: 17 December 2021

In the wake of the 2008-2009 financial crisis, financial institutions in general, and banks in particular, have faced heightened regulatory scrutiny, a more muscular and intrusive style of supervision, and substantially more onerous capital and liquidity requirements.

Some institutional changes, such as the enhanced role given to Central Clearing Counterparties, have revolutionized the environment in which important financial contracts are executed. Whilst this is not the place to discuss whether these measures have been implemented in the most effective way, there is broad agreement that, as a result of these changes, the financial world is less prone to a repeat of the sort of cataclysmic events that characterized the months after Lehman’s demise.


Pragmatically useful as these reforms have been, one cannot help feeling that a great regulatory opportunity has been missed. Arguably, the once-in-a-century nature of the 2008-2009 events should have called for a root-and-branch rethinking of how financial regulation is conceived in the Western economies. Understandably, however, the urgencies of the moment also called for immediate action. The result has been a patchwork of reforms applied to a regulatory substratum ill-suited to receive them.

As memories fade or are selectively retained, the impetus for a more fundamental change to the philosophy of financial risk management has waned, and we are already observing political and corporate pressure to push back on many aspects of the newly introduced regulation. What we are left with are therefore pieces of financial regulation that were “written in the trenches”, hastily grafted onto an alien regulatory body, and that, as a consequence, are prone to be attacked and severed.

I intend to argue that, unfortunately, the underlying premises that underpin financial regulation have not changed – and that, just as unfortunately, they are as faulty now as when first formulated. Their failure in 2008 is a matter of empirical observation. However, it stems from the selective application of parts of economic theory to regulation, with total disregard of other, equally important, insights from economics. I therefore believe that the pillars on which the new rules have been built are not just empirically shaky: they are also theoretically dubious.

What are the failures I am referring to? One symptom of these failures is, for instance, the continuing recognition of internal risk models as the primary tool for the determination of regulatory capital. The underlying causes run deeper: they are to be found in the joint assumptions i) that the management of risk in financial institutions can be considered separately from the ‘payoffs’ of the actors engaged in the risk management actions; and ii) that market professionals will always be better than any regulator at modelling the risks they face. According to this naïve view, there is an objectively ‘correct’ way to manage risk, and, given the technical superiority of practitioners, the only task of financial regulation is to ensure, via the judicious dispensation of capital carrots and sticks, that these optimal-for-everyone risk management actions are properly carried out.

In practice, the project has failed spectacularly (with the internal-model recognition framework becoming a de facto byword for capital arbitrage). But it is not just a matter of an empirical failure. One simple observation should have been ringing alarm bells in the many post-mortems conducted after the defaults of 2008-2009: dishonesty was not necessary for the disastrous outcomes that unfolded; rule-abiding cunning was more than enough (and, indeed, exceedingly few senior executives were prosecuted, let alone ended up in jail – there was no sinister conspiracy at play here: quite simply, the rules had been followed, alas all too well). This is a clear prima facie indication, in my view, that the project was doomed in principle, and this is what I intend to show.

The root of the problem is that the desirability of a risk management action – an action, that is, aimed at reducing, at a cost, the probability of an unfavourable outcome – cannot be analyzed in isolation from the broader payoffs of the actor. A course of risk management action that is perfectly rational and justifiable – indeed, optimal – from the point of view of the steward of the interests of the shareholders of a bank can be very detrimental to the interests of the taxpayers of the state where the same bank operates. Calculating the risk better, or to more significant figures, does not change the fact that CEOs and shareholders on one side and taxpayers on the other face ‘convex payoffs’, but with opposite convexity. Limited liability is at the heart of the positive convexity enjoyed by CEOs and shareholders. The fact that taxpayers have to bear the bail-out costs when a bank’s risks turn sour explains the negative convexity of the taxpayers’ payoffs. Simplifying greatly, but not to the point of rendering the analogy unusable, CEOs and shareholders are long a call, and taxpayers are short a put. See Fig 1. Delegating the task of protecting the welfare of the short-put holders to the holders of the long call does not make any sense, no matter how clever and sophisticated the long-call holders are. Arguably, if they are too clever and sophisticated in the narrow optimization of their own welfare, this could well be even more, not less, detrimental for the taxpayers.

Fig 1: A stylized representation of the payoffs accruing to taxpayers (orange line labelled “Payoff_put”) and to the CEOs/shareholders (blue line labelled “Payoff_call”) as a function of the net asset value of a systemically important financial institution.
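To make the stylized payoffs of Fig 1 concrete, here is a minimal sketch (in Python) of the two profiles. It is purely illustrative: the ‘strike’ level K and the numbers below are of my own choosing and are not calibrated to any real institution.

```python
# Purely illustrative sketch of the stylized payoffs in Fig 1.
# K is a notional "strike": the level of net assets below which losses
# stop falling on shareholders and start falling on someone else.

K = 100.0  # illustrative threshold for net asset value


def payoff_call(net_assets: float) -> float:
    """CEOs/shareholders: limited liability caps their losses, so the payoff is call-like."""
    return max(net_assets - K, 0.0)


def payoff_put(net_assets: float) -> float:
    """Taxpayers: they bear the bail-out cost of any shortfall, so the payoff is short-put-like."""
    return -max(K - net_assets, 0.0)


for a in (60.0, 80.0, 100.0, 120.0, 140.0):
    print(f"net assets {a:6.1f}   call holder {payoff_call(a):6.1f}   taxpayer {payoff_put(a):6.1f}")
```

Above the strike, all the upside accrues to the call holders; below it, all the downside falls on the taxpayers: this is the opposite convexity described above.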

Simply put, this decisional asymmetry comes from the very nature of convex payoffs. The expected payoff of a 50-50 gamble (what choice theorists call a ‘fair lottery’) is above the initial level of wealth for the holder of a call (the traders, CEOs and shareholders), and below it for the short-put holders (the taxpayers). As a result, the optimal course of action for the rational call holders implies taking far more risk than is optimal for the taxpayers.[1] Yet the ideology that practitioners ‘knew best’ dictated that regulators should, by and large, only opine on the integrity of the process by means of which the call holders would reach their decisions. Regulators effectively stood out of the way. The more intrusive regulatory stance of the post-crisis years has made regulators more demanding about the quality of the process, and has raised the bar for some measurable hurdles (such as capital or liquidity), but it has not changed the underlying philosophy. The outcomes of the 2008-2009 events should therefore not be seen through the lens of a morality play: what happened is simply what had to be expected from the actions of rational and clever utility optimizers.
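A back-of-the-envelope calculation makes the asymmetry concrete. Suppose, purely for illustration, that net assets start exactly at the strike of the stylized payoffs above, and that the institution takes a fair 50-50 gamble of size σ (it wins σ or loses σ with equal probability). Then

\[
\mathbb{E}\!\left[\text{Payoff}_{\text{call}}\right] = \tfrac{1}{2}\,\sigma + \tfrac{1}{2}\cdot 0 = +\tfrac{\sigma}{2},
\qquad
\mathbb{E}\!\left[\text{Payoff}_{\text{put}}\right] = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\,(-\sigma) = -\tfrac{\sigma}{2}.
\]

The larger the gamble, the better it looks to the call holders and the worse it looks to the taxpayers; and, by Jensen’s inequality, the same ordering holds for any starting level of net assets, because the call payoff is convex and the short-put payoff is concave.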

These considerations are both obvious and far-reaching. Yet the reconstruction of ‘what went wrong’ has embraced a very different narrative – a narrative that could be dubbed the ‘sorcerer’s apprentice syndrome’. In 2009, reflecting on how the ‘enlightened self-interest’ of the stewards of a bank could have got it so wrong, Greenspan came up with the following observation: “It is clear that the levels of complexity to which market practitioners, at the height of their euphoria, carried risk-management techniques and risk-product design were too much for even the most sophisticated market players to handle prudently.”[2] In a closely related vein, black-swan explanations (“risk is just too difficult to quantify”) made the failure of the sorcerer’s apprentice easier to understand. Both these analyses, I believe, are wide of the mark: the call holders did not fail because their programs became too complicated to handle, or because return distributions have power-law tails. They failed because they took one gamble too many – but still a gamble that, ex ante, it was rational for them to take. And, needless to say, the short-put taxpayers suffered the consequences of this ‘bad bet’ far more severely.

Arguably, the characterization of the interplay between long-call holders and short-put taxpayers I presented above is simplified to the point of caricature. As with all good caricatures, however, it still captures the essence of the problem. It has been argued, for instance, that the risk-taking of financial institutions is essential to bring about the financial innovations from which all of society will benefit. The payoff for the taxpayer is therefore not fairly represented by the orange curve in Fig 1, the market-efficiency apologists say, as it should also contain a call-like component. Now, I would not go quite as far as ex-Chairman Volcker, who famously quipped that the last financial innovation to benefit non-bankers was the ATM. However, whether the ever-more complex financial structures that have flourished in the new millennium have truly improved the welfare of society or have mainly facilitated the extraction of rents by the financial sector should be a topic of dispassionate empirical analysis, not a truism to be taken for granted. Capitalism accepts inequality as the price of increased overall welfare; but this increase in welfare must be proven, not postulated.

Another common objection to my argument is that the enlightened self-interest of the managers of a financial institution (the ‘franchise value’ of running a tight ship) would in practice align their decision-making more closely with the interests of the taxpayers than myopic utility maximization would suggest. There is some theoretical validity to the point. However, the institutional landscape has allowed, and still allows, blatant and pervasive conflicts of interest to persist. The main line of defence against potential abuses has been self-regulation (again, the ‘practitioners-know-best’ mantra at play). This state of affairs has effectively ensured that franchise value would count for very little, and short-term gains for much more.

In which way could financial regulation in practice be different? A good start would be to recognize that setting capital on the basis of internal risk models is intrinsically flawed. The naïve hope on the part of the regulators had been that granting more favourable capital treatment to banks with approved internal risk models would improve their practice of risk management (by the way, if the managers were so self-interestedly enlightened, why did they need the prodding in the first place?). In reality, internal model recognition has been embraced by banks as a fantastically effective engine for capital reduction. Indeed, banks had been facing similar risks for decades (if not centuries), but began devoting more and more resources to their risk models, with CROs literally moving from the basement to the corner office, only when the link with regulatory capital became explicit with the first Basel accord. This regulatory framework has been endlessly tinkered with but has not been repudiated – witness the recent obsessive interest of banks in modelling counterparty credit risk. The idea of a risk-sensitive capital allocation is not flawed. However, regulators should set the standards for these models in a prescriptive way. ‘Model improvements’ – the Trojan horses that launched a thousand capital reduction initiatives – should be initiated by the regulators themselves, not by the parties who will enjoy the leverage afforded by the attendant capital reduction.

Lack of space prevents me from going into further analysis – for instance, of the idea that bringing more and more products onto the trading book would subject their risk management to ‘market discipline’, an idea that has been part and parcel of the ‘markets-and-practitioners-always-know-best’ ideology. The results have been the hasty upfronting of profits (with the attendant payment of non-recoverable bonuses) and the creation of ‘fair weather’ assets that could enjoy a light capital treatment in calm market conditions, and then had to be shipped to the ‘bad bank’ as soon as the waters got a bit choppy. My more general point is that simply abandoning the ‘light touch’ approach has been a welcome but incomplete attempt to rethink the regulatory ideas that brought us to the 2008-2009 crisis. What still has to happen is the recognition that optimal risk decisions are not made in an institutional vacuum. Regulators should therefore start by taking intellectual and decisional ownership of the risk models that contribute to the determination of the capital of systemically important financial institutions. They should then revisit with an open mind all the regulatory provisions (such as the management of conflicts of interest, or accountancy practices) that have been built on the unquestionable axiom that ‘markets know best’.

The outcome will be far from perfect (regulators have their own payoff functions as well!), but it will hopefully be better aligned with the interests of the taxpayers whom regulators have a fiduciary duty to protect. There may well be some costs in terms of financial efficiency. However, the existing regulatory philosophy has dangerously blurred the boundary between innovations that benefit society and clever structures that enhance rent extraction. Surely, this must change.

References:

[1] Risk aversion does ensure that even call holders (as long as they cannot hedge their ‘delta’) will not assume infinite risk. However, the fact remains that the optimal risk they will take far exceeds the risk that the taxpayers would accept.

[2] Distinguished academics chimed in: “When a bridge collapses, no one demands the abolition of civil engineering. One first determines if faulty engineering or shoddy construction caused the collapse. If engineering is to blame, the solution is better–not less–engineering. Furthermore, it would be preposterous to replace the bridge with a slower, less efficient ferry rather than to rebuild the bridge and overcome the obstacle.” (Shreve, 2008)

Author

  • Professor Riccardo Rebonato

    Riccardo Rebonato is Professor of Finance at EDHEC Business School and EDHEC-Risk Institute. Professor Rebonato is a specialist in interest rate risk modelling with applications to bond portfolio management and fixed-income derivatives pricing. He sits on the board of directors of the International Swaps and Derivatives Association (ISDA) and the board of trustees for the Global Association of Risk Professionals (GARP). He was previously Global Head of Rates and FX Analytics at PIMCO; prior to that, he was global head of market risk and global head of the Quantitative Research Team at the Royal Bank of Scotland (RBS), and sat on the Investment Committee of RBS Asset Management.