The Complexity of Regulation: The Eggshell Paradox

by Stephen Scott, Scott Page

Jun 07, 2023

Compendium

Financial system regulation occurs within a multi-dimensional, multi-player, multi-level, complex adaptive system. Effective regulation of such systems requires “many model thinking,”1 through which regulators apply multiple, diverse frameworks to identify firm- and system-level risks. To ensure robustness,2 overseers must maintain an arsenal of potential responses commensurate with the broad array of potentially destabilizing risks faced by financial institutions and financial markets.3 That same logic implies that, when the system produces new causes or dimensions of risk, regulatory attention, and the models that direct it, must adapt to afford new approaches to new risks.

All models are wrong, but some are useful.
GEORGE BOX (1974)

Citing lessons learned through the financial crisis, renowned economist Sir John Kay and former Bank of England Governor Mervyn King promote a multi-model approach powerfully in their magisterial Radical Uncertainty.4 “The search for a single comprehensive forecasting model of the economy is fruitless,” they write, observing that, in the run up to the financial crisis, “The pretense that every important macroeconomic issue could be explained in terms of a single model was a major error.”

We are not against models. Regulation requires models. Without models to discipline our logic, regulators will surely fail. Humans suffer from a laundry list of cognitive shortcomings, and models help to correct them. Models force us to clarify assumptions, to think logically, to question data, and to test causal and correlational claims. The problem, as British statistician George Box famously quipped, is that “all models are wrong.” We agree. Any single model is wrong. Many models, though, can be useful.

As Kay and King recount, in 2010, the European Central Bank released a technical paper in which it reviewed the performance of its own model for the European economy, in self-congratulatory tones. Notably, the paper did not reference the model’s performance in the 2007-08 time-period. Jean-Claude Trichet, then president of the ECB, took a different view: “As a policy-maker during the crisis I found the available models of limited help,” he wrote. “In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools.”5

In this article, we argue that when the attention of regulators becomes absorbed by the task of monitoring historical causes of instability (‘known knowns’), the likelihood that they will fail to observe new risks (‘unknown unknowns’) runs high. Potential new causes of risk can include the introduction of new financial instruments, changes in behavior among depositors, or shifts in organizational culture or in the conduct of employees within firms. 

Mr. Trichet’s problem stemmed from the limited set of tools that were considered “conventional” at the time. The ECB suffered from an over-reliance on traditional tests of stability, ill-suited to the circumstances it encountered. To avoid this problem, Kay and King urge us to consider models (in the plural) as akin to tools found in a plumber’s van: “helpful in one context and irrelevant in others.” As with the tools in the van, they rightly argue, “there may be several models which contribute to the solution of a specific problem.” The regulator must have the right toolset and know which tool to apply where and when. 

THE EGGSHELL PARADOX

Devoting organizational capacity to previously existing risks, while ignoring new dimensions of risk, creates what we will call the eggshell paradox: regulators congratulate themselves for ‘winning’ the game on the dimensions they have elected to monitor — the ‘eggshell’ — while remaining blind to what is growing inside the egg, soon to burst forth.

To make the metaphor stick, our logic involves several steps. First, we want to think of banks and their regulators as involved in a competitive, complex game. By ‘competitive’ we mean that the players’ incentives differ. Banks seek profits, market share, and improved reputations. Regulators care about firm-specific and system level robustness. By ‘complex’ we mean that the game does not settle into an equilibrium as economists tend to assume.6 Financial system regulation better resembles multidimensional chess than tic-tac-toe.

The financial industry does not operate in equilibrium.

Though a ‘game’ in the formal, theoretical sense, regulatory activities, and the responses among firms that they provoke, might better be characterized as aspects of a complex adaptive system — a perspective urged by Lord Robert May, then Chief Scientific Adviser to the UK Government, and Andy Haldane, past Chief Economist for the Bank of England, following the financial crisis. We agree: the financial industry does not operate in equilibrium. It is complex, and we see this reflected in time series of major economic variables which tend to lie somewhere between ordered and random.7

What is a Complex Adaptive System?

Complex adaptive systems contain diverse agents, often operating with independent purposes. These agents interact within networks, so their actions bump into one another, producing non-linear effects at both the local and aggregate levels.8 And they respond individually as well as collectively to the local and aggregate phenomena that they produce — a phenomenon often referred to as ‘reflexivity.’9

In the financial system, regulators monitor risk and set rules. Supervisors test that those rules are being followed properly. And banks respond. Bankers are unlikely to attempt to evade the rules wholesale (though some may) but they do attempt to maximize their own objectives, subject to the constraints that the rules have imposed. They also learn from others. In the run up to the 2008 crash, for instance, banks increased leverage ratios to match those of competitors. That behavior, a form of social learning, increased systemic fragility.
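The leverage-matching dynamic described above can be sketched in a few lines. In this toy simulation, each bank nudges its leverage toward the most aggressive competitor each period; all numbers are hypothetical and not calibrated to pre-2008 data.

```python
import random

random.seed(3)

# Toy social-learning dynamic: each period every bank nudges its leverage
# toward the sector maximum, imitating the most aggressive competitor.
# All numbers are hypothetical and not calibrated to pre-2008 data.
n_banks = 20
leverage = [random.uniform(10, 15) for _ in range(n_banks)]

for _ in range(30):
    most_aggressive = max(leverage)
    leverage = [lev + 0.2 * (most_aggressive - lev) for lev in leverage]

# Imitation compresses the distribution near the top: the whole sector ends
# up uniformly highly levered, so a single shock now hits every bank alike.
spread = max(leverage) - min(leverage)
assert spread < 0.1
```

The point of the sketch is the final state, not the particular update rule: social learning homogenizes the sector at its most aggressive setting, which is precisely what makes it systemically fragile.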

Through adaptation, informed by individual and social learnings, aggregate patterns emerge and take up residence within the aforementioned space between order and randomness.10 Upward and downward trends, bubbles and crashes arise at irregular, unpredictable intervals. In the formal language of statistics, complex systems produce nonstationary time series (data sets that track some sample over time).11, 12 Patterns come and go. Small fluctuations may be followed by large crashes.13 As Keynes once quipped, the only thing in equilibrium is a dead man. The financial system is very much alive. Adopting a complexity perspective, we see fluctuations as natural and not as shocks to an equilibrium. The task of the regulator thus shifts from steering the system back into equilibrium to harnessing complexity.14
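Nonstationarity is easy to demonstrate with the textbook example of a random walk, whose variance grows with time. The sketch below is a minimal illustration, not a model of any real market series; it shows why statistics estimated in one period cannot be trusted in another.

```python
import random

random.seed(0)

def walk_position(steps: int) -> float:
    """End point of a unit-variance random walk after `steps` steps."""
    return sum(random.gauss(0, 1) for _ in range(steps))

def empirical_variance(steps: int, n_walks: int = 2000) -> float:
    """Variance of the walk's position at a fixed time, across many walks."""
    ends = [walk_position(steps) for _ in range(n_walks)]
    mean = sum(ends) / n_walks
    return sum((x - mean) ** 2 for x in ends) / n_walks

# A stationary series would show the same variance at any time; the random
# walk's variance instead grows roughly linearly -- the mark of nonstationarity.
assert empirical_variance(1000) > 3 * empirical_variance(100)
```

A model calibrated on the early, low-variance window would badly misjudge the later one, which is the regulator's predicament in miniature.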

Many Model Thinking

If we take as given that the financial system is complex, then a straightforward algebraic logic explains why micro- and macro-prudential regulation requires many models. That logic relies on information theory. The amount of information in a system can be measured by its minimum description length (Kolmogorov complexity). A complex system has high information content.

Describing the US financial system would thus take a while, and so we can characterize it as complex. By comparison, the models and stress tests used by regulators to monitor the financial system must be relatively simple. They have much shorter description lengths — less Kolmogorov complexity. To operate with a sufficient diversity of sensors, therefore, requires that we use many models. To rely on a single model invites disaster.
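Kolmogorov complexity is not itself computable, but compressed size is a standard computable proxy for description length. The sketch below uses Python's `zlib` to make the point: a regular (ordered) sequence has a short description, while an irregular sequence of identical length does not. The strings are arbitrary stand-ins, not financial data.

```python
import random
import zlib

def description_length(s: str) -> int:
    """Compressed size in bytes: a computable stand-in for Kolmogorov
    complexity, which cannot itself be computed."""
    return len(zlib.compress(s.encode()))

random.seed(0)
alphabet = "bcelsuy- "
ordered = "buy-sell " * 200  # highly regular: admits a very short description
jumbled = "".join(random.choice(alphabet) for _ in range(len(ordered)))

# The irregular sequence of the same length needs a far longer description.
assert description_length(jumbled) > 5 * description_length(ordered)
```

A simple model is like the `ordered` string: its description is short. A system whose shortest description is long cannot be captured by any one such model, which is the algebraic core of the argument.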

To rely on a single model invites disaster.

Decades of evidence from machine learning and statistics demonstrate that ensembles of models, on average, outperform individual models. To be effective as an ensemble, those models must be diverse: they must look at different information in different ways.

Such ensembles can be constructed specifically so as to yield robust predictions. Notably, in constructing an ensemble, one does not seek to incorporate ‘the best’ models. Instead, selection gives a boost to models that are correct when others fail.15 Such boosting reduces errors by creating many sets of eyeballs with each set looking at different information.
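The ensemble effect can be illustrated with a Condorcet-style simulation, in which 'diversity' is idealized as statistical independence among models. The accuracy figures below are illustrative only.

```python
import random

random.seed(42)

def sensor_is_correct(p: float) -> bool:
    """One model ('sensor') makes the right call with probability p."""
    return random.random() < p

def majority_accuracy(n_models: int, p: float, trials: int = 10_000) -> float:
    """Fraction of trials in which a majority of independent models is right."""
    wins = 0
    for _ in range(trials):
        votes = sum(sensor_is_correct(p) for _ in range(n_models))
        if votes > n_models / 2:
            wins += 1
    return wins / trials

solo = majority_accuracy(1, 0.65)    # one decent model
crowd = majority_accuracy(15, 0.65)  # fifteen equally decent, independent ones
assert crowd > solo                  # the diverse ensemble wins
```

The gain comes entirely from independence: if all fifteen models looked at the same information in the same way, their errors would be perfectly correlated and the ensemble would do no better than any one of them.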

Adopting a complexity perspective does not mean replacing existing models with ecologically based models, such as those that consider the distribution of derivatives across firms or networks of interbank loans. Models that measure internal compliance risks and responsiveness to interest rate changes through stress tests remain highly valuable. A complexity perspective, however, advocates for using both approaches, and more besides.16

But adopting a many-model approach may be easier to urge than to accomplish. Complex adaptive systems have been usefully described as “whitewater worlds” in which actors must adapt to a swift-moving and ever-changing river, attending closely to environmental cues and constraints. Constructing new models in such a world presents a challenge: these new models require new nouns (categories) and verbs (causal forces) that capture sources of risk in this whitewater world. Prudential supervision demands that those models identify these new risks as they are forming.17

Yet, as argued earlier, a focus on cues that were informative in the past means that, in a dynamic context, one may inadvertently misread one’s current position and trajectory while riding the whitewater. In addition to developing a command of the necessary skills, tools, models, and knowledge, therefore, one must also develop the instincts to navigate between attention and inattention. Focused and peripheral vision must always be in contested play.18

Beware the cuckoo

We are not accusing regulators of sitting still. To the contrary, we recognize that they are ever seeking to refine and improve upon existing models and that they introduce new stress tests and audit measures regularly. Our point is that these new strategies and tests tend to include familiar dimensions and to ignore ‘unknown unknowns’ and things that are difficult to measure. Improving upon existing models while ignoring potential new dimensions of risk creates an illusion of adaptive competence. That is, even though the regulator is constantly adapting, it nevertheless continues to suffer from the same blindspots. And this, in turn, increases both the probability of catastrophic risk and the magnitude of its effects.

Indulge us in an expanded metaphor borrowed from ecology. What we’re arguing here is that regulators are so busy monitoring the egg that they miss the cuckoo. One of nature’s most successful parasites, cuckoos lay their eggs in the nests of other bird species, successfully perpetuating themselves without incurring the costs of child rearing. To succeed, cuckoos must fool unwitting host species into warming cuckoo eggs and raising cuckoo fledglings. They accomplish this through imprinting. When hatched, a baby cuckoo encodes the color and pattern of the eggs in its clutch. In adulthood, the bird lays similarly patterned eggs with the intent of fooling potential hosts.

Regulators are so busy monitoring the egg that they miss the cuckoo.

Now, like bank regulators, these other birds do not sit around passively. They no more wish to raise cuckoos than regulators want to see risky banks operating in the market. So they, too, adapt. They devise new tests.

The redstart bird, a common victim of the cuckoo’s ploy, evolved a cognitive architecture that allows it to identify imposter eggs. When a redstart spies a cuckoo egg — one with a suspicious number, size, or spatial distribution of speckles — it boots it (well, beaks it) out of the nest. Redstarts are one of nature’s many regulators looking for signs of potential risk and taking action when necessary. The depth and sophistication of the redstart’s audit is quite impressive, dare we say rivaling that of regulators on a good day. Only the most convincing cuckoo egg passes muster.

Here’s where the analogy takes on relevance. Redstarts, having only bird brains, devote all their “regulatory” capacity to identifying cuckoo eggs. They pay no attention to the hatchlings, which within a few weeks have become three times the size of their adoptive mothers. Mama redstart, having allocated none of her bird brain to hatchling identification, feeds these enormous cuckoos as her own. She simply cannot see the hatchling for the egg. Her success at imposter egg identification creates an illusion of adaptive competence.

Lest we laugh too hard at the redstart’s hatchling blindness, didn’t financial regulators act similarly when they failed to notice the growing bundles of toxic subprime mortgages during the run-up to the financial crisis? Or when they failed to take decisive action upon observing that firms had not adjusted their business models amidst fast-rising interest rate risk, as we see today? Financial system cuckoos take many forms. They can grow within the ‘dark matter’ of an organization: the norms, dispositions, and cultures that guide behaviors.19 They can arise within technologically interconnected networks. They can take the form of new financial instruments. And they can emerge as banks grow in size, complexity, and opacity without commensurate managerial capacity.20

Unlike black swans, which swoop in and land in full plumage, these cuckoos often grow right before our eyes. We do not see them because we have devoted our attention to the egg.

THE EGGSHELL PARADOX AS AN ATTENTION PROBLEM

The illusion of adaptive competence reflects an attention problem. Regulatory agencies, like all organizations, possess limited attentional bandwidth and must do their best to allocate it wisely. Given those limits, in striving for the optimal allocation of attention (and inattention), errors are inevitable. Attention constraints produce behaviors that resemble behavioral biases, including failure to account for base rates and sample sizes, for instance.21

If a regulator assumes stationarity, then it can construct an ‘attention portfolio’ to balance risks against costs, much like one builds an investment portfolio that weighs risks and rewards.22 Optimal portfolios devote more attention to high value, high variance dimensions.

In other words, they allocate their limited attention to the historically relevant sources of risk. Those dimensions will also be the most easily measured. More opaque sources of risk will be left unaddressed, often knowingly. The sociologist Daniel Yankelovich referred to this as the McNamara Fallacy: “presume that what can’t be measured easily really isn’t important.”
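As a minimal sketch of such an attention portfolio: the toy allocation rule below (weights proportional to value times variance) is our own illustration, not the Sims rational-inattention model cited above, and all dimension names and numbers are hypothetical.

```python
# Toy 'attention portfolio': weights proportional to value x variance.
# An illustrative allocation rule, not the Sims (2003) model; all
# dimension names and numbers are hypothetical.

risk_dimensions = {
    # name: (value of monitoring, historical variance)
    "credit_risk":    (0.9, 0.8),
    "liquidity_risk": (0.7, 0.5),
    "conduct_risk":   (0.4, 0.2),  # opaque, hard to measure, historically quiet
}

scores = {name: value * var for name, (value, var) in risk_dimensions.items()}
total = sum(scores.values())
attention = {name: score / total for name, score in scores.items()}

# Weights sum to one, and the 'quiet' dimension is nearly ignored.
assert abs(sum(attention.values()) - 1.0) < 1e-9
assert attention["conduct_risk"] < attention["credit_risk"]
```

Note how the hard-to-measure dimension receives almost no attention under this rule: precisely the blind spot the McNamara Fallacy describes.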

Fixed attention invites failure.

Regulators should not assume stationarity. They are operating in a complex system, not an equilibrium system. In pursuit of market share and profits, financial firms adapt. They develop new internal incentive structures, new financial instruments, and reorganize the financial networks in which they operate. As a result, new dimensions of risk emerge. Regulators must be prepared, therefore, to expand the regulatory perimeter as circumstances demand. Fixed attention invites failure: when the risks of algorithmic trading were ignored, flash crashes followed.

THIS TIME IS DIFFERENT

In a complex system, learning must be proactive. Alas, all too often, it is reactive. “Regulatory standards for SVB [Silicon Valley Bank] were too low,” wrote Michael Barr, US Federal Reserve Vice Chair for Supervision, in a letter that accompanied his office’s recent investigation into the causes of the bank’s collapse in March this year.23 “The supervision of SVB did not work with sufficient force and urgency, and contagion from the firm’s failure posed systemic consequences not contemplated by the Federal Reserve’s tailoring framework,” Barr continued.

The discussion of extending capital cushions heard in the wake of recent bank failures serves as a good example of how selective attention creates the eggshell paradox. This strategy merely extends existing strategies that operate along the same dimensions of risk that were found to have been central to the financial crisis. Increasing the size of capital cushions, and extending the range of firms that are required to maintain them, will only go so far. No capital cushion — of any size — can withstand a bank run triggered by a loss of faith in the institution or the capabilities of its leadership.

The risk management and supervision practice that could be called “narrowly we roll along” must change. This time really is different. Regulators never step in the same river twice. They must therefore diligently seek to identify new dimensions of risk and new data of relevance. They must adopt new risk metrics and develop new models.

Even with a many-model mindset, regulatory failures will occur: a new source of risk may remain opaque until too late. Fair enough. But a strategy of ever more elaborate stress tests, applied to a fixed set of risk dimensions, would be far worse. So long as regulators make no attempts to identify emerging risks within the “dark matter” residing inside firms and markets, they allow those risks to grow.

Regulators’ tests will continue to certify firms and systems as financially sound, while baby cuckoos fill the nest.

References
  1. Page, Scott (2018) The Model Thinker. Basic Books, New York.
  2. Robustness means the maintenance of system functionality. Stability means returning to the same configuration.
  3. Ashby W Ross (1956) An Introduction to Cybernetics. Chapman & Hall, London.
  4. Kay, John and Mervyn King (2020) Radical Uncertainty, Bridge Street Press, London, UK.
  5. Trichet, Jean-Claude (2010) “Reflections on the Nature of Monetary Policy Non-Standard Measures and Finance Theory,” ECB Central Banking Conference, Frankfurt, Germany. [LINK]
  6. Galla, Tobias and J. Doyne Farmer (2013) “Complex Dynamics in Learning Complicated Games,” Proceedings of the National Academy of Sciences 110: 1232-1236.
  7. Gell-Mann, Murray (1994) The Quark and The Jaguar: Adventures in the Simple and the Complex. W.H. Freeman, New York, NY.
  8. Krakauer, David, “What is Complex System Science,” Santa Fe Institute. [LINK]
  9. Examples of complex systems include civilizations, cities, the nervous system, the Internet, and ecosystems of all stripes.
  10. Galesic, Mirta, et al. (2023) “Beyond Collective Intelligence: Collective Adaptation,” Journal of the Royal Society Interface 20: 20220736.
  11. Complex systems have long been understood as creating difficulties for regulators. Designing for robustness by adding redundancies often backfires by increasing complexity, incentivizing shirking, and instilling a confidence that encourages risky behavior (Perrow 1999).
  12. Perrow, Charles (1999) Normal Accidents: Living with High Risk Technologies, Princeton University Press, Princeton, NJ.
  13. In the financial sector some actors, namely government and firms, can themselves be viewed as complex adaptive systems nested and operating within a larger complex adaptive system.
  14. Axelrod, Robert, and Michel Cohen (2000). Harnessing Complexity: Organizational Implications of a Scientific Frontier. Free Press. New York, NY.
  15. Breiman, Leo (2001) Random Forests. Machine Learning 45: 5–32.
  16. Haldane, Andrew and Robert May (2011) “Systemic risk in banking ecosystems,” Nature 469: 351–355.
  17. Tump, A.N., M. Wolf, P. Romanczuk, and R.H.J.M. Kurvers (2022) “Avoiding Costly Mistakes in Groups: The Evolution of Error Management in Collective Decision Making,” PLoS Computational Biology.
  18. Pendleton-Julian, Ann and John Seely Brown (2018) Design Unbound: Designing for Emergence in a White Water World, Volume 1: Designing for Emergence (Infrastructures), The MIT Press, Cambridge, MA.
  19. Mulgan, Geoff (2021) “Loops for Wisdom: how to bridge the wisdom gaps in the life of citizens, governments and societies,” Demos Helsinki.
  20. Kress, Jeremy C. (2019) “Solving Banking’s ‘Too Big to Manage’ Problem,” 104 Minnesota Law Review 171. Available at SSRN: [LINK]
  21. Gabaix, X. (2019) “Behavioral Inattention,” in Handbook of Behavioral Economics, edited by D. Bernheim, S. DellaVigna and D. Laibson, Vol. 2, Elsevier, pp. 261-343.
  22. Sims, C. A. (2003). Implications of rational inattention. Journal of Monetary Economics, 50(3), 665–690.
  23. Barr, Michael (2023) “Review of the Federal Reserve’s Supervision and Regulation of Silicon Valley Bank,” Board of Governors of the Federal Reserve System, April 28, 2023. [LINK]
