Risk and crises
Financial models are widely blamed for underestimating and thus mispricing risk prior to the crisis. This column analyses how the models failed and questions their prominent use in the post-crisis reform process. It argues that over-relying on market data and statistical forecasting models has the potential to further destabilise the financial system and increase systemic risk.
Statistical pricing and risk-forecasting models played a significant role in the build-up to the crisis: they gave the wrong signals, underestimated risk, and mispriced collateralised debt obligations. I am therefore surprised by the frequent proposals for increasing the use of such models in post-crisis reforms – and I am not alone. If the models performed so badly, why aren't we questioning their increased prominence?
This may be because of the view that we can somehow identify the dynamics of financial markets during crises by studying pre-crisis data; that we can get from the failure process in normal times to the failure process in crisis times; that all the pre-crisis models were missing was the presence of a crisis in the data sample.
This is not true. The models are not up to the task. While statistical risk and pricing models may do a good job when markets are calm, they lay the seeds of their own destruction – it is inevitable that such models be proven wrong. The riskometer is a myth.
Models, momentum, and bubbles
The vast majority of risk models follow the same basic approach:
Take a chunk of historical observations of the data under study.
Create a statistical model providing the best forecasts.
Validate the model out of sample, but with historical data already known to the modeller.
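A minimal sketch of these three steps, using a rolling historical-volatility forecast on synthetic data (the data, window length, and split point are all illustrative assumptions, not a description of any particular bank's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: take a chunk of historical observations (synthetic daily returns here).
returns = rng.normal(0.0, 0.01, size=1000)

# Step 2: build a simple forecaster in sample -- here, the trailing
# historical standard deviation used as tomorrow's risk forecast.
window = 250
in_sample, out_sample = returns[:750], returns[750:]
sigma_hat = in_sample[-window:].std()

# Step 3: "validate" out of sample -- but this data was already known to
# the modeller, so the exercise can flatter the model (data mining).
violations = np.mean(np.abs(out_sample) > 1.96 * sigma_hat)
print(f"forecast sigma: {sigma_hat:.4f}, 5% band violation rate: {violations:.3f}")
```

On stationary data like this the backtest looks fine; the column's point is that this says nothing about performance across a structural break.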
This approach to modelling may be quite appropriate in the short run when there are no structural breaks in the data, so we can reasonably assume that data follows the same stochastic process during the entire sample period. Recent examples include the low-volatility periods of 1994-1997 and 2003-2007.
Even in such best-case scenarios, modelling is likely to deliver inferior forecasts. Data mining is rife: modellers tailor the model to the in-sample data, so the model performs well in the sample used for validation but badly on new data.
The main problem, however, is that such modelling affects the behaviour of market participants. If market participants perceive risk as low and returns as high because that is what happened in the past, we get a positive feedback loop between continually increasing prices and decreasing perceived risk.
This process is reinforced because of momentum effects induced by models. This was one of the main factors behind the asset price bubble before the recent crisis. Eventually this goes spectacularly wrong.
Models lay the seeds for their own destruction
Over time, market prices lose their connection with fundamentals, and the bubble bursts. Prices collapse and perceived risk increases sharply. Statistically, there is a structural break in the stochastic processes governing prices, invalidating pre-crisis forecast models.
This process is reinforced by endogenous changes in the behaviour of market participants, for example because of external constraints. Margin requirements during times of financial turmoil can lead to a downward spiral in prices, induced by ever-increasing margins, as discussed by Brunnermeier and Pedersen (2008). Similarly, as noted by Danielsson et al. (2011), financial institutions are subject to capital requirements and may be caught in a vicious feedback loop between falling asset prices, higher risk, and increasing demands for capital, leading to sales of risky assets, further exacerbating difficulties and leading to more demands for capital and reduced risk-taking.
Overestimating risk after a crisis: Why the banks aren’t lending
The risk forecast models provide an equally poor signal after a crisis has passed. Presumably, at that time investment opportunities are ample. However, backward-looking statistical risk forecast models still perceive risk as being high because observations from the crisis remain in the estimation sample for a long time after a crisis passes.
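This persistence effect is easy to demonstrate with a toy simulation (the volatility levels, regime lengths, and window size below are illustrative assumptions): a one-year rolling window keeps crisis observations in the risk estimate long after true volatility has returned to normal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calm -> crisis -> calm: true daily volatility jumps and then reverts.
calm1 = rng.normal(0, 0.01, 500)
crisis = rng.normal(0, 0.04, 50)
calm2 = rng.normal(0, 0.01, 500)
returns = np.concatenate([calm1, crisis, calm2])

window = 250  # roughly one trading year
rolling_vol = np.array([returns[t - window:t].std()
                        for t in range(window, len(returns))])

# 200 days after the crisis has ended, the forecast remains inflated,
# because the 50 crisis observations are still inside the window.
i_after = 550 + 200 - window  # index into rolling_vol for that date
print(f"vol 200 days post-crisis: {rolling_vol[i_after]:.4f} (true: 0.0100)")
```

The post-crisis estimate is roughly double the true volatility, which is the mechanism behind mechanically curtailed risk-taking described above.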
Because risk forecasts are a major input into the determination of bank capital (the calculation of risk-weighted assets), risk-taking by banks is curtailed so long as risk is perceived as high – for example, lending to high-risk small and medium-sized enterprises. This process is reinforced by current financial regulations, in particular Basel II, and remains in the Basel III proposals.
It is somewhat disingenuous for political leaders to complain about the lack of bank lending when an important factor is the financial regulations they themselves passed.
Data fit for purpose
These issues are exacerbated by the quality of market data, since risk and pricing models are estimated from recent samples of market prices.
Market prices reflect the value of assets at any given time, but that does not mean they provide a good signal of the state of the economy or are a good input into forecast models. The reason is that market prices reflect the constraints facing market participants.
The presence of external constraints of the type discussed above, such as margin and capital requirements, can drive prices to much lower levels than they otherwise would reach, because of the feedback loop between the constraint, prices, and risk.
In such scenarios, market prices reflect the constraints faced by market participants, so the high risk and low prices observed when constraints are binding are unlikely to persist when constraints bind less. Therefore, data from times of crisis provides a poor guide to the future, even to future crises, because the nature of the constraints is likely to change. Similarly, pre-crisis data is not very informative about what happens in crises.
Models are least reliable when needed the most
The consequence of these issues is that the stochastic process governing market prices is very different during times of stress than during normal times. We need different models for crisis and non-crisis periods, and we need to be careful in drawing conclusions from non-crisis data about what happens in crises, and vice versa.
This means that when we most need reliable risk forecasts, that is, during market turmoil or crises, the models are least reliable, because we cannot extrapolate from the failure process during normal times to the failure process during crises. At that point the relevant data sample is very small and the stochastic process different. Hence the models fail.
From a modelling point of view, this suggests that it may be questionable to use fat-tailed procedures, such as extreme value theory, to assess the risk during crises with pre-crisis data. Techniques such as Markov switching with state variables may provide a useful answer in the future. At the moment such models are few and far between.
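The regime problem can be made concrete with a two-state simulation in the spirit of Markov switching (all transition probabilities and volatilities below are illustrative assumptions, and this is a simulation of regime-dependent data, not an estimation procedure): a single unconditional volatility estimate mixes the two regimes and describes neither.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state Markov chain: state 0 = calm (sigma 1%), state 1 = crisis (sigma 4%).
# Illustrative transition matrix: crises are rare but persistent.
P = np.array([[0.99, 0.01],
              [0.05, 0.95]])
sigmas = np.array([0.01, 0.04])

n = 20000
states = np.zeros(n, dtype=int)
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])
returns = rng.normal(0, sigmas[states])

# Pooling the sample into one estimate overstates calm-period risk
# and badly understates crisis risk.
print(f"calm sigma:   {returns[states == 0].std():.4f}")
print(f"crisis sigma: {returns[states == 1].std():.4f}")
print(f"pooled sigma: {returns.std():.4f}")
```

A model that conditioned on the (unobserved) state would fit each regime well; the practical difficulty, as noted above, is identifying the crisis state with the tiny crisis sample available.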
Implications for policy
The problem of model reliability has been widely recognised following the crisis. Since then there have been very few, if any, methodological improvements of note in practical risk-forecasting techniques.
Given the widespread recognition of the lack of reliability in risk forecast models in the build-up to the crisis, their prominence in the post-crisis and regulatory reform process is surprising.
Many of the proposals for reforming bank bonuses call for more risk sensitivity. If the underlying risk models are unreliable, and even subject to manipulation as is usually the case, basing compensation regulation in financial institutions on risk forecast models seems rather counterproductive.
Similarly, the calculation of bank capital in Basel II and Basel III is based on risk models, because the capital ratio is calculated as capital divided by risk-weighted assets. For this reason, the introduction of the leverage ratio (based on total assets) in Basel III is welcome.
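The difference between the two ratios is simple arithmetic. A hypothetical balance sheet (all figures and risk weights below are purely illustrative) shows how model-driven risk weights shrink the denominator of the risk-based ratio while the leverage ratio uses total assets:

```python
# Illustrative balance sheet (all numbers hypothetical).
capital = 5.0
assets = 100.0

# Asset class -> (exposure, risk weight). Model-based weights compress
# the denominator of the risk-based ratio.
risk_weights = {"sovereign": (40.0, 0.0),
                "mortgages": (40.0, 0.35),
                "corporate": (20.0, 1.0)}
rwa = sum(amount * weight for amount, weight in risk_weights.values())

risk_based_ratio = capital / rwa   # Basel II/III risk-weighted capital ratio
leverage_ratio = capital / assets  # Basel III leverage ratio

print(f"RWA: {rwa:.1f}, risk-based ratio: {risk_based_ratio:.1%}, "
      f"leverage ratio: {leverage_ratio:.1%}")
```

Here the same bank reports a risk-based ratio near 15% but a leverage ratio of only 5%; if the risk weights are mis-estimated, only the latter is robust to the model being wrong.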
Furthermore, many approaches to macroprudential regulations are based on systemic risk measurements. To the extent such endeavours rely on market data and statistical models, policy based on those measurements is likely to be flawed.
Crises will not happen where people are looking
The challenge facing policymakers is even worse because they cannot look everywhere and will have to focus their attention on where they think systemic risk is most likely to arise. At the moment, a lot of attention is focused on the causes of the previous crisis, like the liquidity mismatches peculiar to the last decade.
However, the next crisis will not come from where we are looking. Just as last time, when the danger of conduits and structured investment vehicles caught everybody by surprise, the next crisis will come from an area nobody is watching. After all, bankers seeking to assume risk will look for it where nobody else is looking.
Economic models have recognised the inherent challenges caused by intelligent agents reacting to model predictions ever since the pioneering work of Bob Lucas. Most practical models for price and risk forecasting used by the industry and supervisors do not incorporate such features, reflecting the state of the art in macro modelling before rational expectations.
The presence of endogenous risk, and the resulting feedback effects between agent behaviour and model predictions, coupled with the low information content of market prices when agents are subject to external constraints, undermine the reliability of most risk models. Because of the way they are constructed in practice, such models tend to be systematically wrong, over-forecasting risk during crises and under-forecasting risk at other times.
For these reasons, over-relying on market data and statistical forecasting models has the potential to further destabilise the financial system and increase systemic risk.