Austen Saunders and Matthew Willison
Banks come in different shapes and sizes. Do prudential regulations that work well for big banks work as well for small ones? To help us find out, we measure the effectiveness of some key regulatory ratios as predictors of bank failure. We do so using ‘receiver operating characteristic’ – or ‘ROC’ – analysis of simple threshold rules. When we do this, we find that we can use the ratios we test to make better predictions for large banks than for small ones. This provides evidence that an efficient set of regulations for large banks might not be as efficient for small ones.
One size fits all?
Prudential regulations were overhauled after the Global Financial Crisis of 2007–08 for the obvious reason that regulators didn’t want the same thing to happen again. This overhaul included a package of new international banking standards known as ‘Basel III’.
To make sure banks hold more capital when they take on more risk, Basel III made changes to how risk weighted capital ratios are calculated. It also added a leverage ratio which limits how big banks’ balance sheets can get without them raising more capital, regardless of how risky their assets are. Finally, it introduced the Net Stable Funding Ratio (NSFR) which requires banks to fund assets that they expect to hold for a long time with liabilities which aren’t likely to be withdrawn at short notice.
The Basel III standards must be applied to internationally active banks, which tend to be large. Regulators have a choice about whether to also apply them to non-internationally active banks, which are often (but not always) smaller.
Making banks of all sizes abide by Basel standards would make sense if applying them to small banks realised the same benefits as applying them to large banks. But what if that were not the case? Regulators would then need to think about whether a different set of regulatory requirements would be better for small banks.
Measuring the benefits of regulatory requirements
One way of measuring the benefits of regulatory requirements is to test how closely associated they are with past instances of bank failure.
In a recent Staff Working Paper, we apply this approach to 118 UK banks and building societies using regulatory data they submitted in 2007. We test whether we can use their risk weighted capital ratios, leverage ratios, and NSFRs to predict which of them got into trouble during the subsequent crisis (because the NSFR didn’t exist in 2007, we estimate a proxy for it using other balance sheet information for each bank). We measure which banks struggled most during the crisis using scores that supervisors allocated to record their judgements about how close each bank was to failing. We classify banks that received the worst possible score at any time between July 2007 and December 2008 as ‘distressed’.
The methodology we use to make predictions is borrowed from another recent Staff Working Paper. For each of the three regulatory ratios we test, we set a threshold. We predict that any bank that failed to meet at least one of those thresholds in 2007 would have gone on to become distressed. For example, if we set a leverage ratio threshold of 3%, a minimum risk weighted capital ratio of 10%, and an NSFR threshold of 100%, we predict that a bank which fell below any one of those thresholds in 2007 would have become distressed. We then see how many correct predictions (the ‘hit rate’) and how many incorrect predictions (the ‘false alarm rate’) our thresholds generate.
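To make the threshold rule concrete, here is a minimal sketch in Python of how one set of thresholds generates a hit rate and a false alarm rate. The five banks and their ratios are made up purely for illustration (they are not taken from the paper’s sample); the 3%/10%/100% thresholds are the example values above:

```python
# Illustrative only: made-up ratios for five hypothetical banks.
# Columns: leverage ratio %, risk weighted capital ratio %, NSFR %, distressed?
banks = [
    (2.5, 12.0, 105.0, True),
    (4.0,  9.5, 110.0, True),
    (5.0, 14.0,  95.0, False),
    (6.0, 15.0, 120.0, False),
    (2.4,  9.0, 100.0, False),
]

def predict_distress(lev, rwc, nsfr, lev_min=3.0, rwc_min=10.0, nsfr_min=100.0):
    """Predict distress if the bank falls below ANY of the three thresholds."""
    return lev < lev_min or rwc < rwc_min or nsfr < nsfr_min

flagged = [predict_distress(lev, rwc, nsfr) for lev, rwc, nsfr, _ in banks]
n_distressed = sum(1 for b in banks if b[3])
n_healthy = len(banks) - n_distressed

# Hit rate: share of distressed banks that the thresholds correctly flag.
hit_rate = sum(1 for f, b in zip(flagged, banks) if f and b[3]) / n_distressed
# False alarm rate: share of non-distressed banks that get flagged anyway.
false_alarm_rate = sum(1 for f, b in zip(flagged, banks) if f and not b[3]) / n_healthy

print(hit_rate, false_alarm_rate)
```

In this toy sample both distressed banks breach a threshold (a 100% hit rate), but two healthy banks are flagged too, illustrating how stricter thresholds buy hits at the price of false alarms.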
Next, we find the hit rates and false alarm rates produced by many combinations of thresholds (in fact, we test all 1.6 million possible combinations of the values reported for each ratio by the 118 banks in our sample). This allows us to identify optimal sets of thresholds that produce the lowest false alarm rate for each hit rate. There is a finite number of possible hit rates because there is a finite number of banks in our sample, and for each hit rate there is one minimum achievable false alarm rate. Sometimes the best result for different hit rates is the same (ie the lowest achievable false alarm rate is identical for several hit rates). In that case, the set of thresholds that produces the highest hit rate is the optimal one.
By plotting the optimal combinations of hit rates and false alarm rates, we can draw a ‘receiver operating characteristic’ or ‘ROC’ curve. A ROC curve shows the trade-off between the hit rate (on the vertical axis) and the minimum false alarm rate (on the horizontal axis) as the thresholds are varied. If we could make perfect predictions, the curve in Chart 1 would touch the top left corner because we would be able to achieve a 100% hit rate with a 0% false alarm rate. When a ROC curve lies along the 45-degree line, predictions are no more accurate than tossing a coin.
Chart 1: Hit rate and false alarm rates when we apply minimum thresholds for the risk weighted capital ratio, leverage ratio and NSFR to all banks in our sample
Chart 1 shows that predictions made for our sample of 118 banks are reasonably good so long as moderately high hit rates are acceptable. It shows that we can get a 50% hit rate with just a 12.5% false alarm rate. But we think that bank supervisors have a low tolerance for missing cases of distress, and that they therefore want predictions with a high hit rate. Chart 1 shows that this can be difficult. When hit rates are over 75%, the ROC curve is very close to the 45-degree line because false alarm rates are almost as high as hit rates.
However, when we split our sample by size, we find something interesting. When we make separate predictions for ‘large’ banks with assets over £5 billion, and for ‘small’ banks with assets below £5 billion, we get much better results for the large banks. Chart 2 shows that even when hit rates are above 75% we can make predictions with just a 25% false alarm rate for large banks. Comparable results are much worse for small banks, with false alarm rates for these banks being over 75%.
Chart 2: Results when we make separate predictions for large and small banks
Measure for measure
Why are predictions better for large banks than for small banks?
The differences in performance suggest that the regulatory ratios we test are better aligned with risks that cause large banks to fail than with those that cause small banks to fail.
Other factors that cause small banks to fail could include the quality of banks’ governance, as poor strategic decisions and weak oversight create risks which crystallise during periods of stress. We can’t directly observe the quality of banks’ governance in the past, but we do have access to scores supervisors allocated to record their judgements about the quality of each bank’s governance in 2007. When we add thresholds for governance scores to the thresholds for our three regulatory ratios, we find that we can make modest improvements to our predictions for small banks (reducing false alarm rates by about ten percentage points when hit rates are above 75%), but not for large banks.
Applying requirements efficiently
The regulatory ratios we test have all been introduced or redesigned since 2007–08. While these regulatory standards must be applied to internationally active banks, policymakers can choose whether to apply them to smaller, domestically active banks.
Prudential regulations make bank failures and financial crises less likely, but they also impose costs on banks that have to hold extra resources and build systems to monitor compliance. Therefore, as a general principle, regulations should be applied in a way that achieves the best balance between their benefits and their costs.
Our findings show that ratios that would have worked well as predictors of distress for large banks (for which they were designed) would have been less effective predictors for small banks. They also provide some evidence that other factors (in this case governance) can improve predictions for small banks but not for large ones. This suggests that the benefits of applying different regulatory requirements vary according to the size of the banks to which they are applied. Our findings therefore support the idea that when policymakers design regulations for small banks they should consider whether benefits might be more efficiently realised by applying requirements that differ from those applied to large banks. This might mean removing or varying requirements which work well for large banks but less well for small banks. When policymakers consider the benefits of applying specific requirements to small banks, they should consider how well they address the sorts of weaknesses that tend to cause small banks to fail.
Austen Saunders and Matthew Willison work in the Bank’s Policy Strategy and Implementation Division.
If you want to get in touch, please email us at firstname.lastname@example.org or leave a comment below.
Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.