Using Machine Learning To Calculate Regulatory Capital: When, Not If

A new breed of bank managers has steered business models away from risky business

Scope Ratings (Sam Theodore & Keith Mullin) | Regulation is one of the two main drivers of change for the banking industry (the other driver being technology). Supervisory adoption of ML for the Internal Capital Adequacy Assessment Process (ICAAP) may in turn change the way the industry and the markets assess bank credit risk.

More precisely, the future use by regulators of ML for ICAAP will give a powerful impetus to its adoption by market participants for their own assessment of bank credit risk, especially once the accuracy and clarity of ML models become more evident.

Traditional fundamental analysis is not geared to incorporating and interpreting millions of data points without relying on ML systems (and perhaps artificial intelligence in the future). Many argue that such a degree of complexity is not necessary.

However, when ML-based analysis starts playing a more prominent role – spurred by the regulatory ICAAP process – this argument will be less credible, especially if the analytical outcomes of the two approaches diverge.

Many banks already use expert systems, some of them incorporating ML, to support lending decisions, credit monitoring or loan pricing. But these have not been accepted for the Internal Ratings-Based (IRB) approach to calculating regulatory capital requirements.

The EBA notes that one reason why ML models have not been used in the IRB process is their complexity, which leads to difficulties in understanding and interpreting the results.

But the EBA also recognises that the complexity brought about by ML, where the relationship between inputs and outputs is more difficult to assess and understand, comes with an increase in predictive power. This is a valid argument and, in my view, will remain so for the near future. In all fields, expert-led technology advances much faster than the capacity and willingness of non-experts to adopt it.

Senior management in banks customarily takes a more cautious view of change than the new technology frontier might allow. Supervisors would be doubly cautious about the adoption of complex measurement and management models that are difficult to understand.

In its report, the EBA says institutions should “find an appropriate balance between model performance and explainability of the results”. But, again, the mere fact that a cautiously positive view on ML is showcased by a mainstream supervisor is a very important step.

One which in my opinion positions the EBA as one of the most forward-looking supervisory bodies globally.

ML systems can add value…

Acknowledging that ML systems could play an important role in the way financial services are delivered in the future, the EBA’s report aims to identify the challenges and benefits of using them for IRB models and to provide principle-based recommendations to banks for prudential purposes.

The EBA does see several benefits in using ML models. One is better risk differentiation, through greater discriminatory power and through tools that identify all relevant risk drivers and their interconnections.

Another is better risk quantification, through improved predictive ability and the detection of material biases. A third is more robust systems for validation and stress testing.
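As a rough illustration of what “greater discriminatory power” means in practice, the sketch below compares a scorecard-style logistic regression with a gradient-boosting model by out-of-sample AUC, the standard measure of how well a model ranks defaulters above non-defaulters. The synthetic data and the two models are assumptions for illustration only, not the EBA’s or any bank’s actual IRB methodology.

```python
# A sketch of "discriminatory power": how well a model ranks defaulters
# above non-defaulters, summarised by the out-of-sample AUC. The data are
# synthetic and the two models are illustrative assumptions, not the
# EBA's or any bank's actual IRB methodology.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic obligor data: 20 risk drivers, roughly a 3% default rate.
X, y = make_classification(n_samples=20_000, n_features=20, n_informative=8,
                           weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=0)

# Scorecard-style benchmark: a linear logistic regression.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ML challenger: gradient boosting can pick up non-linear interactions.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("logistic regression", logit), ("gradient boosting", gbm)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: out-of-sample AUC = {auc:.3f}")
```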

…But caution is needed

One challenge identified in the EBA report is the relationship between statistical models and human judgment. Indeed, the use of statistical models should be complemented by human judgment regarding risk differentiation. Banks do not rely blindly on statistical models; nor should they. But the inherent complexity of ML models may make human judgment harder to apply effectively if that complexity is not properly interpreted.

Another challenge is a possible lack of data to feed the Big Data models that ML relies on. Existing regulation (Article 180 of the Capital Requirements Regulation) requires data going back at least five years. In the case of natural persons in the EU, this hurdle could be amplified by the data-retention rules of the General Data Protection Regulation (GDPR).

One key component of the IRB approach is model and data validation. But a more complex model, such as one based on ML, can be harder to challenge effectively. The EBA report gives the example of hyper-parameters, which may require specific statistical knowledge that is not readily available in all institutions.
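To make the hyper-parameter point concrete, here is a minimal sketch of the kind of tuning choices a validator would need to understand and challenge for a gradient-boosting model, choices that have no equivalent in a traditional linear scorecard. The grid, the model and the synthetic data are my assumptions for illustration, not a supervisory standard.

```python
# A sketch of the hyper-parameter issue: a gradient-boosting PD model has
# tuning knobs a validator must be able to challenge, which a linear
# scorecard does not have. The grid, model and synthetic data below are
# illustrative assumptions, not a supervisory standard.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=5_000, n_features=20, n_informative=8,
                           weights=[0.97, 0.03], random_state=0)

param_grid = {
    "n_estimators": [100, 300],    # number of boosting rounds
    "learning_rate": [0.01, 0.1],  # shrinkage applied to each new tree
    "max_depth": [2, 4],           # tree depth controls interaction order
    "subsample": [0.7, 1.0],       # row sampling adds randomness
}

# Each combination is scored by 5-fold cross-validated AUC; the validator
# has to understand why the selected settings are defensible.
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, scoring="roc_auc", cv=5)
search.fit(X, y)
print("selected hyper-parameters:", search.best_params_)
print("cross-validated AUC:", round(search.best_score_, 3))
```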

The EBA refers several times to interpretability of results as being a key hurdle for the adoption of ML models by supervisors and banks’ senior managers and boards: “a good level of institutional understanding about their IRB models is a key element, and with even more relevance when ML models are used for regulatory purposes”.

The EBA suggests simple techniques to address this, such as graphical tools that show the impact of individual variables on the model’s output.
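A hedged sketch of such a graphical tool follows, using partial-dependence and permutation-importance plots from scikit-learn on the same kind of synthetic data assumed above. These specific plots are my illustration of the idea, not techniques prescribed by the EBA.

```python
# A sketch of the kind of graphical tool the EBA has in mind: partial-
# dependence and permutation-importance plots make the marginal impact of
# each input on a fitted model visible. The specific plots, model and
# synthetic data are assumptions for illustration only.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = make_classification(n_samples=5_000, n_features=10, n_informative=5,
                           weights=[0.97, 0.03], random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# How the predicted default probability moves with three of the inputs.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, 2])

# Which inputs matter most: the drop in AUC when each one is shuffled.
importance = permutation_importance(model, X, y, scoring="roc_auc",
                                    n_repeats=5, random_state=0)
plt.figure()
plt.bar(range(X.shape[1]), importance.importances_mean)
plt.xlabel("risk driver index")
plt.ylabel("drop in AUC when shuffled")
plt.show()
```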

One specific threat identified is over-fitting: a model optimised on its development sample may show high performance there that is not confirmed on other current or future portfolio samples.
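The over-fitting check itself is simple to illustrate: compare performance on the development sample with performance on a hold-out sample. The sketch below, again on synthetic data and with deliberately chosen parameters that are assumptions rather than any real portfolio or model, shows how an over-parameterised model can look excellent in development yet lose discriminatory power out of sample.

```python
# A sketch of the over-fitting check: compare development-sample AUC with
# hold-out AUC. A deliberately over-parameterised model memorises the
# development sample; a constrained one generalises better. Data and
# parameters are assumptions for illustration, not a real portfolio.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, n_informative=6,
                           weights=[0.95, 0.05], random_state=1)
X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.3,
                                                stratify=y, random_state=1)

# Deep trees and many boosting rounds can memorise the development sample.
overfit = GradientBoostingClassifier(max_depth=8, n_estimators=500,
                                     random_state=1).fit(X_dev, y_dev)
# Shallow trees with fewer rounds are less prone to over-fitting.
constrained = GradientBoostingClassifier(max_depth=2, n_estimators=100,
                                         random_state=1).fit(X_dev, y_dev)

for name, m in [("over-parameterised", overfit), ("constrained", constrained)]:
    dev_auc = roc_auc_score(y_dev, m.predict_proba(X_dev)[:, 1])
    hold_auc = roc_auc_score(y_hold, m.predict_proba(X_hold)[:, 1])
    print(f"{name}: development AUC {dev_auc:.3f} vs hold-out AUC {hold_auc:.3f}")
```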

Caution is also raised with respect to implementation integrity (which becomes harder to assure as complexity rises) and the accuracy of some banks’ vetting processes when it comes to Big Data.
