Credit has been fueling economic growth and activity across the globe since the early 19th century (sources: Federal Reserve, Visual Capitalist). And the importance of credit has not diminished in the last decade (source: EBA). At the same time, however, the scrutiny of credit risk models, both within financial institutions and by external regulators, has only increased.
To provide financing to a vast number of consumers, SMEs and large corporates, banks use statistical models to select clients and to measure and assess their credit risk. The complexity of these models differs from institution to institution, but all institutions face the same simple question: do their models still behave the way they were intended to? To check this at any given point in time, an institution may run a set of statistical tests on its model outputs that backtest the model’s performance. Further, an institution may assess whether anything has changed in the environment in which the model operates, including regulation, market conditions and business strategy. This leads to a second question: does an institution need to monitor its models continuously?
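As an illustration of such a backtest, a common check on a PD model is whether the observed default rate in a portfolio stays within the band implied by the assigned PD. The sketch below uses a simple normal approximation to the binomial distribution; the function name, threshold and figures are illustrative, not taken from any institution's policy.

```python
import math

def pd_backtest(pd_assigned, n, defaults, z=1.96):
    """Flag a PD model for review if the observed default rate falls
    outside the ~95% confidence band implied by the assigned PD
    (normal approximation to the binomial distribution)."""
    observed = defaults / n
    se = math.sqrt(pd_assigned * (1 - pd_assigned) / n)
    lower, upper = pd_assigned - z * se, pd_assigned + z * se
    return lower <= observed <= upper

# 23 defaults among 1,000 obligors with a 2% assigned PD: within the band
print(pd_backtest(0.02, 1000, 23))   # True
# 45 defaults among the same 1,000 obligors: outside the band, review needed
print(pd_backtest(0.02, 1000, 45))   # False
```

In practice an institution would run such tests per rating grade and combine them with discrimination measures (e.g. AUC) and stability checks, following its internal standards.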
Periodic monitoring is a regulatory requirement
If your institution is supervised by an external party such as the ECB, the answer to this latter question is a clear “yes”. Institutions need to keep an eye on their models, as prescribed in CRR Article 185 (EBA PD/LGD Guidelines, paragraphs 217-219; TRIM guide 3.6.56). The regulatory guidance tells supervised entities to monitor their models at least annually. This monitoring ensures the models continue to be fit for purpose and use. Further, model monitoring creates awareness of the status of the model landscape and determines whether changes in regulation, market conditions, business strategy or underlying data demand adjustments to the models. Last but not least, monitoring ensures that the model keeps performing in line with what was expected during the model development phase.
Although the regulator requires each model to be monitored annually, in daily business an institution might benefit from more frequent reports. The ongoing pandemic highlights the need for fast and direct steering of the models in the landscape, from both a business and a risk management perspective. With this in mind, institutions might even benefit from monthly reports on the status of their models.
Automation will bring efficiency and consistency in reporting
How, then, does an institution incorporate this monitoring exercise into its routines? At first thought, it seems to require a lot of investment in both employee hours and infrastructure. Is there a 1-800-MONITOR number that you can call? Probably not, but here at ADC we have created a Model Monitoring Tool that will make things much easier for you and your institution. By automating the monitoring, we intend to reduce the time spent on this periodic task and to increase the quality of the process with consistent reporting standards. The tool also provides an overview of all models and their past performance.
What is the ADC Model Monitoring Tool? It is a Python-based monitoring tool for your current financial risk models that enables your organisation to automate and improve its monitoring process. ADC’s automated monitoring tool is suitable for testing the performance of PD, LGD and EAD credit risk models and provides a good basis for reporting. It takes your monitoring data, performs statistical tests according to your internal standards and policies, and creates a filled-in monitoring report. All analyses are done within the dashboard, which provides a clear overview of the results. The monitoring reports can be generated with a single click. The tool can be easily extended with additional functionality and allows for user-friendly scoping. And all of this is done without having to write a single line of code.
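Conceptually, such a policy-driven monitoring run maps each test in the institution's policy onto the portfolio data and collects pass/fail results for the report. The tool itself is proprietary, so the sketch below is purely illustrative: every name, test and threshold is a hypothetical stand-in, not the tool's actual API.

```python
# Illustrative sketch of a policy-driven monitoring run.
# All names and thresholds are hypothetical, not the ADC tool's API.
def run_monitoring(portfolio, policy):
    """Apply each policy test to the portfolio and collect the results."""
    report = {}
    for name, (test_fn, threshold) in policy.items():
        value = test_fn(portfolio)
        report[name] = {"value": value, "pass": value <= threshold}
    return report

# Example policy: mean absolute PD calibration error must stay below 1pp
policy = {
    "pd_calibration_error": (
        lambda p: sum(abs(x["pd"] - x["observed"]) for x in p) / len(p),
        0.01,
    ),
}
portfolio = [
    {"pd": 0.02, "observed": 0.025},
    {"pd": 0.05, "observed": 0.048},
]
report = run_monitoring(portfolio, policy)
print(report["pd_calibration_error"]["pass"])   # True
```

Separating the policy (which tests, which thresholds) from the engine (how tests are run and reported) is what makes the consistent, one-click reporting described above possible.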
How can your organisation start using the tool? We implement the ADC Model Monitoring Tool together with your internal team, using an agile way of working. At the end of every sprint you have a working tool; we then evaluate it, collect user feedback and determine the focus areas for the following sprint. In six sprints, 12 weeks, you and your organisation are ready to automate your model monitoring tasks. For one of our clients, a leading financial institution, implementation of the tool reduced the number of FTEs allocated to monitoring from five to three.
Would you like to know more?
For more information or a demo of the tool, please contact Frans Boshuizen of Amsterdam Data Collective, email@example.com