
The insights of Frans Boshuizen, sector lead Financial Services at Amsterdam Data Collective (ADC), on behavioural modelling formed the basis for the ADC roundtable on October 22, 2020. Experts from ABN AMRO Bank, ING, Knab, NN Bank, Rabobank, NIBC, Triodos, Vivat and de Volksbank attended the roundtable, which was held online for the first time because of the Covid-19 restrictions.

The objective of the roundtable was to exchange insights on the application of behavioural models and on how their performance might be influenced by current economic and Covid-19 dynamics. This was done through several presentations by experts from the industry, followed by a group discussion.

During the roundtable, several presentations on behavioural modelling were given. Rabobank presented on behavioural modelling of retail mortgage loans, Knab shared insights on behavioural modelling of non-maturing deposits (NMDs), and ADC gave an overview of the current challenges of behavioural models. The presentations served as a basis for dynamic discussions amongst the participants. In this article, the most relevant insights are shared.

Modelling Conditional Prepayments

Prepayments are an important risk for banks, for example for the return earned on mortgage loans. If customers prepay on their mortgage loan, the bank receives fewer interest payments over the life of the loan, which negatively impacts returns. Retail banks can therefore truly benefit if they manage to model this behaviour realistically.

Prepayment models combine panel data with historical time series, which makes prepayment behaviour relatively complicated to model. The time series data consists mainly of macro-economic factors, such as interest rates, mortgage spreads or house prices. On the other hand, there is mortgage-specific information such as the redemption type, the type of contract (savings mortgage versus annuity), the age of the mortgage or the contractual mortgage rate.

Although mortgage loan datasets are usually very large, the data available to model the behaviour of all these customers is unfortunately not that large. One reason is that economic factors are usually only available on a quarterly basis and are biased towards the period in which they were observed. For example, decreasing interest rates had a different meaning before the credit crisis of 2008 than they have in the current low-rate environment. Estimating the conditional prepayment rate (CPR) on a biased dataset potentially leads to large discrepancies between forecasts and actuals, which in itself impacts many aspects of banking.

Currently, many banks prefer simpler, but more robust CPR models. This means that usually only a few macro-economic factors are used to model the CPR. Moreover, instead of modelling an individual CPR per loan, predefined cohorts are used. Altogether, this leads to stable, more robust model output. Going forward, many banks indicated their interest in using machine learning techniques for clustering, to create optimal cohorts to model on. However, since these clustering algorithms are less explainable and potentially lead to clusters that are not commonly used within the bank, more research and investigation will be required.

“Currently, many banks prefer to have simpler, but more robust CPR models.”
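To make the cohort-based approach concrete, the sketch below fits a simple CPR model on synthetic data, using only a rate incentive, house-price growth and loan age as drivers. All variable names, coefficients and figures are illustrative assumptions, not a description of any participant's model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    # Synthetic monthly panel: one row per (loan, month) observation.
    n = 5_000
    rate_incentive = rng.normal(0.5, 1.0, n)  # contract rate minus market rate (%)
    hpi_growth = rng.normal(2.0, 3.0, n)      # annual house-price growth (%)
    loan_age = rng.uniform(0, 120, n)         # months since origination

    # Hypothetical "true" prepayment propensity used to generate the labels.
    logit = -3.0 + 0.8 * rate_incentive + 0.05 * hpi_growth + 0.01 * loan_age
    prepaid = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([rate_incentive, hpi_growth, loan_age])
    model = LogisticRegression().fit(X, prepaid)

    # Monthly prepayment probability for one cohort, annualised to a CPR.
    smm = model.predict_proba([[1.5, 3.0, 60.0]])[0, 1]  # single monthly mortality
    cpr = 1 - (1 - smm) ** 12
    print(f"Estimated CPR for this cohort: {cpr:.1%}")

In practice, the same structure would be estimated per predefined cohort on the bank's own panel data; the annualisation from the single monthly mortality (SMM) to the CPR follows the standard 1 − (1 − SMM)^12 convention.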

Modelling Non-Maturing Deposits

Non-maturing deposits (NMDs) such as current accounts and variable-rate savings are an essential component of banking practice, and since clients are completely free to withdraw money as they wish, modelling their behaviour is crucial. This modelled behaviour has a major impact on the value and interest rate sensitivity of NMDs.

The option valuation approach in combination with a replication portfolio was discussed during the session. With this approach, the withdrawal behaviour and the price elasticity of the rate on the NMDs are replicated by a portfolio of fixed income instruments. The option valuation approach is used by most (larger) banks and is the standard in the market, although the specific model details may differ. Other approaches, which were popular in the mid-1990s when banks started to model NMD products with replicating portfolios (RP), are:

  1. Fixed profile: The RP has a fixed cash flow profile determined by expected outflow of the NMD product.
  2. Fixed investment rules: The cash flows of the RP are determined by a fixed investment rule, such that new inflows and the redeeming cash flows from the RP are reinvested in the RP. The goal is to stabilize the historic margin between the funds transfer price of the RP and the rate offered to the client.
  3. Hybrid approaches, where the option pricing model is combined with the fixed investment rule approach, such that the RP determined by the fixed rule is a delta or DV01 hedge for the value of the NMD product.


Replicating the cash flows using a fixed profile or a fixed investment rule is relatively simple to implement and allows the use of expert knowledge. Although implementation may be straightforward, the calibration of the investment rules is based on the bank's historic pricing policy in combination with the experienced inflow and outflow of the modelled product. This experience covers only a period of declining savings rates, so the model might not properly reflect the market when interest rates rise. The expert judgment related to this type of model can therefore also consist of calibrating the model on plausible future scenarios. The Asset-Liability Management (ALM) team must work closely with the pricing team to predict the price and volume behaviour of customers in these scenarios.
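As an illustration of option 2 above, the sketch below implements a fixed investment rule in which the maturing tranche plus the net change in deposit volume is reinvested at the prevailing market rate for a fixed tenor. The tenor, volumes and rates are illustrative assumptions.

    from collections import deque

    def replicating_portfolio_yield(volumes, market_rates, tenor_months=60):
        """Fixed investment rule: each month, the maturing tranche plus the
        net change in deposit volume is reinvested at the current market
        rate for a fixed tenor. Returns the portfolio yield per month."""
        # Simplification: spread the initial volume evenly over the tenor.
        tranches = deque([(volumes[0] / tenor_months, market_rates[0])] * tenor_months)
        yields = []
        for t in range(1, len(volumes)):
            matured_notional, _ = tranches.popleft()   # oldest tranche redeems
            net_flow = volumes[t] - volumes[t - 1]     # deposit inflow or outflow
            tranches.append((matured_notional + net_flow, market_rates[t]))
            total = sum(notional for notional, _ in tranches)
            yields.append(sum(n * r for n, r in tranches) / total)
        return yields

    # Illustrative inputs: slowly growing volumes, slowly falling rates.
    volumes = [100.0 + 0.1 * t for t in range(120)]
    rates = [0.03 - 0.0002 * t for t in range(120)]
    print(f"Portfolio yield after 10 years: {replicating_portfolio_yield(volumes, rates)[-1]:.2%}")

Because the portfolio yield is a moving average of past reinvestment rates, it lags the market rate, which is exactly what stabilizes the margin; it is also why a rule calibrated only on falling rates may behave poorly when rates rise.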

An option valuation model consists of the following components:

  • An interest rate scenario generator that can project risk-neutral scenarios.
  • A volume model that can determine outflow for each point in each generated scenario.
  • A price model that can set the price on the modelled portfolio at each point in each generated scenario.

The value of the NMD product is determined by a Monte Carlo approach:

  • Discount the cash flows produced by the volume and price model along each scenario.
  • Take the average over the paths of the discounted values.
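A minimal sketch of these components, assuming a one-factor Vasicek model for the risk-neutral rate scenarios and deliberately simple volume and price rules; every parameter value below is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    # 1. Scenario generator: risk-neutral short rate under a Vasicek model.
    n_paths, n_steps, dt = 10_000, 120, 1 / 12
    kappa, theta, sigma, r0 = 0.10, 0.01, 0.008, -0.005
    r = np.full((n_paths, n_steps + 1), r0)
    for t in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        r[:, t + 1] = r[:, t] + kappa * (theta - r[:, t]) * dt + sigma * dw

    # 2. Price model: the client rate follows the market with a discount, floored at zero.
    client_rate = np.maximum(0.5 * r, 0.0)

    # 3. Volume model: base outflow, higher when market rates exceed the client rate.
    volume = np.empty_like(r)
    volume[:, 0] = 100.0
    for t in range(n_steps):
        outflow = 0.01 + 2.0 * np.maximum(r[:, t] - client_rate[:, t], 0.0)
        volume[:, t + 1] = volume[:, t] * (1.0 - np.clip(outflow, 0.0, 1.0))

    # Monte Carlo valuation: discount the withdrawals plus client interest
    # along each path, then average the discounted values over the paths.
    discount = np.exp(-np.cumsum(r[:, :-1] * dt, axis=1))
    cash_flows = (volume[:, :-1] - volume[:, 1:]) + volume[:, :-1] * client_rate[:, :-1] * dt
    value = np.mean(np.sum(cash_flows * discount, axis=1))
    print(f"Monte Carlo value of the NMD liability: {value:.2f}")

A real implementation would use a market-calibrated scenario generator and behaviourally estimated volume and price models, but the structure is the same: simulate, generate cash flows, discount, average.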

There remain challenges with the option valuation approach:

  • Can the dependence of future price setting and outflow be modelled accurately?
  • Does there exist a natural floor on the savings rate of NMD products?
  • If a DV01 replicating portfolio is used, how should convexity be hedged (which might even become larger when a floor on the savings rate is introduced)?

In the current negative interest rate environment, the introduction of a floor in the rates offered to customers is a crucial decision to contemplate. It makes the model more complex and introduces another embedded option into the modelling of NMD products, which will drastically alter the convexity and the exposure to market volatility. The other challenge in the current rate environment is the interaction between current accounts and savings accounts. Since the rates offered on both products are close to zero, the products might show more similar behavioural characteristics than in the past. How does this impact the replicating portfolios for both products? Should they perhaps be modelled together as one portfolio?
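The convexity effect of such a floor can be seen with a bump-and-revalue check on a toy value function: without the floor, the value below is linear in the rate shift, while the kink introduced by the floor produces a non-zero second-order (convexity) term. The value function and all numbers are purely hypothetical.

    def make_value_fn(floor=None):
        """Hypothetical NMD value as a function of a parallel rate shift."""
        def value(shift):
            market = 0.0025 + shift               # shifted market rate
            client = market - 0.0020              # client rate tracks the market
            if floor is not None:
                client = max(client, floor)       # embedded floor option
            return 100.0 * (1.0 - 10.0 * client)  # stylised: value falls as the client rate rises
        return value

    bump = 0.0010  # 10 basis points
    for floor in (None, 0.0):
        v = make_value_fn(floor)
        dv01 = (v(bump) - v(-bump)) / 2                 # first-order sensitivity
        convexity = v(bump) - 2 * v(0.0) + v(-bump)     # second-order curvature
        print(f"floor={floor}: dv01={dv01:.3f}, convexity={convexity:.3f}")

A delta or DV01 hedge matches only the first-order term; the non-zero convexity of the floored product is exactly the residual risk raised in the bullet above.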

There is an overlap between the challenges that banks face in modelling NMDs and mortgage loans. For example, the low-rate environment has an impact on the behaviour of clients that we have not seen before. This raises the question of whether to calibrate models on forward-looking scenarios or backward on the historical data. And although most banks currently prefer the less complex models, since these are less impacted by the mentioned behavioural changes, models that are closer to reality (for example, with the introduction of a floor for NMDs) are being investigated thoroughly. In this investigation, the impact on ALM and on the interest rate hedging strategy (and possibly on hedge accounting) must be examined as well.

Complexity of Behavioural Models

It is always a balancing act to choose between complexity and robustness of the model. A more complex model might be closer to reality, but it could become very difficult to understand the model dynamics. With a simpler model, it is much easier to understand the results. However, is it able to cover the real-world complexity?

One way to narrow down this question is to use multiple models for the same purpose. In this way, financial institutions can assess how more complex models perform compared to simple ones. The starting point is usually a basic model, which serves as the benchmark for more complex models that might be developed going forward.

There are a couple of reasons to avoid complex models:

  • Overfitting: A group of variables might fit the calibration set well but become irrelevant in the future. The impact on the model’s performance in times of stress can be high, as overfitting leads to a set of explanatory variables that do not in fact explain the behaviour as it is. A toy illustration follows after this list.
  • Level 2 complexity in ecosystems: The outcome of the model can have an impact on the business strategy, which might have an impact on human behaviour, which in turn can have an impact on the calibration of the model. This creates a feedback loop that is hard to model in a robust manner.
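The overfitting point can be made concrete with a small synthetic experiment: adding spurious explanatory variables improves the in-sample fit while degrading the out-of-sample fit. Everything below is synthetic and illustrative.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)

    n_train, n_test = 60, 60
    x_true = rng.normal(size=(n_train + n_test, 1))       # one genuine driver
    noise_vars = rng.normal(size=(n_train + n_test, 40))  # 40 spurious variables
    y = 2.0 * x_true[:, 0] + rng.normal(size=n_train + n_test)

    for name, X in [("1 real driver", x_true),
                    ("1 real + 40 spurious", np.hstack([x_true, noise_vars]))]:
        model = LinearRegression().fit(X[:n_train], y[:n_train])
        print(name,
              "in-sample R2:", round(model.score(X[:n_train], y[:n_train]), 2),
              "out-of-sample R2:", round(model.score(X[n_train:], y[n_train:]), 2))

The richer model wins on the calibration set but loses on unseen data, which is precisely the failure mode that matters in times of stress.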

It was observed during the roundtable that, in general, the larger banks use more complex models than the mid-sized banks, mainly because of the larger volume of data available to them. However, there was a clear consensus amongst the participants that more data does not necessarily lead to better predictions. It might even give a false sense of comfort, as a lot of behavioural information might not be relevant for prediction purposes. Hence, understanding and interpreting your data properly is also extremely important.

Overall, the conclusion was that it is important to create realistic models, but not at all costs. In other words, overcomplexity should be avoided. And although more data usually improves the quality of models, this is not necessarily the case for behavioural models.

Measuring the Performance of Behavioural Models

Is there a way to build a scoring model that gives an indication of a model’s performance? More specifically, is there a way to quantify the effect of a bad model? These questions were discussed during the roundtable. One of the suggestions was to develop metrics that quantify poor model behaviour.

Some key models, such as the CPR model, feed heavily into other business processes. For example, the mortgage loan portfolio is hedged based on the projected cash flows. As the CPR has a significant impact on these projections, a bad prediction has large consequences. This emphasizes the importance of model performance once again.

To assess the performance, a top-down and bottom-up approach is recommended. The top-down approach compares the outcomes with the predictions (back testing). The bottom-up approach is the sensitivity analysis and robustness check on the key assumptions and parameters. One way to do so is to shock the key parameters and aggregate the impact of the shocks using an assumption for the correlation between the shocked parameters. In this way, the impact of the key parameters can be measured in one single number.

“To assess the performance, a top-down and bottom-up approach is recommended.”
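A minimal sketch of the bottom-up aggregation described above: shock each key parameter in isolation, record the value impact, and combine the impacts into a single number using an assumed correlation matrix, analogous to a variance-covariance aggregation. All figures are illustrative.

    import numpy as np

    # Value impact of shocking each key parameter in isolation (illustrative),
    # e.g. CPR level, rate sensitivity, volume trend.
    impacts = np.array([1.2, -0.8, 0.5])

    # Assumed correlation between the parameter shocks (expert judgment).
    corr = np.array([
        [1.0, 0.3, 0.1],
        [0.3, 1.0, 0.0],
        [0.1, 0.0, 1.0],
    ])

    # Aggregate the individual impacts into one single number.
    aggregate = float(np.sqrt(impacts @ corr @ impacts))
    print(f"Aggregated shock impact: {aggregate:.2f}")

Setting the off-diagonal correlations to one recovers a simple sum of absolute impacts (fully dependent shocks), while zeros give the most diversified aggregate; the correlation assumption itself is therefore a key expert input.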

Overall Conclusions

Behavioural modelling is a crucial aspect of banking, and it is at the heart of every ALM model. With more data available than ever, it is generally expected that the performance of models will improve. However, this is not necessarily the case for behavioural models, as historical data might not be representative of current times. This potential misrepresentation of behaviour in more complex models is an important reason to also work with simpler, more robust models, which can then be used as a benchmark for the more complex ones.

Although the focus of this roundtable was on modelling prepayments and savings, many more models used in the financial industry are directly or indirectly impacted by behaviour, for example the credit risk models used for IFRS 9 and IRB purposes.

Going forward, there are several interesting questions that remain a challenge for financial institutions:

  • Have we seen major behavioural changes in crisis situations in the past and how did the simple and more complex models perform in these situations?
  • Do we foresee behavioural changes in the near future given the current economic environment with prevailing low rates?
  • And if so, how will this impact the models we use?
  • How should the models and processes in ALM, hedging and pricing be adjusted to incorporate these behavioural effects?
  • How can the model risk of (complex) behavioural models be assessed and measured?

Financial institutions will have to spend time and resources on finding the answers to these questions. This will eventually lead to a better understanding of both their clients and the market in general.

Would you like to know more?

If you have any questions or need assistance in assessing, reviewing or even remodelling your current behavioural models, please contact Frans Boshuizen of Amsterdam Data Collective, fboshuizen@adc-consulting.com