It’s an exciting time ahead for the new generation of credit model developers and validators.

While machine learning-based models can be diverse and multi-dimensional, this blog post will focus specifically on the use of these models in the world of Credit Risk.

Despite perceptions, machine learning (ML) is neither new nor overly complex as a technique. And though the speed of adoption of ML-based techniques differs across firms and industries, we are finding more and more applications and use cases for this powerful analytical enabler. Arguably, progress has been slow in the area of Credit Risk, primarily because we are emerging from a Comprehensive Capital Analysis and Review (CCAR) era focused on transparency and stability. One should note, however, that Credit Risk is pivotal to the acceptance of ML-based models, as this is the area where the technique is expected to face the most scrutiny from the business and regulators.

Model developers are taking small but relevant steps toward gradually introducing ML-based models into mainstream credit decisioning. The advantages of ML-based models are well known and worth exploring across multiple use cases. In the financial services industry, specifically in the Credit Risk area, we primarily see ML-based models used in credit acquisition and line management initiatives. Regulators require Generation (gen) 0 and Generation 1 ML-based models to be stable and explainable.[1] A case in point is the fair lending considerations that mandate banks' credit decisions not be biased by factors such as gender or race.[2] Modelers are therefore not only trying to improve predictive power but are also looking for causality and interpretability. For the model validator, although dimensions like conceptual soundness, variable selection, and outcome analysis remain similar, there is an increased need to look at these dimensions through new lenses.

Key Validation Dimensions

To start with, one rarely sees self-learning credit models in the industry today. By this I mean that we are still talking about data lakes, not data streams, in the world of credit decisioning. The primary driver is the need for factor stability in the decision process: the usual assumption is that a static lake is deep and wide enough to capture the requisite trends. Validators should appreciate this cautious approach, but there is a strong case for evaluating alternative model results and the robustness of the use case. The first question, therefore, is whether the model delivers adequate performance improvement in its gen 0 and gen 1 form.
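
As an illustration of that first question, here is a minimal sketch, in Python with scikit-learn on synthetic data, of a benchmark comparison; all names, settings, and data are illustrative assumptions, not any bank's actual setup:

```python
# Hedged sketch: does the candidate ML model outperform a simple,
# transparent benchmark on the same holdout? Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Transparent benchmark vs. candidate ML model, fit on the same sample.
benchmark = LogisticRegression(max_iter=1000).fit(X_train, y_train)
candidate = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

auc_benchmark = roc_auc_score(y_test, benchmark.predict_proba(X_test)[:, 1])
auc_candidate = roc_auc_score(y_test, candidate.predict_proba(X_test)[:, 1])
print(f"Benchmark AUC: {auc_benchmark:.3f}  Candidate AUC: {auc_candidate:.3f}")
```

If the uplift over the benchmark is marginal, the added opacity of the ML model may not be worth it for a gen 0 or gen 1 deployment.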

Validators should be aware that many practitioners still treat ensemble bagging and boosting techniques as complex black-box algorithms and simply run the models using pre-built libraries. In the industry today, Random Forest is one of the most common bagging techniques and Gradient Boosting (GBM) is one of the most common boosting techniques. While both have demonstrated advantages, validators should strive to identify which technique would be more effective for a given use case. A random forest helps reduce variance and handles overfitting relatively well; a GBM reduces both bias and variance but tends to overfit significantly if not carefully tuned.
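
To make that trade-off concrete, the following hedged sketch fits both learners on the same synthetic data and compares the train/test AUC gap as a rough overfitting signal; the data and hyperparameters are illustrative assumptions:

```python
# Hedged sketch: contrast a bagging learner (random forest) with a
# boosting learner (GBM), using the train/test AUC gap as a rough
# overfitting signal. Synthetic data and illustrative settings only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [
    ("Random Forest", RandomForestClassifier(n_estimators=300, random_state=0)),
    ("GBM", GradientBoostingClassifier(n_estimators=300, random_state=0)),
]:
    model.fit(X_tr, y_tr)
    auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
    auc_te = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: train AUC {auc_tr:.3f}, test AUC {auc_te:.3f}, "
          f"gap {auc_tr - auc_te:.3f}")
```

A large train/test gap for one learner but not the other is a useful prompt for the validator to challenge the technique choice for the use case.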

On the data assessment side, apart from the standard tests for data sufficiency, accuracy, validity, relevance, stability, and completeness, validators should insist on samples generated from seeds shared by the model developers. Replication of models remains critical to model validation, even for ML-based models. Although many ML techniques are robust to missing values and outliers, results can still be skewed by these data anomalies. Validation should assess their impact even when the chosen ML technique tolerates them.
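
A minimal sketch of both checks, assuming the developer shares the sampling seed and that a simple 99th-percentile trim is an acceptable stand-in for outlier treatment (both are illustrative assumptions):

```python
# Hedged sketch of two replication checks: (1) regenerate the
# developer's sample from a shared seed, and (2) quantify how extreme
# values shift a summary statistic. Names and data are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2024)  # seed assumed shared by developer
df = pd.DataFrame({
    "utilization": rng.lognormal(mean=-1.0, sigma=0.8, size=10_000),
    "default": rng.integers(0, 2, size=10_000),
})

# (1) The same seed and the same sampling call should reproduce the
#     development sample exactly; a mismatch is a replication finding.
dev_sample = df.sample(frac=0.5, random_state=2024)
val_sample = df.sample(frac=0.5, random_state=2024)
assert dev_sample.index.equals(val_sample.index)

# (2) Compare a statistic with and without extreme values.
cap = df["utilization"].quantile(0.99)
trimmed = df[df["utilization"] <= cap]
print(f"Mean utilization: raw {df['utilization'].mean():.3f}, "
      f"trimmed at p99 {trimmed['utilization'].mean():.3f}")
```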

Validators, as in earlier practice, should critically analyze the variable selection process, especially in view of fair lending requirements for underwriting models. Additionally, adequate back-tests are advised to evaluate adverse impact across different sub-groups, using reject inference data to avoid injecting bias into the model. Increased scrutiny of interaction variables is also important for improving interpretability.
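
By way of example, the sketch below computes approval rates by subgroup and an adverse-impact ratio. The 0.8 cutoff echoes the commonly cited four-fifths rule; the data, column names, and threshold are purely illustrative assumptions:

```python
# Hedged sketch of a subgroup back-test: compare approval rates across
# a protected attribute and compute an adverse-impact ratio.
import pandas as pd

scored = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "A", "B", "A", "B"],
    "approved": [1, 1, 0, 1, 1, 0, 1, 0],
})

rates = scored.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"Adverse impact ratio: {impact_ratio:.2f} "
      f"({'flag for review' if impact_ratio < 0.8 else 'within 4/5 rule'})")
```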

In terms of model testing and monitoring, re-estimating the model on different samples, for example via k-fold cross-validation, is key to maintaining stability of results. In addition, ensemble models should be monitored differently than traditional models. Specifically (see the sketch after this list):

  • Accuracy, recall, and coverage (among others) should be monitored, with confirmation that the chosen metrics are applicable to the use case
  • Stability should be re-assessed after the model is re-fit on new performance data
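
A minimal sketch of both points, assuming a scikit-learn workflow with synthetic data and an illustrative fold count of five:

```python
# Hedged sketch: k-fold cross-validation for stability of results,
# reporting accuracy and recall as candidate monitoring metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3000, n_features=15, random_state=7)
model = RandomForestClassifier(n_estimators=200, random_state=7)

for metric in ("accuracy", "recall"):
    scores = cross_val_score(model, X, y, cv=5, scoring=metric)
    # A small standard deviation across folds suggests stable results.
    print(f"{metric}: mean {scores.mean():.3f}, std {scores.std():.3f}")
```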

Additional Considerations

Interpretability and transparency are also important for credit models. Several contemporary approaches can increase the transparency of these models and thus allow users to interpret them for fair lending and accuracy purposes. These include individual conditional expectation (ICE) plots, decision tree proxy models, K local interpretable model-agnostic explanations (K-LIME), random forest feature importance, and leave-one-covariate-out (LOCO) feature importance. These approaches are extremely useful in the credit model context, and understanding them and their applicability is important for credit model validators.
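
For instance, two of these approaches, ICE plots and a permutation-based variant of feature importance, can be sketched with scikit-learn's inspection module; the synthetic data, feature index, and settings are assumptions for illustration:

```python
# Hedged sketch: an ICE plot and permutation feature importance for a
# fitted credit-style classifier. Synthetic data, illustrative settings.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X, y)

# ICE: one curve per record showing how the prediction responds to feature 0.
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="individual")
plt.show()

# Permutation importance: the drop in score when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```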

While we have only cursorily touched on a few key tips and tricks of the trade, over the next few months we plan to continue this series of blog posts on some of the important topics mentioned here. We would love to hear from you; if you have specific topics or areas of interest you would like us to discuss, please feel free to comment and share. It's an exciting time ahead for the new generation of credit model developers and validators.

References:

  1. “Guidance on Model Risk Management,” Board of Governors of the Federal Reserve System, SR 11-7, April 4, 2011. Access at: https://www.federalreserve.gov/supervisionreg/srletters/sr1107.pdf.
  2. “Guidance on Model Risk Management,” Board of Governors of the Federal Reserve System, SR 11-7, April 4, 2011. Access at: https://www.federalreserve.gov/supervisionreg/srletters/sr1107.pdf.
  3. “Supervisory Guidance on Model Risk Management,” Office of the Comptroller of the Currency, OCC Bulletin 2011-12, April 4, 2011. Access at: https://occ.gov/news-issuances/bulletins/2011/bulletin-2011-12.html.
