Banks are scrambling to meet IFRS 9 guidelines and are setting out to implement various ECL estimation methodologies and models. But a topic that hasn't been given enough attention is the need for governance of these models and the attendant model risk management framework that must be set up to lend credibility to the model estimates. IFRS 9 is the new accounting standard for recognition and measurement of financial instruments, replacing IAS 39. Several banks are planning to perform a parallel run by Q1 2017; however, in many cases, model governance finds only a cursory mention in the roadmap adopted by banks. This blog touches upon the need for validation of models and why model risk governance has become paramount in view of the new guidelines.
The need for a robust Model Risk Management Framework
Our earlier blogs touched upon how Basel models can be leveraged to some extent in a bank's IFRS 9 efforts, albeit with significant add-ons and enhancements. In contrast with the Basel II rules, which call for the use of through-the-cycle (TTC) probabilities of default (PDs) and downturn (DT) loss-given-default rates (LGDs) and exposures at default (EADs), IFRS 9 requires entities to use point-in-time (PIT) projections to calculate the lifetime expected credit loss (ECL). By accounting for the current state of the credit cycle, PIT measures closely track the variations in default and loss rates over time. Entities are required to recognize an allowance for either 12-month or lifetime ECLs, depending on whether there has been a significant increase in credit risk since initial recognition (Stage 2 and Stage 3 require lifetime ECL computation). In past publications, Aptivaa has explained the concepts of lifetime expected loss and its components (Demystifying PD Terminologies, Impairment Modelling).
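The 12-month versus lifetime distinction above can be illustrated with a simplified ECL calculation. This is a minimal sketch, assuming marginal annual PIT PDs, LGDs, EADs and discount factors per year; the formula ECL = Σ PDₜ × LGDₜ × EADₜ × DFₜ is a common simplification, and all figures below are hypothetical, not drawn from any actual implementation.

```python
# Hedged sketch: 12-month vs lifetime ECL under a simplified per-year
# decomposition. Real IFRS 9 engines are considerably richer (monthly
# grids, prepayments, scenario weighting, etc.).

def expected_credit_loss(pds, lgds, eads, discount_factors, horizon=None):
    """ECL over `horizon` years (None = full lifetime).

    pds are marginal (not cumulative) annual default probabilities.
    """
    terms = list(zip(pds, lgds, eads, discount_factors))
    if horizon is not None:
        terms = terms[:horizon]
    return sum(pd * lgd * ead * df for pd, lgd, ead, df in terms)

# Hypothetical 3-year facility.
pds  = [0.02, 0.03, 0.04]                    # marginal annual PIT PDs
lgds = [0.45, 0.45, 0.45]
eads = [1_000_000, 900_000, 800_000]         # amortizing exposure
dfs  = [0.95, 0.90, 0.86]                    # effective-interest-rate discounting

ecl_12m  = expected_credit_loss(pds, lgds, eads, dfs, horizon=1)  # Stage 1
ecl_life = expected_credit_loss(pds, lgds, eads, dfs)             # Stage 2/3
print(round(ecl_12m), round(ecl_life))
```

A facility migrating from Stage 1 to Stage 2 would move from the first figure to the (larger) second one, which is exactly the cliff effect the staging rules create.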
The lifetime expected loss calculations required under IFRS 9 will call for a new suite of IFRS 9 models, separate from the Basel IRB models. Such a suite of models would require validation under a robust Model Risk Management and Governance Framework, with the associated processes in place around the application of expert judgment. BCBS, in its consultative document 'Guidance on accounting for expected credit losses', highlights the importance of independent validation, with clear roles and responsibilities, to effectively validate model inputs, design and output.
"Banks should establish an overarching governance framework over the model validation process, including the appropriate organizational structures and control mechanisms, to ensure that the models are able to continue to generate accurate, consistent and predictive estimates." - Guidance on accounting for expected credit losses, BCBS
Under the current Basel framework, many banks lack a formal model governance structure, embedded in a robust risk management framework, that addresses the basic practical challenges of model risk management. There is a need for sound practices covering model governance, including policies and controls; model development, implementation and use; and model validation. A typical model risk management framework covers the following components.
In 2011, the US Federal Reserve led the way by issuing SR 11-7 ('Supervisory Guidance on Model Risk Management', April 4, 2011), with several of its principles readily adaptable for IFRS 9 model risk governance. While a full treatment of model risk management is beyond the scope of this blog, it aims to highlight some practical aspects of model risk management.
Principle 5 of the BCBS 'Guidance on accounting for expected credit losses' is very much aligned with the SR 11-7 text, and one can clearly see the similarities with respect to the scope of the validation exercise.
However, SR 11-7 goes a little further and provides more practical guidance on the range of validation activities that need to be covered under a validation framework.
The following sections elaborate on how these guidance/principles can be interpreted.
Establish a model inventory
SR 11-7 states that the term model "refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates". The definition of model also covers quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature.
This is a useful definition to use for identifying IFRS 9 models as well. The complexity of lifetime expected loss calculations is much greater than in the extant Basel environment; for instance, outputs from nonparametric models will be used in the computation of ECL. A substantial increase in the number of models can also be envisaged in the IFRS 9 world. It wouldn't be surprising to find many models that are spreadsheet-based and used by just a handful of users. The important aspect is to first identify the models, then apply validation routines proportionate to the scope and materiality of each specific model. Depending on the model's lifecycle stage (e.g., post-development, implementation) and its materiality, the depth of model validation and review can vary. Banks and financial institutions should adopt a framework that is fully transparent, with full auditability of model definitions and the model inventory, to monitor model risk and maintain transparency.
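As a concrete illustration of what a transparent, auditable model inventory might record, the sketch below defines a minimal inventory entry. All field names, model IDs and values are hypothetical; neither SR 11-7 nor IFRS 9 prescribes a specific schema.

```python
# Hypothetical sketch of a minimal model inventory record. Fields are
# illustrative attributes a bank might track to keep model definitions
# auditable and to scope validation by materiality.
from dataclasses import dataclass

@dataclass
class ModelInventoryEntry:
    model_id: str
    name: str
    purpose: str            # e.g. "PD term structure", "LGD", "EAD", "macro overlay"
    owner: str
    materiality: str        # e.g. "high", "medium", "low"
    lifecycle_stage: str    # e.g. "development", "implemented", "retired"
    last_validated: str     # ISO date of last independent validation
    is_spreadsheet: bool = False   # spreadsheet models are easy to miss

inventory = [
    ModelInventoryEntry("M-001", "Corporate PD term structure", "PD term structure",
                        "Credit Risk Modelling", "high", "implemented", "2016-09-30"),
    ModelInventoryEntry("M-014", "Retail overdraft CCF", "EAD",
                        "Retail Risk", "low", "development", "2016-06-15",
                        is_spreadsheet=True),
]

# Simple query: which high-materiality models warrant the deepest validation?
high_materiality = [m.model_id for m in inventory if m.materiality == "high"]
print(high_materiality)  # ['M-001']
```

Even a structure this simple makes it possible to answer the governance questions that matter: which models exist, who owns them, and when they were last validated.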
Establish model materiality and requisite validation requirement
The nature and scope of the validation should depend on the complexity and materiality of the portfolios and the models being used. There should be a distinct and transparent model materiality classification framework. This could be based on simple asset-size thresholds, RWA-based guidelines, or model purpose. The nature and scope of validation will then depend on the materiality of the models in question.
SR 11-7 says that "where models and model output have a material impact on business decisions, including decisions related to risk management and capital and liquidity planning, ... a bank's model risk management framework should be more extensive and rigorous".
An illustrative model materiality classification and the corresponding validation requirements are provided below.
Establish governance process for all IFRS 9 models
Shown below is a typical validation process for effective model risk management. The process and its rigor would change based on model materiality. Note, however, that while an annual independent validation is required, there should also be a validation and review before the model goes into implementation. To meet tight deadlines, it is important that model development teams obtain sign-offs from validation teams at intermediate stages, and that a strong process is in place for such interactions between the development and validation teams. For instance, right at model development initiation, the terms of reference could state at a high level the purpose of the model and the methodology options to be explored; for PD term structure computation, for example, the terms of reference could state that the binomial method would be explored, and why. Similarly, at the data preparation stage, another sign-off could be obtained from the validation team, so that it can flag issues at an early stage, be it with data quality or with completeness of data. Shown here is a typical validation cycle that can be adopted by financial institutions.
Identify statistical tests and key areas of emphasis for each model
Based on the impact of IFRS 9 on the components of expected credit loss estimation, the table below summarizes the typical methods for validation and their relevance across critical review parameters:
Identify the validation and performance statistics for various models
Shown below are some industry practices around the various performance statistics used to gauge model performance.
Performance measures for Probability of Default models
Under a PIT PD approach, PDs are estimated taking all available cyclical and non-cyclical, systematic and obligor-specific information into account. Industry-specific factors and macroeconomic indicators need to be utilized to increase the forward-looking predictive power of the PDs, making them more PIT. Such an approach requires frequent re-rating of obligors to capture changes in their PDs due to all relevant factors, including cyclical ones. Validation in this scenario could be based on an early-warning trigger framework that is itself forward-looking in nature. Over time, the bank needs to monitor whether obligors' risk ratings are being upgraded or downgraded effectively enough to capture their PIT PDs. Such monitoring can be done using risk-rating migration rates, and there are various migration/mobility measures to quantify the degree of such migration: the more PIT the PDs, the higher the rating migration observed as the business cycle moves. A pure PIT approach, however, would be an ambitious effort, and a hybrid approach between TTC and PIT PDs would be more practical to implement.
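One widely used mobility measure that can serve this monitoring purpose is the Shorrocks mobility index of a rating transition matrix. The sketch below is illustrative only: the 3-grade matrices are hypothetical, and the index is one of several migration metrics a bank might choose.

```python
# Illustrative sketch: quantifying rating migration with the Shorrocks
# mobility index M(P) = (K - trace(P)) / (K - 1) for a K x K one-period
# transition matrix P. Higher values mean more migration, consistent with
# more point-in-time (PIT) rating behaviour.

def shorrocks_mobility(transition_matrix):
    """Return the Shorrocks mobility index of a square transition matrix."""
    k = len(transition_matrix)
    trace = sum(transition_matrix[i][i] for i in range(k))
    return (k - trace) / (k - 1)

# Hypothetical 3-grade annual transition matrices (rows sum to 1).
ttc_like = [[0.95, 0.04, 0.01],   # sticky ratings: little migration
            [0.03, 0.94, 0.03],
            [0.01, 0.04, 0.95]]
pit_like = [[0.80, 0.15, 0.05],   # responsive ratings: more migration
            [0.10, 0.75, 0.15],
            [0.05, 0.20, 0.75]]

print(shorrocks_mobility(ttc_like))  # low mobility
print(shorrocks_mobility(pit_like))  # higher mobility, more PIT-like
```

Tracking this index over time, against a benchmark, gives the validator a simple quantitative check on whether re-rating is responsive enough to the cycle.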
Performance measures for Loss Given Default models
Under the current Basel framework, banks are required to calculate downturn estimates of loss given default. Such downturn estimates help stabilize RWA by making it less susceptible to changes in the underlying credit cycle. Under the IFRS 9 framework, banks are required to calculate best-estimate measures based on current risk, which in other words implies calculating point-in-time estimates. Such a PIT LGD estimate accounts for all relevant information, including the current state of the credit cycle as well as specified macroeconomic or credit-factor scenarios in the future. Also, under IFRS 9, historical recovery cash flows are discounted at the effective interest rate, compared with the current practice of using the contractual rate. Under such a scenario, LGD estimates can be validated to check the following:
Amongst all the methods for computation of LGD estimates, workout LGD is the most widely used method to build LGD models, as pointed out in an earlier publication by Aptivaa ('Cash Shortfall & LGD - Two Sides of the Same Coin'). Before a detailed validation strategy can be framed, it is important to be consistent in the definition of loss and default (depending on the portfolio and product type). Below are some methodologies based on which the validation review can be performed.
1. Scatter Plots: Scatter plots can be useful to examine the relationship between the expected and observed losses. Such plots can reveal anomalies such as extreme values (indicating validation base clean-up issues) and also show how the estimated and observed values move together. Greater concentration along the diagonal shows accuracy, while deviations along the axes can be a cause for concern requiring review of the LGD model parameters. A scatter plot is an example of a summary plot used as a 'pulse check' to recognize any inherent problem at a glance.
2. Confusion Matrix: Confusion matrices are designed to look at all the combinations of actual and expected classifications within each LGD bucket. This could be based on count, EAD or observed loss. In practice, a common LGD scale typically ranges from 0% to 100% with not more than ten risk grades. In this blog, all expected and observed LGDs are discretized into six bucket ratings from LR1 to LR6.
Such a table gives an idea of how the observed losses are classified by the model-predicted LGD. Confusion matrices can be summarized using measures such as 'percentage match' and 'mean absolute deviation' to arrive at one figure against which performance can be evaluated against internal/industry benchmarks. The advantage of a measure like mean absolute deviation is that it captures the magnitude of the deviation between the actual and predicted numbers. In mathematical terms, the mean absolute deviation can be written as MAD = (1/N) Σ |expected LGD - observed LGD|, where N is the number of observations.
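The two summary measures can be sketched in a few lines. The LGD values and LR bucket assignments below are hypothetical, chosen only to show the mechanics.

```python
# Sketch of the two confusion-matrix summary measures: 'percentage match'
# (share of exposures whose predicted and observed LGD bucket agree) and
# 'mean absolute deviation' (average |expected LGD - observed LGD|).
# Bucket labels (LR1..LR6) follow the scale used in the text; data is made up.

def percentage_match(expected_buckets, observed_buckets):
    """Fraction of cases landing on the confusion matrix diagonal."""
    matches = sum(e == o for e, o in zip(expected_buckets, observed_buckets))
    return matches / len(expected_buckets)

def mean_absolute_deviation(expected_lgds, observed_lgds):
    """MAD = (1/N) * sum(|expected LGD - observed LGD|)."""
    n = len(expected_lgds)
    return sum(abs(e - o) for e, o in zip(expected_lgds, observed_lgds)) / n

expected = [0.10, 0.25, 0.40, 0.60, 0.85]
observed = [0.12, 0.20, 0.55, 0.58, 0.90]
buckets_e = ["LR1", "LR2", "LR3", "LR4", "LR6"]
buckets_o = ["LR1", "LR2", "LR4", "LR4", "LR6"]

print(percentage_match(buckets_e, buckets_o))                 # 0.8
print(round(mean_absolute_deviation(expected, observed), 3))  # 0.058
```

Both figures would then be compared against internal or industry benchmarks, as the text notes.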
3. Expected Cash Shortfall: Expected cash shortfall can be defined as the difference between the total losses expected and those observed. The difference is expressed as a percentage of total observed loss, allowing comparison between different portfolios.
To understand the expected cash shortfall, we look at the sample confusion matrix by observed loss above. In the table, the figure US$ 7,267,809 is derived by multiplying the observed LGDs by the EADs. If we instead use the expected LGDs, this figure becomes US$ 54,783,324, giving a large expected cash shortfall of -653%, which implies significant over-prediction. The expected cash shortfall method thus gives an idea of the extent of conservatism or underestimation in the LGD model, and should be validated against established benchmarks.
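The arithmetic behind that figure is simple enough to show directly, using the two totals quoted from the sample confusion matrix above.

```python
# Sketch of the expected cash shortfall measure: the gap between total
# observed loss (observed LGD x EAD) and total expected loss (expected
# LGD x EAD), expressed as a percentage of total observed loss.

def expected_cash_shortfall(observed_loss, expected_loss):
    """(total observed loss - total expected loss) / total observed loss."""
    return (observed_loss - expected_loss) / observed_loss

observed_loss = 7_267_809    # sum of observed LGD x EAD (figure from the text)
expected_loss = 54_783_324   # sum of expected LGD x EAD (figure from the text)

shortfall = expected_cash_shortfall(observed_loss, expected_loss)
print(f"{shortfall * 100:.1f}%")  # about -653.8%: significant over-prediction
```

A large negative value signals conservatism (over-prediction); a large positive value would signal the more dangerous case of underestimation.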
4. Loss Capture Ratio: The 'loss capture ratio' gives a measure of the rank-ordering capability of LGD models on the basis of how well they capture the portfolio's final observed loss amount. The loss capture ratio is derived from the 'loss capture curve', which is defined as the cumulative observed loss amount captured while traversing from the highest expected LGD to the lowest.
To plot the loss capture curve, transactions are first sorted by the LGD model's raw LGD values between 0 and 1, from highest LGD to lowest. The cumulative loss-captured percentage is then calculated from left to right (highest expected LGD to lowest) by accumulating the observed loss amount (EAD times observed LGD) over the portfolio's total observed loss. The loss capture ratio is defined as the ratio of the area between the model loss capture curve and the random loss capture curve (the 45-degree line representing a completely random model) to the area between the ideal loss capture curve and the random loss capture curve. Similar to the accuracy ratio, it measures how close the model is to a perfect model that estimates losses with 100% accuracy.
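This construction can be sketched as follows. The portfolio below is hypothetical, and the trapezoidal area calculation is one reasonable way to implement the curve-area definition; implementations may differ in interpolation details.

```python
# Minimal sketch of the loss capture ratio. Transactions are sorted from
# highest to lowest expected LGD; the ratio compares the area under the
# model's loss capture curve with the areas under the random (45-degree)
# and ideal curves, as described in the text.

def loss_capture_ratio(expected_lgd, observed_lgd, ead):
    losses = [l * e for l, e in zip(observed_lgd, ead)]   # observed loss per txn
    total = sum(losses)

    def curve_area(order):
        # Trapezoidal area under the cumulative loss-captured curve,
        # with the x-axis (fraction of transactions) scaled to [0, 1].
        cum, area, prev = 0.0, 0.0, 0.0
        step = 1.0 / len(order)
        for i in order:
            cum += losses[i] / total
            area += (prev + cum) / 2 * step
            prev = cum
        return area

    idx = range(len(losses))
    model_order = sorted(idx, key=lambda i: expected_lgd[i], reverse=True)
    ideal_order = sorted(idx, key=lambda i: losses[i], reverse=True)
    random_area = 0.5   # area under the 45-degree line of a random model
    return ((curve_area(model_order) - random_area)
            / (curve_area(ideal_order) - random_area))

# Hypothetical portfolio: the model rank-orders losses fairly well.
exp_lgd = [0.9, 0.7, 0.5, 0.3, 0.1]
obs_lgd = [0.8, 0.75, 0.35, 0.40, 0.05]
eads    = [100.0, 100.0, 100.0, 100.0, 100.0]
print(round(loss_capture_ratio(exp_lgd, obs_lgd, eads), 3))  # close to 1
```

A ratio of 1 means the model sorts transactions exactly as the observed losses would; values near 0 indicate no better rank-ordering than chance.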
5. Correlation Analysis: The model validation report for LGD should provide a correlation analysis of the estimated LGD against the actual LGD. This correlation analysis is an important measure of a model's usefulness. Correlation-based metrics quantify the degree of statistical relationship between predicted and observed values.
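As a sketch of such a check, the Pearson correlation between estimated and observed LGDs can be computed directly (rank correlations such as Spearman's are an equally valid choice); the LGD figures below are illustrative only.

```python
# Illustrative correlation-based performance check: Pearson correlation
# between estimated and observed LGDs, computed from first principles.

def pearson_correlation(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov   = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

estimated = [0.15, 0.30, 0.45, 0.60, 0.80]   # hypothetical model estimates
observed  = [0.10, 0.35, 0.40, 0.65, 0.75]   # hypothetical realized LGDs
print(round(pearson_correlation(estimated, observed), 3))  # high: model tracks outcomes
```

A high correlation supports the model's usefulness, though, as with the other measures, it should be read alongside level-accuracy checks such as MAD, since a model can correlate well while being systematically biased.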
Performance measures for Exposure at Default models
Similar to LGD, EAD models can be validated using scatter plots and confusion matrices. Most of the backtesting for EADs is done at a product or industry level.
Scatter plots can be useful for examining the relationship between the expected and observed EADs. Such plots can reveal anomalies such as extreme values (indicating validation base clean-up issues) and also show how the estimated and observed values move together. As mentioned in our earlier EAD blog, there are some peculiarities with respect to EAD modelling, such as the treatment of outliers, which could potentially lead to negative EADs being predicted, or to EADs appearing above the granted limit amounts, i.e., greater than 100%.
Similar to LGDs, confusion matrices can be used for EADs as well, by bucketing CCFs and LEQs into grades and performing a notching analysis on the basis of these grades. Some models link borrower characteristics to EADs using regression methods, in which case standard regression statistics are tested.
Performance measures for Macroeconomic Models
Conventional macroeconomic forecasting methods use estimated parameter values and intercept terms to produce first-cut forecasts of the relevant endogenous factors. These are then adjusted for subjective/exogenous factors based on available evidence and consensus judgment; such exogenous factors reflect market speculation and global uncertainty. The initial forecasts are based on time-series techniques (ARIMA models, exponential smoothing, etc.), regression analysis, or an ensemble approach. Validation of such macroeconomic forecasts can be based on forecast accuracy, using performance measures derived from forecasting errors. Some of the commonly used measures are:
Measures of forecasting error for macro-economic forecasting
1. MAPE: The MAPE (Mean Absolute Percent Error) measures the size of the error in percentage terms. It is calculated as the average of the unsigned percentage error, as shown in the example below. MAPE gives a measure in % terms which makes it easy to understand. It should be noted that MAPE is scale sensitive and can take extreme values when the actual volumes are low.
2. MAD: The MAD (Mean Absolute Deviation) measures the size of the error in units. It is calculated as the average of the unsigned errors, as shown in the example below. The MAD is a good statistic to use when analyzing the error for a single item. However, if you aggregate MADs over multiple items you need to be careful about high-volume products dominating the results.
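Both measures are straightforward to compute; the quarterly actual-versus-forecast figures below are hypothetical, chosen only to show the two calculations side by side.

```python
# Illustrative computation of the two forecast-error measures above,
# applied to a hypothetical macroeconomic series (e.g. GDP growth, %).

def mape(actuals, forecasts):
    """Mean Absolute Percent Error: average of |actual - forecast| / actual."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

def mad(actuals, forecasts):
    """Mean Absolute Deviation: average of |actual - forecast| in units."""
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

actuals   = [2.0, 2.5, 3.0, 2.8]   # hypothetical quarterly actuals
forecasts = [2.2, 2.4, 2.7, 3.0]   # hypothetical model forecasts

print(f"MAPE: {mape(actuals, forecasts):.1%}")   # percentage terms
print(f"MAD:  {mad(actuals, forecasts):.2f}")    # same units as the series
```

The contrast noted in the text is visible here: MAPE divides by the actual value, so it blows up when actuals approach zero, while MAD stays in the units of the series and needs care when aggregated across items of very different scale.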
Validation of the macroeconomic factors may also include a review of the correlation between macroeconomic indicators and historical losses. Based on an evaluation of such correlation trends, only those macroeconomic factors that show the closest association with historical losses should be retained.
The activity of Model Validation will play an increasingly important role under IFRS 9 with respect to identification of model risk stemming from data, methods, assumptions, calibration, documentation, implementation, usage and governance. The estimation of lifetime expected loss itself is an output of many moving parts working together in a complex macro-economic driven and volatile environment. Modeling for ECL estimation would lead to a significant increase in the complexity and number of underlying models for capital and expected credit loss estimation. Through effective validation, it is important to identify and highlight any model misspecifications or improper use of model outputs so that timely action can be taken to avoid business impact. Validation under an effective model risk management framework would be of prime importance for implementation as per IFRS 9 guidelines.