4 Model risk management principles for banks

Principle 1 – Model identification and model risk classification

Firms should have an established definition of a model that sets the scope for MRM, a model inventory and a risk-based tiering approach to categorise models to help identify and manage model risk.

Principle 1.1 Model definition

A formal definition of a model sets the scope of an MRM framework and promotes consistency across business units and legal entities.

  1. a) Firms should adopt the following definition of a model as the basis for determining the scope of their MRM frameworks:
    A model is a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into output. The definition of a model includes input data that are quantitative and/or qualitative in nature or expert judgement-based, and output that is quantitative or qualitative.
  2. b) Notwithstanding the above definition, where material deterministic quantitative methods such as decision-based rules or algorithms that are not classified as a model have a material bearing on business decisions[5] and are complex in nature, firms should consider whether to apply the relevant aspects of the MRM framework to these methods.
  3. c) In general, the PRA expects the implementation and use of deterministic quantitative methods not classified as models to be subject to sound and clearly documented management controls.

Footnotes

  • 5. Business decisions should be understood here as all decisions made in relation to the general business and operational banking activities, strategic decisions, financial, risk, capital and liquidity measurement, reporting, and any other decisions relevant to the safety and soundness of firms.

Principle 1.2 Model inventory

A comprehensive model inventory should be maintained to enable firms to: identify the sources of model risk; provide the management information needed for reporting model risk; and help to identify model inter-dependencies.

  1. a) Firms should maintain a complete and accurate set of information relevant to manage model risk for all of the models that are implemented for use, under development for implementation, or decommissioned.[6]
  2. b) While each line of business or legal entity may maintain its own inventory, firms should maintain a firm-wide model inventory which would help to identify all direct and indirect model inter-dependencies in order to get a better understanding of aggregate model risk.
  3. c) The types of information the model inventory should capture include:
    1. (i) the purpose and use of a model. For example, the relevant product or portfolio, the intended use of the model with a comparison to its actual use, and the model operating boundaries[7] under which model performance is expected to be acceptable; 
    2. (ii) model assumptions and limitations. For example, risks not captured in the model and limitations in the data used to calibrate the model;
    3. (iii) findings from validation. For example, indicators of whether models are functioning properly, the dates when those indicators were last updated, any outstanding remediation actions; and
    4. (iv) governance details. For example, the names of individuals responsible for validation, the dates when validation was last performed, and the frequency of future validation.
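
For illustration only, the inventory fields in (c) and the inter-dependency mapping in (b) could be captured in a simple record structure. The class and helper below are hypothetical sketches under assumed field names, not a prescribed schema; real inventories are typically database-backed.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryRecord:
    # (i) purpose and use of the model
    model_id: str
    purpose: str
    intended_use: str
    actual_use: str
    operating_boundaries: str
    # (ii) model assumptions and limitations
    assumptions: list = field(default_factory=list)
    limitations: list = field(default_factory=list)
    # (iii) findings from validation
    last_validated: Optional[date] = None
    outstanding_remediations: list = field(default_factory=list)
    # (iv) governance details
    validator: str = ""
    validation_frequency_months: int = 12
    # direct inter-dependencies: models whose output feeds this one
    upstream_models: list = field(default_factory=list)

def downstream_of(inventory, model_id):
    """IDs of models that directly consume the given model's output."""
    return [m.model_id for m in inventory if model_id in m.upstream_models]
```

Recording upstream dependencies on each record is one simple way to traverse a firm-wide inventory and surface the aggregate exposure to a single feeder model.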

Footnotes

  • 6. The rationale for decommissioning a model could help inform or improve future generations of model development, or the decommissioned model may itself be used as a challenger model.
  • 7. Operating boundaries is defined here as the sample data range (including empirical variance-covariance relationships in the multivariate case) used to estimate the parameters of a statistical model. While not a measure of model performance per se, extrapolating beyond a model's ‘operating boundaries’ (such as macroeconomic indices in shock or stressed economic conditions) should be assumed to involve increased model risk.

Principle 1.3 Model tiering

Risk-based model tiering should be used to prioritise validation activities and other risk controls through the model lifecycle, and to identify and classify those models that pose most risk to a firm's business activities, and/or firm safety and soundness.

  1. a) Firms should implement a consistent, firm-wide model tiering approach that assigns a risk-based materiality and complexity rating to each of their models.
  2. b) Model materiality should consider both:
    1. (i) quantitative size-based measures. For example, exposure, book or market value, or number of customers to which a model applies; and
    2. (ii) qualitative factors relating to the purpose of the model and its relative importance to informing business decisions, and considering the potential impact upon the firm’s solvency and financial performance.
  3. c) The assessment of a model's complexity should consider the risk factors that impact a model’s inherent risk[8] within each component of the modelling process, eg the nature and quality of the input data, the choice of methodology (including assumptions), the requirements and integrity of implementation, and the frequency and/or extensiveness of use of the model. Where necessary (in particular with the use of newly advanced approaches or technologies), the complexity assessment may also consider risk factors related to:
    1. (i) the use of alternative and unstructured data,[9] and
    2. (ii) measures of a model's interpretability,[10] explainability,[11] transparency, and the potential for designer or data bias[12] to be present.
  4. d) The firm-wide model tiering approach should be subject to periodic validation, or other objective and critical review by an informed party to ensure the continued relevance and accuracy of model tiering. Validation or review should include: an assessment of the relevance and availability of the information used to determine model tiers; and the accurate recording and maintenance of model tiering in the model inventory.
  5. e) Individual model tier assignments (including materiality and complexity ratings) of models should be independently reassessed as part of the model validation and revalidation process, and should include a review of the accuracy and relevance of the information used to assign model tiers.
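
One way to operationalise (a)–(c) is a simple scoring grid that combines the materiality and complexity ratings into a tier. The three-level scale and cut-offs below are hypothetical assumptions for illustration; firms calibrate their own scales, weights, and thresholds.

```python
def assign_tier(materiality: str, complexity: str) -> int:
    """Combine risk-based materiality and complexity ratings into a model tier.

    Tier 1 attracts the most intensive and frequent validation. The mapping
    below is an illustrative assumption, not a prescribed scheme.
    """
    score = {"low": 0, "medium": 1, "high": 2}
    total = score[materiality] + score[complexity]
    if total >= 3:
        return 1  # highest-risk tier
    if total == 2:
        return 2
    return 3
```

Under this sketch a high-materiality, low-complexity model still lands in tier 2, reflecting that either dimension alone can raise the risk ranking.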

Footnotes

  • 8. Inherent risk is the risk in the absence of any management or mitigating actions to alter either the risk’s likelihood or impact.
  • 9. Data, usually unstructured and non-financial data, not traditionally used in financial modelling, including satellite imagery, telemetric or biometric data, and social-media feeds. These data are unstructured in the sense that they do not have a defined data model or pre-existing organisation.
  • 10. The ease or difficulty of predicting what a model will do, ie the degree to which the cause of a decision can be understood.
  • 11. Defined here as the degree to which the workings of a model can be understood in non-technical terms.
  • 12. When elements of a dataset (or as a result of model design) are more heavily weighted and/or represented than others, producing results that could have ethical and/or social implications.

Principle 2 – Governance

Firms should have strong governance oversight with a board that promotes an MRM culture from the top through setting a clear model risk appetite. The board should approve the MRM policy and appoint an accountable individual to assume the responsibility to implement a sound MRM framework that will ensure effective MRM practices.

Principle 2.1 Board of directors’ responsibilities

The firm-wide MRM framework should be subject to leadership from the board of directors to ensure it is effectively designed.

  1. a) The board of directors should establish a comprehensive firm-wide MRM framework that is part of its broader risk management framework and proportionate to its size and business activities; the complexity of its models; and the nature and extent of use of models.
  2. b) The framework should be designed to promote an understanding of model risk, on both an individual model basis as well as in aggregate across the firm, and should promote the management of model risk as a risk discipline in its own right. The framework should clearly define roles and responsibilities in relation to model risk across business, risk and control functions.
  3. c) The board should set a model risk appetite that articulates the level and types of model risk the firm is willing to accept. The model risk appetite should be proportionate to the nature and type of models used. Firms’ model risk appetite should include measures for:
    1. (i) effectiveness of the design and operation of the MRM framework;
    2. (ii) identifying models and approving their use for decision making;
    3. (iii) limits on model use, exceptions and overall compliance;
    4. (iv) thresholds for acceptable model performance and tolerance for errors; and
    5. (v) effectiveness of use of model risk mitigants and oversight of the use of expert judgement.
  4. d) The board should receive regular reports on the firm's model risk profile against its model risk appetite. Reports should include qualitative measures describing: the effectiveness of the control framework and model use; the significant model risks arising either from individual models or in aggregate; significant changes in model performance over time; and the extent of compliance with the MRM framework.
  5. e) The board is expected to provide challenge to the outputs of the most material models, and to understand: the capabilities and limitations of the models; the model operating boundaries under which model performance is expected to be acceptable; the potential impact of poor model performance; and the mitigants in place should model performance deteriorate.

Principle 2.2 SMF accountability for model risk management framework

An accountable SMF should be empowered to have overall oversight to ensure the effectiveness of the MRM framework.

  1. a) Firms should identify a relevant SMF(s) most appropriate within the firm's organisational structure and risk profile to assume overall responsibility for the MRM framework, including its implementation, execution, and maintenance. The relevant SMF(s) should be the most senior individual with responsibility for the risks resulting from models operated by the firm. Firms should ensure the Statement of Responsibilities of the accountable SMF(s) reflects the specific accountability for overall MRM.
  2. b) The accountable SMF(s)’s responsibilities regarding MRM may include:
    1. (i) establishing policies and procedures to operationalise the MRM framework and ensure compliance;
    2. (ii) assigning the roles and responsibilities of the framework;
    3. (iii) ensuring effective challenge;
    4. (iv) ensuring independent validation;
    5. (v) evaluating and reviewing model results and validation and internal audit reports;
    6. (vi) taking prompt remedial action when necessary to ensure the firm’s aggregate model risk remains within the board approved risk appetite; and
    7. (vii) ensuring sufficient resourcing, adequate systems, and infrastructure to ensure data and system integrity, and effective controls and testing of model outputs to support effective MRM practices.
  3. c) Consistent with other risk disciplines, the identification of a relevant SMF(s) with overall responsibility for MRM does not prejudice the respective responsibilities of business, risk and control functions in relation to the development and use of individual models. Model owners, model developers, and model users remain responsible for ensuring that individual models are developed, implemented, and used in accordance with the firm’s MRM framework, model risk appetite, and limitations of use.

Principle 2.3 Policies and procedures

Firms should have comprehensive policies and procedures that formalise the MRM framework and ensure effective and consistent application across the firm.

  1. a) Firms should have clearly documented policies and procedures that formalise the MRM framework and support its effective implementation. Firm-wide policies should be approved by the board and reviewed on a regular basis to ensure their continued relevance for: the firm's model risk appetite and profile; the economic and business environment and the regulatory landscape the firm operates in; and new and advancing technologies the firm is exposed to.
  2. b) Policies should cross-reference and align with other relevant parts of the firm's broader risk management policies, and align with the expectations set out in this supervisory statement. Compliance with internal policies and applicable regulatory requirements/expectations should be assessed and reported to the board of directors on a regular basis.
  3. c) Firms should establish policies and procedures across all aspects of the model lifecycle to ensure that models are suitable for their proposed usage at the outset and on an ongoing basis and to enable model risks to be identified and addressed on a timely basis. At a minimum, the policies and procedures should cover:
    1. (i) the definitions of a model and model risk, and any external taxonomies used to support the model identification process;
    2. (ii) the model tiering approach, including: the data sources used in the model tiering approach; how model tiers are used to determine the scope, intensity and frequency of model validation; the roles and responsibilities for assessing model materiality and complexity; the frequency with which model tiering is re-performed; and the process for the approval of model tiering;
    3. (iii) standards for model development, including: model testing procedures; model selection criteria; documentation standards; model performance assessment criteria and thresholds; and supporting system controls;
    4. (iv) data quality management procedures, including: the rules and standards for data quality, accuracy and relevance; and the specific risk controls and criteria applicable to reflect the higher level of uncertainty associated with use of alternative or unstructured data, or information sources;
    5. (v) standards for model validation, including: clear roles and responsibilities; the validation procedures performed; how to determine prioritisation, scope, and frequency of re-validation; processes for effective challenge and monitoring of the effectiveness of the validation process; and reporting of validation results and any remedial actions;
    6. (vi) standards for measuring and monitoring model performance, including: the criteria to be used to measure model performance; the thresholds for acceptable model performance; the criteria to be used to determine whether model recalibration or redevelopment is required when model performance thresholds are breached; processes for conducting root cause analyses to identify model limitations and systemic causes of performance deterioration; and processes for performing and using back-testing;
    7. (vii) the key model risk mitigants, including: the use of model adjustments and overrides to reflect use of expert-judgement; processes for restricting, prohibiting, or limiting a model’s use; how model validation exceptions are escalated;
    8. (viii) the model approval process and model change, including clear roles and responsibilities of dedicated model approval authorities, ie committee(s) and/or individual(s), the governance, validation, independent review, approval and monitoring procedures that need to be followed when a material change is made to a model; the materiality criteria to be used to identify the potential impact of prospective model changes.
  4. d) The SMF(s) with overall responsibility for the MRM framework should ensure the adequacy of board-level policies for material and complex model types is assessed and supplemented with more detailed model or risk specific procedures as necessary to deliver the firm's overall risk appetite, including bespoke:
    1. (i) standards for model development, independent model validation and procedures for monitoring the performance of material and complex model types where the control processes associated with these models substantially differ from other model types; and
    2. (ii) data quality procedures for data intensive model types to set clear roles and responsibilities for the management of the quality of data used for model development.
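
The performance-monitoring standards in c(vi) amount to comparing monitored metrics against appetite thresholds and escalating breaches. A minimal sketch, assuming a metric where higher values indicate worse performance and two hypothetical thresholds:

```python
def performance_status(metric: float, amber: float, red: float) -> str:
    """Classify one monitored metric; higher values mean worse performance."""
    if metric >= red:
        return "red"    # threshold breach: assess recalibration or redevelopment
    if metric >= amber:
        return "amber"  # deterioration: trigger root cause analysis
    return "green"

def window_counts(history, amber, red):
    """Tally statuses over a monitoring window to spot persistent deterioration."""
    counts = {"green": 0, "amber": 0, "red": 0}
    for m in history:
        counts[performance_status(m, amber, red)] += 1
    return counts
```

For example, `window_counts([0.02, 0.04, 0.08], amber=0.03, red=0.06)` returns one observation in each band; a run of amber results, even without a red breach, is the kind of trend a root cause analysis would examine.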

Principle 2.4 Roles and responsibilities

Roles and responsibilities should be allocated to staff with appropriate skills and experience to ensure the MRM framework operates effectively.

  1. a) Firms should clearly document the roles and responsibilities for each stage of the model lifecycle together with the requisite skills, experience and expertise required for the roles, and the degree of organisational independence and seniority required to perform the role effectively.
  2. b) Responsibility for model performance monitoring and the reassessment of already implemented models should be clearly defined and may be undertaken by model owners, users, or developers. The adequacy of model performance monitoring is a key area of consideration by model validators. 
  3. c) Model owners should be documented for all models. Model owners are accountable for ensuring that:
    1. (i) the model's performance is monitored against the firm’s board-approved risk appetite for acceptable model performance;
    2. (ii) the model is assigned to the correct model tier; has undergone appropriate validation in accordance with the model tier; and that all necessary information is provided to enable validation activities to take place;
    3. (iii) the model is recorded in the model inventory and information about the model is accurate and up-to-date.
  4. d) Model users should be documented for all models. Model users are accountable for ensuring that:
    1. (i) the model use is consistent with the intended purpose;
    2. (ii) known model limitations are taken into consideration when the output of the model is used.
  5. e) Model developer(s) should be documented for all models. Model developers are accountable for ensuring that the research, development, evaluation, and testing of a model (including testing procedures, model selection, and documentation) are conducted according to firms’ own standards.
  6. f) Model validation should be performed by staff:
    1. (i) that have the requisite technical expertise and sufficient familiarity with the line of business using the model, to be able to provide an objective, unbiased and critical opinion on the suitability and soundness of models for their intended use;
    2. (ii) that have the necessary organisational standing and incentives to report model limitations and escalate material control exceptions and/or inappropriate model use in a prompt and timely manner. 

Principle 2.5 Internal Audit

  1. a) Internal Audit (IA) should periodically assess both the effectiveness of the MRM framework over each component of the model lifecycle, as well as the overall effectiveness of the MRM framework and compliance with internal policies. The findings of IA’s assessments should be documented and reported to the board and relevant committees on a timely basis.
  2. b) The IA review should independently verify that:
    1. (i) internal policies and procedures are comprehensive to enable model risks to be identified and adequately managed;
    2. (ii) risk controls and validation activities are adequate for the level of model risks;
    3. (iii) validation staff have the necessary experience, expertise, organisational standing, and incentives to provide an objective, unbiased, and critical opinion on the suitability and soundness of models for their intended use and to report model limitations and escalate material control exceptions and/or inappropriate model use in a prompt and timely manner; and
    4. (iv) model owners and model risk control functions comply with internal policies and procedures for MRM, and those internal policies and procedures are in line with the expectations set out in this SS.

Principle 2.6 Use of externally developed models, third-party vendor products

  1. a) In line with PRA SS2/21 – Outsourcing and third party risk management[13] boards and senior management are ultimately responsible for the management of model risk, even when they enter into an outsourcing or third-party arrangement.
  2. b) Regarding third-party vendor models, firms should:
    1. (i) satisfy themselves that the vendor models have been validated to the same standards as their own internal MRM expectations;
    2. (ii) verify the relevance of vendor supplied data and their assumptions; and
    3. (iii) validate their own use of vendor products and conduct ongoing monitoring and outcomes analysis of vendor model performance using their own outcomes.
  3. c) Subsidiaries using models developed by their parent-group[14] may leverage the outcome of the parent-group’s validation of the model if they can:
    1. (i) demonstrate that the parent-group has implemented an MRM framework and model development and validation standards in line with the expectations set out in this SS;
    2. (ii) verify the relevance of the data and assumptions for the intended application of the model by the subsidiary; and
    3. (iii) ensure the intensity and rigour of model validation is adequate for the model tier classification determined relative to the risk profile of the subsidiary on a standalone basis.[15]

Principle 3 – Model development, implementation, and use

Firms should have a robust model development process with standards for model design and implementation, model selection, and model performance measurement. Testing of data, model construct, assumptions, and model outcomes should be performed regularly in order to identify, monitor, record, and remediate model limitations and weaknesses.

Principle 3.1 Model purpose and design

  1. a) All models should have a clear statement of purpose and design objective(s)[16] to guide the model development process. The design of the model should be suitable for the intended use, the choice of variables and parameters should be conceptually sound and support the design objectives, the calculation of parameter estimates and the underlying mathematical theory should be correct, and the assumptions of the model should be reasonable and valid.
  2. b) The choice of modelling technique should be conceptually sound and supported by published research, where available, or generally accepted industry practice where appropriate. The output of the model should be compared with the outcomes of alternative theories or approaches and benchmarks, where possible.
  3. c) Particular emphasis should be placed on understanding and communicating to model users and other stakeholders the merits and limitations of a model under different conditions and the sensitivities of model output to changes in inputs.

Footnotes

  • 16. The design objective(s) represent model performance target metrics such as measures for robustness, stability, and accuracy for accounting provisions or pricing models, discriminatory power for rating systems, and may represent a certain degree of predetermined conservatism for capital adequacy measures.

Principle 3.2 The use of data

  1. a) The model development process should demonstrate that the data used to develop the model are suitable for the intended use; are consistent with the chosen theory and methodology; and are representative of the underlying portfolios, products, assets, or customer base the model is intended to be used for.
  2. b) The model development process should ensure there is no inappropriate bias in the data used to develop the model, and that usage of the data is compliant with data privacy and other relevant data regulations.
  3. c) When the data used to develop the model are not representative of a firm’s underlying portfolios, products, assets, or customer base that the model is intended to be used for, the potential impact should be assessed, and the potential limitation should be taken into account in the model’s tier classification to reflect the higher model uncertainty. Model users and model owners should be made aware of any model limitations.
  4. d) Any adjustments made to the data used to develop the model or use of proxies to compensate for the lack of representative data should be clearly documented and subject to validation. The assumptions made, factors used to adjust the data, and rationale for the adjustment should be independently validated, monitored, reported, analysed, recorded in the model inventory, and documented as part of the model development process.
  5. e) Interconnected data sources and the use of alternative and unstructured data[17] should be identified and recorded in the model inventory, and the complexity introduced by interconnected data and the increased uncertainty of alternative and unstructured data should be reflected in the model's tier classification to ensure the appropriate level of rigour and scrutiny is applied in the independent validation activities of the model.

Footnotes

  • 17. Data, usually unstructured and non-financial data, not typically used in financial modelling such as social media feeds. The data are unstructured in the sense that they do not have a defined data model or pre-existing structure.

Principle 3.3 Model development testing

  1. a) Model development testing should demonstrate that a model works as intended. It should include clear criteria (tests) as a basis to measure a model’s quality (performance in development stage) and to select between candidate models. Model developers should provide the key outline of the monitoring pack (set of tests or criteria) that will be used to monitor a model’s ongoing performance during use. 
  2. b) Model development testing should assess model performance against the model’s design objective(s) using a range of performance tests.
    1. (i) From a backward-looking perspective, performance tests should be conducted using actual observations across a variety of economic and market conditions that are relevant for the model’s intended use.
    2. (ii) From a forward-looking perspective, performance tests should be conducted using plausible scenarios that assess the extent to which the model can take into consideration changes in economic and market conditions, as well as changes to the portfolio, products, assets, or customer base, without model performance deteriorating below acceptable levels. This should include sensitivity analysis[18] to determine the model operating boundaries under which model performance is expected to be acceptable.
    3. (iii) Where practicable, performance tests should also include comparisons of the model output with the output of available challenger models, which are alternative implementations of the same theory, or implementations of alternative theories and assumptions. The extent to which comparisons against challenger models or other benchmarks have been conducted should be considered in the model’s tier classification to reflect the higher model uncertainty.
  3. c) Model development testing should also be conducted for material model changes, including material changes over a period of time in dynamic models (ie models able to adapt, recalibrate, or otherwise change autonomously in response to new inputs[19]), and should include a comparison of the model output prior to the change and the corresponding output following the change to actual observations and outcomes (ie parallel outcomes analysis).
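
The forward-looking sensitivity analysis in b(ii) can be illustrated as evaluating model output over a grid of values for one input while holding the others fixed. The loss-rate function below is an invented stand-in model, not a real methodology:

```python
def sensitivity_profile(model, base_inputs, param, grid):
    """Evaluate the model over a range of values for one input, others held fixed."""
    return [(v, model({**base_inputs, param: v})) for v in grid]

# Invented stand-in: a loss rate rising with unemployment above a 4% base.
def toy_loss_model(x):
    return 0.02 + 0.004 * max(x["unemployment"] - 4.0, 0.0)

# Stressing unemployment well beyond the development sample probes the
# operating boundaries within which performance is expected to be acceptable.
profile = sensitivity_profile(toy_loss_model, {"unemployment": 4.0},
                              "unemployment", [4.0, 6.0, 8.0, 12.0])
```

Where the profile shows output changing abruptly or implausibly beyond some input range, that range is a candidate operating boundary to record in the inventory.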

Footnotes

  • 18. Evaluating a model's output over a range of input values or parameters.
  • 19. Including where parameters and/or hyperparameters are automatically recalculated over time.

Principle 3.4 Model adjustments and expert judgement

  1. a) Firms should be able to demonstrate that risks relating to model limitations and model uncertainties[20] are adequately understood, monitored, and managed, including through the use of expert judgement. 
  2. b) The model development process should consider the need to use expert judgement to make model adjustments that modify any part of a model (input, assumptions, methodology, or output) to address model limitations.
  3. c) Where the model development process identifies a need for such model adjustments, the adjustments should be adequately justified and clearly recorded in the model inventory. The model inventory should record the decisions taken relating to the reasons for model adjustments, and how the adjustments should be calculated over time. The implementation of those decisions should be appropriately documented and subject to proper governance, including ongoing independent validation.
  4. d) Where such adjustments are made either to a feeder model whose output is the input of another model or to a sub-model of a system of models, the impact of those adjustments on related models should also be assessed and the relevant model owners and users should be made aware of the potential impact of those adjustments. Where the adjustment is material, it should be the subject of the independent validation process of both models.
  5. e) Where firms use conservatism to address model uncertainties, it should be intuitive from a business and economic perspective, adequately justified and supported by appropriate documentation, and should be consistent with the model’s design objectives.
  6. f) Model owners or developers should be able to demonstrate a clear link between model limitations and the reasons for model adjustments, and be responsible for developing and implementing clear remediation plans to address the model limitations.
  7. g) Firms should have a process to consider whether the materiality of model adjustments or a trend of use of recurring model adjustments for the same model limitations are indicative of flawed model design or misspecification in the model construct, and consider the need for remedial actions to the extent of model recalibration or redevelopment.

Footnotes

  • 20. Model uncertainty should be understood as the inherent uncertainty in the parameter estimates and results of statistical models, including the uncertainty in the results due to model choices or model misuse.

Principle 3.5 Model development documentation

  1. a) Firms should have comprehensive, and up-to-date documentation on the design, theory, and logic underlying the development of their models. Model development documentation should be sufficiently detailed so that an independent third party with the relevant expertise would be able to understand how the model operates, to identify the key model assumptions and limitations, and to replicate any parameter estimation and model results. Firms should ensure the level of detail in the documentation of third-party vendor models is sufficient to validate the firm’s use of the model.
  2. b) The model documentation should include:
    1. (i) the use of data: a description of the data sources, any data proxies, and the results of data quality, accuracy, and relevance tests;
    2. (ii) the choice of methodology: the modelling techniques adopted and assumptions or approximations made, details of the processing components that implement the theory, mathematical specification, numerical and statistical techniques;
    3. (iii) performance testing: details of the tests or criteria that will be used to monitor the model’s ongoing performance during use, and the rationale for the choice of tests or criteria selected; and
    4. (iv) model limitations and use of expert judgement: the nature and extent of model limitations, justification for using any model adjustments to address model limitations, and how those adjustments should be calculated over time.

Principle 3.6 Supporting systems

  1. a) Models should be implemented in information systems or environments that have been thoroughly tested for the intended model purposes, and/or in the systems for which the models have been validated and approved. The systems should be subject to rigorous quality control and change control processes. The findings of any system and/or implementation tests should be documented.
  2. b) Firms should periodically reassess the suitability of the systems for the model purposes and take appropriate remedial action as needed to ensure suitability.

Principle 4 – Independent model validation

Firms should have a validation process that provides ongoing, independent, and effective challenge to model development and use. The individual/body within a firm responsible for the approval of a model should ensure that validation recommendations for remediation or redevelopment are actioned so that models are suitable for their intended purpose.

Principle 4.1 The independent validation function

  1. a) Firms should have a validation function to provide an objective, unbiased, and critical opinion on: the suitability and soundness of models for their intended use; the design and integration of the system supporting the model; the accuracy, relevance and completeness of the development data; and the output and reports used to inform decisions.
  2. b) The validation function should be responsible for (i) the independent review and (ii) the periodic revalidation of models, and should provide its recommendations on model approvals to the appropriate model approval authority.
  3. c) While model owners are responsible for model performance, model users, owners, and validators share the responsibility for (i) ongoing model performance monitoring and (ii) process verification.
  4. d) The validation function should operate independently from the model development process and from model owners. Firms that have approval to use internal models for the purposes of calculating their capital requirements are expected to demonstrate independence through separate reporting lines for model validators and model developers and owners, applicable across the MRM framework.
  5. e) The validation function should have sufficient organisational standing to provide effective challenge and have appropriate access to the board and/or board committees to escalate concerns around model usage and MRM in a prompt and timely manner.

Principle 4.2 Independent review

  1. a) All models should be subject to an independent review to ensure that models are suitable for their intended use. The independent review should:
    1. (i) cover all model components, including model inputs, calculations, and reporting outputs;
    2. (ii) assess the conceptual soundness of the underlying theory of the model, and the suitability of the model for its intended use;
    3. (iii) critically analyse the quality and extent of model development evidence, including the relevance and completeness of the data used to develop the model with respect to the underlying portfolios, products, assets, or customer base the model is intended to be used for;
    4. (iv) evaluate qualitative information and judgements used in model development, and ensure those judgements have been made in an appropriate and systematic manner and are supported by appropriate documentation; and
    5. (v) conduct additional testing and analysis as necessary to enable potential model limitations to be identified and addressed on a timely basis, and review the developmental evidence of the sensitivity analysis conducted by model developers to confirm the impact on model outputs of the key assumptions made in the development process and of the choice of variables made during the model selection stage.
  2. b) The nature and extent of independent review should be determined by the model tier and, where the validation concerns a model change, should be commensurate with the materiality of the change.

Principle 4.3 Process verification

  1. a) Firms should conduct appropriate verification of model processes and systems implementation to confirm that all model components are operating effectively and are implemented as intended. This should include verification that:
    1. (i) model inputs – internal or external data used as model inputs are representative of the underlying portfolios, products, assets or customer base the model is intended to be used for, and compliant with internal data quality control and reliability standards;
    2. (ii) calculations – systems implementation (code), integration (processing), and user developed applications are accurate, controlled and auditable; and
    3. (iii) reporting outputs – reports derived from model outputs are accurate, complete, informative, and appropriately designed for their intended use.
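The three verification strands above (inputs, calculations, reporting outputs) lend themselves to automated checks. The following is a minimal illustrative sketch of input verification only; the field names, record structure, and range thresholds are assumptions for illustration, not anything prescribed by these principles:

```python
from dataclasses import dataclass

@dataclass
class InputCheckResult:
    field: str
    passed: bool
    detail: str

def verify_model_inputs(records, required_fields, valid_ranges):
    """Check model input records for completeness and plausible values.

    `required_fields` lists fields every record must contain;
    `valid_ranges` maps numeric fields to (min, max) bounds taken from
    the firm's own data quality standards (illustrative here).
    """
    results = []
    # Completeness: count records missing each required field.
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) is None)
        results.append(InputCheckResult(field, missing == 0,
                                        f"{missing} record(s) missing"))
    # Range plausibility: count numeric values outside agreed bounds.
    for field, (lo, hi) in valid_ranges.items():
        out = sum(1 for r in records
                  if isinstance(r.get(field), (int, float))
                  and not lo <= r[field] <= hi)
        results.append(InputCheckResult(field, out == 0,
                                        f"{out} record(s) out of range"))
    return results
```

Checks of this kind would sit alongside, not replace, the qualitative judgement that inputs remain representative of the underlying portfolio.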

Principle 4.4 Model performance monitoring

  1. a) Model performance monitoring should be performed to assess model performance against thresholds of acceptable performance, based on the model testing criteria used during the model development stage.
  2. b) Firms should conduct ongoing model performance monitoring to:
    1. (i) ensure that parameter estimates and model constructs are appropriate and valid;
    2. (ii) ensure that assumptions are applicable for the model’s intended use;
    3. (iii) assess whether changes in products, exposures, activities, clients, or market conditions should be addressed through model adjustments, recalibration, or redevelopment, or by the model being replaced; and 
    4. (iv) assess whether the model has been used beyond the intended use and whether this use has delivered acceptable results.
  3. c) A range of tests should form part of model monitoring, including those determined by model developers:
    1. (i) benchmarking – comparing model estimates with comparable but alternative estimates;
    2. (ii) sensitivity testing – reaffirming the robustness and stability of the model;
    3. (iii) analysis of overrides – evaluating and analysing the performance of model adjustments made; and
    4. (iv) parallel outcomes analysis – assessing whether new data should be included in model calibration.
  4. d) Model monitoring should be conducted on an ongoing basis with a frequency determined by the model tier.
  5. e) Firms should produce timely and accurate model performance monitoring reports that should be independently reviewed, and the results incorporated into the procedures for measuring model performance (as per Principle 2.3(c)(vi)).
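Threshold-based monitoring as described in (a) can be sketched as a simple comparison of each metric against amber/red trigger levels. The metric names (a population stability index, a discriminatory-power drop) and the convention that larger values mean worse performance are illustrative assumptions, not thresholds set by these principles:

```python
def assess_performance(metric_values, thresholds):
    """Compare monitoring metrics against amber/red thresholds.

    `thresholds` maps each metric name to an (amber, red) pair, with
    larger metric values indicating worse performance (illustrative
    convention). Returns metric -> "green" / "amber" / "red".
    """
    status = {}
    for metric, value in metric_values.items():
        amber, red = thresholds[metric]
        if value >= red:
            status[metric] = "red"    # eg escalate; consider recalibration
        elif value >= amber:
            status[metric] = "amber"  # eg heightened monitoring
        else:
            status[metric] = "green"
    return status
```

The amber band gives early warning before an exception is formally triggered, supporting the escalation expectations under Principle 5.3.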

Principle 4.5 Periodic revalidation

  1. a) Firms should undertake regular independent revalidation of models (typically less detailed than the validation applied during initial model development) to determine whether the model has operated as intended, and whether the previous validation findings remain valid, should be updated, or whether validation should be repeated or augmented with additional analysis.
  2. b) The periodic revalidation should be carried out with a frequency that is consistent with the model tier.

Principle 5 – Model risk mitigants

Firms should have established policies and procedures for the use of model risk mitigants when models are under-performing, and should have procedures for the independent review of post-model adjustments.

Principle 5.1 Process for applying post-model adjustments[21]

  1. a) Firms should have a consistent firm-wide process for applying post-model adjustments (PMAs) to address model limitations where risks and uncertainties are not adequately reflected in models or addressed as part of the model development process. The process should be documented in firms’ policies and procedures, and should include a governance and control framework for reviewing and supporting the use of PMAs, the implementation of decisions relating to how PMAs should be calculated, their completeness, and when PMAs should be reduced or removed.
  2. b) The processes for applying PMAs may vary across model types but the intended outcomes of each process should be similar for all model types and focused on ensuring that there is a clear rationale for the use of PMAs to compensate for model limitations, and that the approach for applying PMAs is suitable for their intended use.
  3. c) PMAs for material models or portfolios should be documented, supported by senior management, and approved by the appropriate level of authority (eg senior management, risk committee, audit committee).
  4. d) PMAs should be applied in a systematic and transparent manner. The impact of applying PMAs should be made clear when model results are reported for use in decision making with model results being presented with and without PMAs.
  5. e) All PMAs should be subject to an independent review with intensity commensurate to the materiality of the PMAs. As a minimum, the scope of review should include:
    1. (i) an assessment of the continued relevance of PMAs to the underlying portfolio;
    2. (ii) qualitative reasoning[22] – to ensure the underlying assumptions are relevant, to assess the soundness of the underlying reasoning, and to ensure both are logically and conceptually sound from a business perspective;
    3. (iii) inputs – to ensure the integrity of data used to calculate the PMA, and to ensure that the data used is representative of the underlying portfolio;
    4. (iv) outputs – to ensure the outputs are plausible; and
    5. (v) root cause analysis – to ensure a clear understanding of the underlying model limitations, and whether they are due to significant model deficiencies that require remediation.
  6. f) PMAs should be supported by appropriate documentation. As a minimum, documentation should include:
    1. (i) a clear justification for applying PMAs;
    2. (ii) the criteria to determine how PMAs should be calculated and how to determine when PMAs should be reduced or removed;
    3. (iii) triggers that activate validation and remediation where PMAs remain in use for a prolonged period.
  7. g) Firms should have a process to consider whether the materiality of PMAs, or a trend of recurring PMAs for the same model limitations, is indicative of flawed model design or misspecification in the model construct, and should consider the need for remedial action, up to and including model recalibration or redevelopment, to remediate the underlying model limitations and reduce reliance on PMAs.
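The transparency expectation in (d), that results be presented with and without PMAs, can be made concrete in reporting code. The structure below is an illustrative sketch; the labels and the additive treatment of overlays are assumptions (some PMA types are multiplicative or applied to inputs rather than outputs):

```python
def report_with_pmas(pre_pma_result, pmas):
    """Apply post-model adjustments (PMAs) transparently.

    `pmas` is a list of (label, amount) additive overlays. The report
    carries the model result both before and after PMAs, plus the
    itemised adjustments, so decision makers can see their impact.
    """
    total_pma = sum(amount for _, amount in pmas)
    return {
        "pre_pma_result": pre_pma_result,
        "pma_items": dict(pmas),
        "total_pma": total_pma,
        "post_pma_result": pre_pma_result + total_pma,
    }
```

Keeping the pre-PMA figure and the itemised overlays in the same record also supports the trend analysis in (g), since recurring overlays for the same limitation become visible over successive reports.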

Footnotes

  • 21. Post-model adjustments (PMAs) will refer to all model overlays, management overlays, model overrides, or any other adjustments made to model output where risks and uncertainties are not adequately reflected in existing models.
  • 22. Expert judgement makes use of more qualitative and expert reasoning to arrive at an estimate, owing to the lack of empirical evidence to use as a basis for a quantitative calculation to produce the estimate.

Principle 5.2 Restrictions on model use

  1. a) Firms should consider placing restrictions on model use when significant model deficiencies and/or errors are identified during the validation process, or if model performance tests show a significant breach has or is likely to occur, including:
    1. (i) permitting the use of the model only under strict controls or mitigants; and
    2. (ii) placing limits on the model’s use, including prohibiting its use for specific purposes.
  2. b) The process of managing significant model deficiencies or inadequate model performance should be adequately documented and reported to key stakeholders (model owners, users, validation staff, and senior management), including recording the nature of the issues and tracking the status of remediation in the model inventory.

Principle 5.3 Exceptions and escalations

  1. a) For material models, firms should formulate the exceptions[23] they would allow for model use and model performance, and should implement formally approved policies and procedures setting out how these exceptions are escalated and managed.
    1. (i) Exceptions for model use should be temporary, should be subject to post-model adjustments (PMAs), and should be reported to and supported by stakeholders and senior management.
    2. (ii) For model performance exceptions, firms should have clear guidelines for determining a maximum tolerance on such exceptions (deviation from expectation); once defined triggers and thresholds are breached, the exceptions should be subject to appropriate risk controls (eg the use of alternative models, heightened review and challenge, more frequent monitoring, and post-model adjustments) and mitigants (eg recalibration or redevelopment of the existing methodology).
  2. b) Firms should have escalation processes in place so that the key stakeholders (model owners, users, validation staff, and senior management) are promptly made aware of a model exception.
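The tolerance-and-trigger mechanism in (a)(ii) can be sketched as a count of breaches against a firm-defined trigger. All names and thresholds here are illustrative assumptions; these principles do not prescribe any particular tolerance:

```python
def check_exception_escalation(deviations, max_tolerance, trigger_count):
    """Flag escalation when deviations from expectation breach the
    maximum tolerance `trigger_count` times or more over the
    monitoring window (illustrative rule)."""
    breaches = sum(1 for d in deviations if abs(d) > max_tolerance)
    return {"breaches": breaches, "escalate": breaches >= trigger_count}
```

When `escalate` is set, the result would feed the prompt notification of key stakeholders required in (b) and be recorded against the model in the inventory.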

Footnotes

  • 23. Exceptions are defined here as: using a model that has not been approved for use by the appropriate oversight entity, or has not been validated for use; using a model outside its intended purpose; continuing to use a model that persistently breaches performance metrics; or continuing to use a model where back-testing suggests its results are inconsistent with actual outcomes.