Robots Are Coming: Artificial Intelligence in Financial Services

The Bank of England, in conjunction with the FCA and the PRA (the “Regulators”), has published a discussion paper on the use of artificial intelligence (AI) and machine learning (ML) in financial services.

The discussion paper examines the potential benefits and risks associated with AI in the industry, as well as the Regulators’ views on the current regulatory framework governing the use of AI by financial services firms. The paper aims to deepen the Regulators’ understanding of, and broaden the dialogue on, how AI can affect their regulatory objectives, ultimately supporting the safe adoption of AI. It forms part of a broader programme of AI-related work, including the Public-Private Forum on AI and the UK Government’s policy paper “Establishing a pro-innovation approach to regulating AI”.

What is AI?

The paper does not provide a definition of AI, but instead discusses the benefits and challenges of different approaches to defining it. It recognises that “algorithm” and “model” already have specific meanings in financial services regulation and that, while both can be components of an AI system, they are not necessarily AI in themselves.

The two general approaches highlighted are:

  • provide a more precise legal definition of what AI is; or
  • view AI as part of a broader spectrum of analytical techniques with a range of elements and characteristics. This approach could include a classification scheme encompassing different methodologies or a mapping of AI characteristics.

Each approach seeks to clarify what constitutes AI in the context of a specific regulatory regime and, therefore, what restrictions and requirements may apply to the use of the technology.

The benefits of regulators adopting a more precise definition of AI include creating a common language for participants, harmonising regulatory responses to AI, and clarifying which use cases fall within scope (i.e. giving certainty to the regulatory perimeter). However, challenges are also recognised, including the robustness of a definition in the face of rapid technological development, the risk of a definition being too broad or missing relevant use cases, and potential misclassification by firms seeking to reduce regulatory oversight.

Given these risks, the Regulators consider that an alternative approach might be more appropriate for regulating AI in UK financial services.

Where is AI being used in financial services?

Alongside the paper, the Regulators published a joint report on Machine Learning in UK Financial Services, which examines firms’ ML implementation strategies and found that the number of ML applications used in UK financial services continues to grow.

Based on surveys of financial services firms, the report found that respondents in the banking and insurance sectors had the highest number of ML applications. Other types of firms surveyed with ML applications included MFIs and payment firms, non-bank lenders, and investment and capital markets firms.

In terms of the range of ML use cases, the report found that firms are developing or using ML across most business areas:

  • “Customer engagement” and “risk management” continue to be the areas with the most applications;
  • the “miscellaneous” category, which includes business areas such as human resources and legal services, had the third highest proportion of ML applications; and
  • the business areas with the fewest ML applications are “investment banking” and “treasury”.

While this reflects the current state of affairs, as the technology and its enterprise adoption expand, we are likely to see more use cases and implementations across the industry.

Benefits and risks

AI can bring significant benefits to consumers, financial services firms, financial markets and the broader economy, making financial services and markets more cost-effective, efficient, accessible and responsive to consumer needs.

However, AI can pose new challenges, create new risks, or amplify existing ones. To support the safe and responsible adoption of AI technologies in UK financial services, the Regulators suggest they may need to intervene to mitigate the risks and potential harms associated with AI applications, while recognising the importance of a proportionate approach.

Regarding risks, the paper notes that the main risk factors related to AI in financial services relate to three key stages of the AI life cycle:

  • Data – the input. Since AI relies significantly on large volumes of data in its development (training and testing) and deployment, data risks can be magnified and have significant implications for firms’ AI systems;
  • Models – the processing. Risks here could include inappropriate model choices, errors in model design or construction, lack of explainability, unexpected behaviour, unintended consequences, degradation of model performance, and model or concept drift; and
  • Governance – the oversight. Risk factors here include a lack of clearly defined roles and responsibilities for AI, insufficient skills, governance functions that do not include relevant business areas or consider relevant risks (such as ethical risks), a lack of challenge at the board and executive level, and a general lack of accountability.

Depending on how AI is used in financial services, issues at each of the three stages can lead to a range of outcomes and risks relevant to financial services regulation.

In terms of where these risks may materialise, the Bank of England has identified the following areas as particularly relevant:

  • Consumer protection – in particular the risk of biased and discriminatory (unlawful) decisions;
  • Competition – including barriers to market entry arising from entry costs;
  • Safety and soundness – amplification of prudential risks (including credit, liquidity, market, operational and reputational risks);
  • Protection of policyholders – inappropriate pricing and marketing, concept drift and lack of explainability, inaccurate forecasts and reserve levels; and
  • Financial stability and market integrity – including concerns about uniformity across models, herding behaviour, and flash crashes.

Despite the risks, AI can bring a number of benefits in each of the areas identified above, and firms should consider ways to mitigate the identified risks when developing and deploying their AI systems.

Existing regulatory framework

One of the challenges to the adoption of AI in UK financial services is the lack of clarity surrounding current rules, regulations and principles, in particular how these apply to AI and what this means for firms at a practical level. The Bank of England has sought to address this challenge in its paper by discussing the parts of the current regulatory framework that it considers most relevant to the regulation of AI.

Given the technology-neutral nature of UK financial services regulation, there is no single source of AI regulation. Instead, the paper recognises a wide range of legislative and regulatory frameworks that could apply in the context of AI, including (but not limited to):

  • FCA’s new Consumer Duty;
  • FCA guidance on vulnerable customers;
  • FCA PROD Sourcebook Rules;
  • The Equality Act 2010;
  • UK Data Protection Law – including the Data Protection and Digital Information Bill;
  • Competition law; and
  • Anti-money laundering legislation.

The paper also highlights a number of more specific sets of rules and guidance that may apply depending on the business in question. Although there are no big surprises in the list of potentially applicable regulations, one of the main takeaways is the breadth of distinct regulatory frameworks that may apply. Firms will therefore want to ensure that they have undertaken a full analysis of their regulatory obligations when they intend to develop and use AI as part of their business models, so that they do not fall foul of those obligations.

In addition to the discussion points above, the paper raises a number of questions on which it seeks input from a wide range of market participants and stakeholders. Comments on the discussion paper can be submitted until February 10, 2023.
