Select Page

Artificial Intelligence and the Financial Industry

Overview

Topics to consider in your policies regarding the use of Generative Artificial Intelligence (AI):

  1. Legal considerations—The legal landscape with AI is moving fast and there are very real questions regarding the legality of generative AI usage, such as whether large language models captured appropriate consents to web scrape as part of their training data. This has led to lawsuits. Additionally, AI-produced content may adversely affect your ability to copyright, to patent, or to enforce trade secret protections or it may include material that violates the intellectual property rights of others. Prior to deploying generative AI, a bank should consider the above and other possible legal risks.
  2. Compliance considerations— Regulators expect banks to conduct a robust risk management analysis and to implement appropriate internal controls to mitigate the resulting risks of generative AI. The risk management includes a thorough review of the use of models, both those developed in-house and ones sourced from third-party vendors. Examiners scrutinize policies and procedures and look at the data to ensure banks are complying with consumer protection and anti-discrimination laws when they utilize AI technology. Many of the risks resulting from generative AI are similar to prior iterations of the technology, but it has become more difficult to flag, analyze, and account for those risks due to its dynamic and exponential nature.

Importantly, the regulatory guidance at this stage is technology neutral, making it clear that existing requirements such as Fair Lending and complaints resolution apply whether or not AI is part of the process. Recent examples of regulatory guidance are the Joint Statement from the Consumer Financial Protection Bureau, Justice Department, Equal Employment Opportunity Commission, and Federal Trade Commission, the Advisory from the CFPB on chatbots in consumer finance, and the interagency guidance on Third Party Risk Management.

  1. Privacy considerations—the training data going into an AI model, and the outputs of an AI model, are closely tied to privacy considerations. These will have to be reviewed by second-line functions to ensure they comport with privacy policies, data governance policies and procedures, confidentiality requirements, and information security programs.
  2. Reliability and bias—AI may rely on data that is skewed or incorrect with validation and data tracing being difficult. This may lead to false conclusions, subtle errors, or prejudicial results. Banks have long been examined by regulators for model risk management and should build upon this solid foundation for generative AI applications.
  3. Scoping Issues—The AI may reach out beyond the intended remit and may take actions not explicitly authorized. Consequences may include the commission of a cybercrime (e.g., unauthorized access), execution of functions/processes which modify data (i.e., not just collect data), and learn from/train on unreliable data (e.g., test data, raw data) as if it were production data. Further, generative AI is known to experience “hallucinations” or to yield inaccurate results—any use cases must be explicitly designated with appropriate controls put in place.

Any of the above may occur without adequate traceability or visibility making detection and correction difficult or infeasible.

  1. Best practices—Due to the evolving nature of the technology and the unprecedented scrutiny by policymakers, it will be especially important to document decisions and have strong governance structures in place to show that the above factors (as well as others that are pertinent) were weighed and controlled for. This is why many banks are issuing policies and procedures, conducting employee training, and offering customer education campaigns regarding generative AI offerings, especially unlicensed versions. When engaging with a third party, banks must be aware that they cannot outsource compliance risk and that they remain responsible for the activities of their vendors. For this reason, it is vital to have contractual provisions in place to require the vendor to act a certain way and to provide notice to the bank if appropriate. The existing enterprise privacy program and data breach response framework may be of use in creating and maturing the AI workstreams.
  2. Safety and Security—Generative AI models follow instructions or “prompts” from the user and follow the data they ingest or train on as standard operating functions, which give them their human-like ability to predict and produce deliverables. This inherent operational functionality makes them vulnerable to malicious manipulation through prompt injection, data poisoning, and malware insertion even in the presence of preset guardrails. To mitigate these risks, financial institutions need to include Safety and Security procedures that create an enduring vigilance. Be prudent in the adoption of generative AI, carefully monitor the submission of business and customer data, as well as unusual AI-enabled behaviors to strengthen cybersecurity and social engineering protections.

***

X9 is developing guidance for AI use within the financial services industry.

Request to Participate Form AI Study Group (#31)