As those in the financial services industry have likely noticed, the use of artificial intelligence (AI) in the financial marketplace has become a topic of increased discussion and scrutiny. While the issue may have been running quietly in the background, it has definitely caught the attention of our regulators, and it is worthy of our increased focus as well.
So how did we get to where we are today?
Looking back over the last couple of years at issuances related to AI, one important item stands out. In March 2021, the FRB, CFPB, FDIC, NCUA, and OCC published a "Request for Information" in the Federal Register, focused on collecting information and comments on AI, as well as machine learning (ML).
As the regulators observed, the use of AI offers various potential benefits, such as improved efficiency, enhanced performance, greater accuracy, lower cost, and faster underwriting. It may also enhance an institution's overall ability to provide products and services.
However, they stressed that institutions must be able to identify and manage the potential risks associated with AI, which could include operational vulnerabilities, internal process or control breakdowns, cyber threats, and IT lapses. As they noted, “The use of AI can also create or heighten consumer protection risks, such as risks of unlawful discrimination, unfair, deceptive, or abusive acts or practices (UDAAP) under the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), unfair or deceptive acts or practices (UDAP) under the Federal Trade Commission Act (FTC Act), or privacy concerns.”
Moving forward, in May 2022, industry stakeholders might recall that the CFPB issued its Fair Lending Report. Within it, the Bureau announced that its fair lending supervision would focus on many aspects of the market, including artificial intelligence and machine learning. Of note, the Bureau also said it "will be sharpening its focus on digital redlining and algorithmic bias.” As new technology platforms increasingly impact the marketplace, the Bureau remains committed to identifying emerging risks, to protecting individuals and small businesses from discrimination, and to holding institutional bad actors accountable.
Most recently, in October 2022, the White House issued a “Blueprint for an AI Bill of Rights.” This proposed framework contains five principles guiding the design, use, and deployment of automated systems to protect the public. We encourage our clients to review the Blueprint, paying particular attention to the chapter entitled “From Principles to Practice.” It provides valuable information on the expectations for automated systems and a rundown of key factors, addressing consultation, testing, risk identification, monitoring, and oversight, among other things.
So what's a financial institution to do?
An important first step is to consider the ways in which you currently use AI. While AI can be used in many capacities, you might consider whether you use it for:
- Identifying unusual transactions
- Personalizing customer service, including marketing
- Making credit decisions
- Augmenting risk management and control practices
- Cybersecurity
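As an illustration of the first item, a deliberately simplified sketch of transaction monitoring might flag any amount that deviates sharply from a customer's historical pattern. The function name, threshold, and sample amounts below are hypothetical; production systems rely on far more sophisticated, trained ML models, so this is only a conceptual example:

```python
import statistics

def flag_unusual(history, amount, threshold=3.0):
    """Flag a transaction whose amount lies more than `threshold`
    standard deviations from the customer's historical amounts.
    Illustrative only; not a production monitoring rule."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No historical variation: flag anything that differs at all.
        return amount != mean
    return abs(amount - mean) / stdev > threshold

# Hypothetical customer history of typical purchase amounts.
history = [52.00, 48.50, 55.00, 50.25, 47.75]
print(flag_unusual(history, 51.00))    # typical amount, not flagged
print(flag_unusual(history, 2500.00))  # large outlier, flagged
```

Even a toy rule like this raises the governance questions the regulators emphasize: who sets the threshold, how it is tested, and how flagged customers are treated.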
Next, consider whether your AI processes and overall AI “footprint” have been captured in your policies and procedures; if not, update them accordingly.
Another important component is to examine your processes for identifying potential AI risks and what is being done to manage those risks. If no such process has been formulated, now is the time to start. It is important that this task involve the participation of your internal stakeholders, such as management, IT, and the business lines.
Going forward, institutions should remain cognizant of ongoing developments in this area.