
Fed considers regulating AI used by financial institutions

Fed wants to ensure that biases are not being embedded into data sources

The Federal Reserve is examining the rise of artificial intelligence and machine learning, and is considering stepping up its oversight of how financial institutions use these technologies.

Federal Reserve Governor Lael Brainard spoke Tuesday at the AI Academic Symposium, a virtual event hosted by the Board of Governors of the Federal Reserve, saying it is important that the financial community's use of AI leads to equitable outcomes for consumers.

In prepared remarks for the event, Brainard said:

“Recognizing that AI presents promise and pitfalls, as a banking regulator, the Federal Reserve is committed to supporting banks’ efforts to develop and use AI responsibly to promote a safe, fair, and transparent financial services marketplace. As regulators, we are also exploring and understanding the use of AI and machine learning for supervisory purposes, and therefore, we too need to understand the different forms of explainability tools that are available and their implications.

“To ensure that society benefits from the application of AI to financial services, we must understand the potential benefits and risks, and make clear our expectations for how the risks can be managed effectively by banks. Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve,” Brainard said.



The Fed is currently exploring whether it needs to increase its supervisory presence to ensure the responsible adoption of AI, Brainard said. For now, it will solicit information from a wide range of stakeholders, including financial services firms, technology companies, consumer advocates, civil rights groups, merchants, other businesses and the public.

“The Federal Reserve has been working with the other banking agencies on a possible interagency request for information on the risk management of AI applications in financial services,” Brainard said. “Today’s symposium serves to introduce a period of seeking input and hearing feedback from a range of external stakeholders on this topic.

“It is appropriate to be starting with the academic community that has played a central role in developing and scrutinizing AI technologies. I look forward to hearing our distinguished speakers’ insights on how banks and regulators should think about the opportunities and challenges posed by AI.”

Brainard mentioned several concerns posed by the use of AI, including unintentional built-in bias. In one example she gave, drawn from a healthcare study, a machine learning model disproportionately disadvantaged Black patients.

The model, designed to identify patients who would likely need high-risk care management in the future, used patients' historical medical spending to predict future levels of medical need. However, historical spending did not serve as a fair proxy, because "less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick white patients."

Brainard said it is important to ensure biases are not being embedded into data sources.
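
To make the proxy problem concrete, here is a minimal simulated sketch (not part of Brainard's remarks or the study she cited; the data, group labels and variable names are all hypothetical): a model trained to predict historical spending inherits the spending gap, so equally sick patients in the under-served group receive lower risk scores.

# Hypothetical simulation of proxy-label bias: the model is trained to
# predict historical spending, but spending understates need for group B,
# so equally sick group B patients get lower risk scores.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                        # 1 = group B (under-served)
need = rng.normal(50, 10, n)                         # true clinical need, same in both groups
spending = need - 10 * group + rng.normal(0, 2, n)   # proxy target: lower for group B

# Observable features: a noisy health indicator plus the group attribute
# (or anything correlated with it, such as zip code).
X = np.column_stack([need + rng.normal(0, 3, n), group])

risk_model = LinearRegression().fit(X, spending)     # trained on the proxy, not on need
score = risk_model.predict(X)

# Flag the top 10% of scores for high-risk care management.
flagged = score >= np.quantile(score, 0.90)

for g in (0, 1):
    mask = group == g
    print(f"group {'B' if g else 'A'}: "
          f"mean need among flagged = {need[mask & flagged].mean():.1f}, "
          f"share flagged = {flagged[mask].mean():.1%}")

Running the sketch shows group B flagged far less often even though true need is identically distributed in both groups, and flagged group B patients must be sicker to clear the threshold, which is the pattern the healthcare study documented.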

She also cautioned against the "black box" that can result from using machine learning in decision-making processes.

“Some of the more complex machine learning models, such as certain neural networks, operate at a level of complexity that offers limited or no insight into how the model works,” Brainard said. “This is often referred to as the ‘black box problem,’ because we can observe the inputs the models take in, and examine the predictions or classifications the model makes based on those inputs, but the process for getting from inputs to outputs is obscured from view or very hard to understand.”
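
As an illustration of that opacity (a hypothetical sketch, not drawn from the speech), compare a logistic regression, whose coefficients state directly how each input moves the prediction, with a small neural network trained on the same data:

# Hypothetical contrast between a transparent model and a "black box".
# Both take the same inputs and emit predictions; only the first exposes
# a mapping a human can read directly.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2_000, n_features=8, random_state=0)

transparent = LogisticRegression(max_iter=1_000).fit(X, y)
print("logistic coefficients (one weight per input):", transparent.coef_.round(2))

black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)
print("MLP prediction for first row:", black_box.predict(X[:1]))
# The MLP's thousands of learned weights are visible (black_box.coefs_), but
# they do not translate into a simple statement of how inputs drive outputs;
# post-hoc explainability tools are needed to approximate that relationship.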

Not all of her remarks on AI were negative, however. Brainard also pointed out some of the benefits AI and machine learning have brought to mortgage finance specifically, including preventing fraud, which is rising as more people work from home, and analyzing alternative data to make credit decisions.

“Machine learning models are being used to analyze traditional and alternative data in the areas of credit decision making and credit risk analysis, in order to gain insights that may not be available from traditional credit assessment methods and to evaluate the creditworthiness of consumers who may lack traditional credit histories,” Brainard said.

“The Consumer Financial Protection Bureau has found that approximately 26 million Americans are credit invisible, which means that they do not have a credit record, and another 19.4 million do not have sufficient recent credit data to generate a credit score. Black and Hispanic consumers are notably more likely to be credit invisible or to have an unscored record than white consumers.”
