Fair Lending in the Era of Artificial Intelligence

Artificial Intelligence (AI) holds extraordinary potential to revolutionize credit allocation and risk assessment, paving the way for more equitable outcomes for society. Still, despite its capabilities, there is concern that AI might perpetuate existing biases in credit practices and further exacerbate discriminatory lending. This apprehension stems from the perception that AI lacks transparency in both its data and its decision-making, calling its transformative promise into question. 


Fair Lending

In an ideal world, credit allocation would depend solely on borrower risk, a principle known as "risk-based pricing." Lenders would assess a borrower’s genuine risk and price credit accordingly. Today, AI can integrate diverse data sources to help discern that genuine risk, thereby broadening fair access to credit. For instance, AI can uncover new connections between credit risk and tangible factors like rent and utility bill payments, or even shopping behaviors, insights that traditional lenders might overlook. In doing so, it aligns with our existing system's objectives: pricing financial services based on individual risk while avoiding harmful discrimination. 
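
As a concrete illustration, the sketch below blends traditional underwriting features with alternative payment-history signals in a simple scoring model. The dataset, column names, and target are hypothetical placeholders, not a description of any particular lender's system.

```python
# Minimal sketch: blending traditional and alternative features in a credit
# risk model. The file, column names, and target are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

applications = pd.read_csv("applications.csv")  # hypothetical dataset

traditional = ["income", "debt_to_income", "fico", "credit_history_months"]
alternative = ["rent_on_time_rate", "utility_on_time_rate"]

X = applications[traditional + alternative]
y = applications["defaulted"]  # 1 = defaulted, 0 = repaid

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```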

Discriminatory lending can have a disparate impact on different groups of borrowers. The Equal Credit Opportunity Act of 1974 (ECOA), designed to preserve equitable access to credit and guard against discrimination, lists a series of protected characteristics that cannot be considered in decisions about credit and interest rates. These include race, gender, national origin, and age, as well as other factors, such as whether an individual receives public assistance. As AI adoption increases, there are two major ways that discrimination can creep into lending systems that employ statistics or AI-based techniques for credit decision-making. 


Unintended – Data Bias

While financial institutions (FIs) have historically used criteria such as income, debt, FICO scores, and credit history in determining whether, and at what rate, to provide credit, these factors can correlate with protected classes like race, age, and gender. FIs may consider these factors within established guardrails because lenders need information about customers’ financial health, and determining who is likely to repay a loan is a legitimate business requirement. According to the Consumer Financial Protection Bureau (CFPB), disparate impact exists when “a creditor employs facially neutral policies or practices that have an adverse effect or impact on a member of a protected class unless it meets a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact.” 
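
One common first-pass screen for such effects is the "four-fifths rule" comparison of approval rates across groups. The minimal sketch below, with illustrative data and hypothetical column names, flags groups whose approval rate falls below 80 percent of the most-favored group's rate; it is a screening heuristic, not a legal determination of disparate impact.

```python
# Minimal disparate-impact screen using the "four-fifths rule": compare
# approval rates across groups defined by a protected characteristic.
# The DataFrame contents and column names are illustrative placeholders.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   1,   0,   1,   0,   0,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()
reference_rate = approval_rates.max()            # most-favored group
impact_ratios = approval_rates / reference_rate

# Ratios below 0.8 are a common (not legally definitive) red flag that
# warrants further statistical and business-need review.
print(impact_ratios)
print("Flagged groups:", list(impact_ratios[impact_ratios < 0.8].index))
```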

The emergence of AI introduces new complexities to this dynamic. AI not only draws from historically biased data like income and debt but also relies on new data types that can inadvertently serve as proxies for discrimination. For example, despite prohibitions against using gender in credit decisions, numerous proxies, from purchasing decisions like personal care products to Netflix preferences, can reflect gender-related patterns (a simple proxy check is sketched after the questions below). This raises three specific questions: 

  • Where should lines be drawn on data usage? 
  • Do these new relationships between lending decisions and choices such as personal care purchases reflect genuine causal links, or are they merely proxies for other correlated factors, and how does that distinction affect the legality and ethics of AI use? 
  • Once the data is processed, can the AI produce explainable outcomes to help build trust in algorithms? 
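
One way to probe the proxy question above is to test how well the candidate features predict the protected characteristic itself. The sketch below, with a hypothetical dataset and feature names, trains an auditing classifier for that purpose; strong predictive power suggests the features carry protected-class information even when that class is excluded from the lending model.

```python
# Minimal proxy check (illustrative): if candidate features predict a protected
# characteristic well, they may act as proxies for it even when that
# characteristic is excluded from underwriting. Data and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

applicants = pd.read_csv("applicants.csv")  # hypothetical audit dataset
candidate_features = ["personal_care_spend", "streaming_hours", "shopping_basket_score"]

X = applicants[candidate_features]
y = (applicants["gender"] == "female").astype(int)  # protected attribute, used only for auditing

proxy_auc = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5, scoring="roc_auc",
).mean()

# An AUC well above 0.5 means these features encode substantial information
# about the protected class and deserve extra scrutiny before use.
print(f"Proxy AUC for gender: {proxy_auc:.2f}")
```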

As AI algorithms reveal new discrimination risks, the fundamental concept of risk-based lending will face new scrutiny. Regulators and government bodies must quickly establish consistent rules and comprehensive frameworks for these new technologies. A broader dialogue addressing whether, and where, an acceptable threshold of bias can exist would also benefit all stakeholders. 


Intended – Redlining

Before computers and standardized underwriting, bank loans and other credit decisions were often based on personal relationships and frequently discriminated against racial and ethnic minorities. The practice of denying or limiting mortgage credit to residents of particular neighborhoods because of their race is called “redlining.” According to the Interagency Fair Lending Examination Procedures, redlining occurs when “[a]n institution provides unequal access to credit, or unequal terms of credit, because of the race, color, national origin, or other prohibited characteristic(s) of the residents of the area in which the credit seeker resides or will reside, or in which the residential property to be mortgaged is located”. 

AI’s potential to change lending patterns could heighten redlining risk. Bad actors can engage in redlining through biased, automated credit decision-making that disproportionately denies loans to specific groups, and the feature engineering behind these models may intentionally encode biases. Notably, redlining does not require complete avoidance of an area; it can exist whenever applicants are treated differently based on where they live. If certain attributes correlate with protected characteristics, as zip codes can correlate with race or national origin, their purposeful inclusion in a model can perpetuate bias.  

In the rush to realize AI-based efficiency gains, lenders may fail to adequately consider the ethical implications of algorithmic decision-making. Banks should consider regularly employing the following procedures to identify and mitigate potential redlining risks: 

  • Market Studies: Understand the market area and the demographics of the geographies within that area.  
  • Customer Sourcing: Evaluate loan application sourcing methods, including any marketing or outreach efforts and branches. 
  • Peer Group Performance Benchmarking: Assess lending performance within the market area compared to peers, examining application volumes and high denial or withdrawal rates in minority areas using statistical methods (a minimal benchmarking sketch follows this list). 
  • Transparency: Establish transparency for borrowers and lenders to understand how AI-based credit decision-making operates. 
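
For the benchmarking step above, one simple statistical approach is to test whether the lender's denial rate in majority-minority tracts differs from a peer benchmark by more than chance would explain. The counts in the sketch below are illustrative placeholders, not real examination data.

```python
# Minimal denial-rate benchmarking test: compare a lender's denial rate in
# majority-minority census tracts against a peer benchmark with a chi-square
# test. All counts below are hypothetical.
from scipy.stats import chi2_contingency

lender_minority_area = [180, 820]   # [denied, not denied] -> 18% denial rate
peer_minority_area   = [120, 880]   # [denied, not denied] -> 12% denial rate

table = [
    [lender_minority_area[0], peer_minority_area[0]],
    [lender_minority_area[1], peer_minority_area[1]],
]

chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
# A small p-value indicates the gap is unlikely to be chance alone and
# warrants a file review for legitimate, nondiscriminatory explanations.
```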

Model Explainability and Interpretability as an Answer

AI explainability and interpretability can offer a pivotal remedy for both unintentional and deliberate discrimination in credit decisions. By identifying the key attributes that influence model outcomes, at the level of both individual decisions and the overall portfolio, transparent decision records enable continuous monitoring and guard against illegal discrimination, even in models shaped by biased historical data or unethical human motives. If a lender systematically denies credit on pretexts related to race or gender, explainability exposes those pretexts and provides regulators, consumers, and advocates with the information necessary for legal action. Model explainability can help unlock the benefits of the AI revolution, offering an escape from the cycle of using credit as a discriminatory tool. This is a step towards responsible AI use.
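
As one concrete form of such a record, the sketch below uses the shap package to attribute a single credit decision to its input features. The model, dataset, and feature names are hypothetical, and a production system would pair these attributions with adverse-action reasoning and governance controls.

```python
# Minimal sketch of per-decision attribution with SHAP on a tree-based credit
# model. The dataset, features, and target are hypothetical placeholders.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

applications = pd.read_csv("applications.csv")  # hypothetical training data
features = ["income", "debt_to_income", "rent_on_time_rate", "utility_on_time_rate"]

model = GradientBoostingClassifier(random_state=0).fit(
    applications[features], applications["defaulted"]
)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(applications[features])

# Attribution for one applicant: which features pushed the risk score up or
# down, and by how much. These records can be retained for monitoring and for
# explaining adverse decisions.
applicant = 0
for name, value in zip(features, shap_values[applicant]):
    print(f"{name}: {value:+.3f}")
```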