Tens of thousands oppose a proposed HUD rule enumerating affirmative defenses for housing discrimination caused by algorithmic models.

LEGAL • TECHNOLOGY • BUSINESS 11.25.19

UPDATED 01.24

This past August, HUD (the U.S. Department of Housing and Urban Development) released a proposed rule that "creates three affirmative defenses for housing providers, banks, and insurance companies that use algorithmic models to make housing decisions," The Electronic Frontier Foundation reported.

These affirmative defenses would "effectively insulate" these companies and "protect them even if an algorithm they used had a demonstrably discriminatory impact."

HUD said the defenses are "not meant to create a special exemption for parties using algorithmic models," and that "it wants to make it easier for companies to make practical business choices and profit-related decisions."

Public comments on the rule closed in October. HUD hasn't given a timeline for its implementation.

Considerable opposition claims the rule weakens disparate impact protections; HUD claims it will strengthen them.

Over 45,000 individuals and organizations, including 13 former Department of Justice officials, 22 Attorneys General, an FTC Commissioner, and members of Congress, submitted comments to HUD in opposition, the National Fair Housing Alliance reported.

Last week, Congresswoman Maxine Waters wrote to HUD Secretary Ben Carson opposing the new rule for creating "safe harbors for defendants who use algorithms," which let them "avoid liability."

The Fair Housing Act was enacted in 1968 to ensure equal housing opportunities. It protects individuals from housing discrimination on the basis of protected characteristics such as race, color, sex, national origin, religion, familial status, or disability. And it prohibits not only intentional discrimination, but also unintentional discrimination arising out of a seemingly neutral practice, such as an algorithmic risk prediction model.
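To make the statistical idea behind disparate impact concrete, the short Python sketch below compares the approval rates a facially neutral screening rule produces for two groups of applicants. Everything in it is hypothetical: the data is invented, and the 0.8 flag threshold is borrowed from employment law's "four-fifths" heuristic rather than anything the Fair Housing Act or HUD prescribes.

```python
# Hypothetical illustration of a disparate impact check on screening outcomes.
# The data and the 0.8 threshold are assumptions for demonstration only.

def approval_rate(decisions):
    """Fraction of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

# Outcomes of a facially neutral screening rule for two hypothetical groups.
protected_group = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 6 of 10 approved
comparison_group = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # 9 of 10 approved

rate_p = approval_rate(protected_group)    # 0.60
rate_c = approval_rate(comparison_group)   # 0.90

# Adverse-impact ratio: the protected group's approval rate relative to the
# comparison group's. A ratio well below 1.0 suggests the "neutral" rule is
# screening out one group disproportionately.
ratio = rate_p / rate_c
print(f"approval rates: {rate_p:.2f} vs {rate_c:.2f}, ratio = {ratio:.2f}")

if ratio < 0.8:  # illustrative threshold only
    print("disparity large enough to warrant a closer look")
```

A gap like this is only the starting point of a disparate impact claim; as discussed below, the plaintiff must still tie the statistics to a specific policy.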

In 2013, the Obama administration issued a regulation to "ensure continued strength" of the Fair Housing Act, and make it "easier to enforce" in order to "combat housing discrimination by lenders, insurers, landlords and municipalities," The Wall Street Journal reported.

HUD's general counsel Paul Compton said the proposed rule would "create clearer legal standards in line" with Texas Department of Housing and Community Affairs v. Inclusive Communities Project, the U.S. Supreme Court decision that upheld the concept of disparate impact but also said a plaintiff should "draw a causal relationship between statistical analysis and a specific policy."

He said disparate impact is a "nebulous concept and complex doctrine that, using statistics, could be used to challenge many, many actions," The Wall Street Journal reported.

"The housing market industry is "relying on algorithmic-based systems for more and more," National Fair Housing Alliance president, Lisa Rice, told The New York Times. They are widely used for rental screening, underwriting mortgages, determining insurance costs, and for targeted housing offers.

But these systems are "seldom designed to take protected characteristics into account, yet they still have the capacity for protected-class discrimination... Indeed, the type of discrimination that algorithmic models create is precisely the type of discrimination that the existing disparate impact test was designed to uncover," The Center for Democracy and Technology wrote.

Under the proposed rule, all a defendant would have to show to defeat a claim at the prima facie stage is that "the model's inputs do not include close proxies for protected classes, that a neutral third party determined that the model has predictive value, or that a third party created the model," The Center for Democracy and Technology wrote. Effectively, "most algorithmic models that caused cognizable disparate impacts" would never reach a judicial "fact-specific inquiry."
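One technical wrinkle in the input-based defense: a model can contain no single "close proxy" for a protected class while its inputs jointly reconstruct protected-class membership. The NumPy sketch below is a hypothetical illustration of that point; the synthetic data, variable names, and correlation figures are assumptions for demonstration, not a description of any real screening model.

```python
import numpy as np

# Hypothetical illustration: no individual input is a close proxy for the
# protected class, but a simple combination of inputs is.
rng = np.random.default_rng(0)
n = 100_000

# Protected-class membership (never handed to the model directly).
protected = rng.integers(0, 2, n)

# Two model inputs. Each reflects protected status only weakly on its own
# because of heavy, partly shared noise.
shared_noise = rng.normal(0, 2.0, n)
x1 = protected + shared_noise
x2 = protected - shared_noise + rng.normal(0, 0.5, n)

def corr(a, b):
    """Absolute Pearson correlation between two vectors."""
    return abs(np.corrcoef(a, b)[0, 1])

# An input-by-input "close proxy" screen sees only modest correlations...
print("x1 vs protected:", round(corr(x1, protected), 2))   # ~0.24
print("x2 vs protected:", round(corr(x2, protected), 2))   # ~0.24

# ...but the model is free to combine inputs, and the shared noise cancels,
# leaving a strong proxy for the protected class.
print("x1 + x2 vs protected:", round(corr(x1 + x2, protected), 2))  # ~0.89
```

Detecting this kind of combined proxying is exactly the fact-specific question that, under the proposed defenses, a claim might never reach.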

Sherrilyn Ifill, president and director-counsel of the NAACP (National Association for the Advancement of Colored People) Legal Defense and Educational Fund, said it's "an incredible and extraordinary burden" that would make it "virtually impossible to prevail," and that the affirmative defenses given to defendants are "astonishing," The New York Times reported.

"Defendants could defeat claims that target their computer models by showing that the model is a standard in the industry. 'This means that if an entire industry is engaged in discrimination, it essentially insulates the discriminatory conduct of an individual actor,' ” The New York Times.

HUD Secretary Ben Carson wrote in National Review, "these changes will lead to more innovation and an increase of lower-cost housing and related services."

They will "make disparate-impact liability work better and more fairly," and "provide plaintiffs with a roadmap for pleading stronger cases," Carson wrote.

Weakens disability-housing protections.

Automating decisions and eliminating "judgment calls" is the problem, not the solution. When these tools leave people out of the decision-making process, they are "likely to collapse complex matters into simple, algorithmically generated pass-fail mechanisms, leaving behind people looking for a home," The Verge wrote.

Disability discrimination complaints make up the largest percentage of housing discrimination complaints. The proposed rule would make it easier for landlords to use algorithms to deny housing, and harder to challenge housing policies that disproportionately burden the disabled, Disability Scoop reported.

People who are blind, deaf or hard of hearing, or autistic, and those with mental health and mobility impairments "face additional barriers securing affordable housing that is also accessible," The Bazelon Center for Mental Health Law wrote.

"Compounding this concern," the disabled live in poverty at more than twice the rate of the general population. Securing affordable housing is an "acute problem" since only 32% are employed, compared to 73% of those not disabled, Bazelon Center for Mental Health Law.

"Non-elderly adults with significant disabilities in our nation are often forced into homelessness or segregated, restrictive, and costly institutional settings such as psychiatric hospitals, adult care homes, nursing homes, or jails," the Disability Rights Education and Defense Fund wrote.

"The inability to preserve housing will not only put people with disabilities at risk of homelessness and institutionalization, but will likely increase costs to state and local governments, which will incur the costs of institutionalization, shelter placements, and emergency department visits," The Disability Rights Education Defense Fund.

Courts have ruled against government uses of algorithms that led to "numerous problems" for disability rights.

Although ADS (Algorithmic Decision Systems) are heralded for their "promise" of cost reduction, accuracy, and presumably reduced human bias and error, their use has resulted in "numerous problems" for the disabled, according to AI Now's Litigating Algorithms 2019 US Report.

The report reviews outcomes from "the first wave" of U.S. lawsuits examined in AI Now's 2018 report, and looks at new "collateral consequences of erroneous" uses of ADS.

"Increased reliance on algorithmic tools highlights how ADS are overly relied upon by case managers, state employees, and others such that the outcome of the tool is given greater weight than the individual's needs," Elizabeth Edwards, National Health Law Program via AI Now Litigating Algorithms 2019 US Report.

In K.W. ex rel. D.W. v. Armstrong, the State of Idaho used a new ADS to determine benefits for adults with intellectual and developmental disabilities. Benefits were drastically reduced, "leading to horrific living conditions." The State invoked a "trade secret" defense to withhold the algorithm, which led to a lawsuit.

The ACLU won on the merits at summary judgment after obtaining the formula in discovery. The Court found the formula "unconstitutionally arbitrary" and ordered the State to fix it and to provide support and assistance for those impacted. The case was eventually settled, with the plaintiffs receiving injunctive relief and the State "held equally responsible for future implementations of the algorithms or formulas." The case remains ongoing as the parties address "the exact scope and time frame for fixing the formula."

This case raised the question of whether it was strategically better to challenge the formula itself or the entire practice of using ADS for Medicaid benefits. ADS "overcame the bias of human decision-making" in some cases and "reinforced it" in others, sometimes increasing benefits and at other times "dramatically" cutting them.

This made it "difficult to isolate the role of ADS from the culture and personnel of the State department administering them."

In Arkansas Department of Human Services v. Ledgerwood, Arkansas used an algorithm to determine home health care hours for people with physical disabilities. The result was an abrupt reduction in patient care, leading patients to develop bedsores and live in unhygienic conditions.

The algorithm was invalidated after almost three years of litigation, and the Court granted injunctive relief. Subsequently, the State began using a new system "with some improvements," which assigned the new assessments to nurses and allowed them "modest discretion" to determine patient care hours.

However, "new problems emerged with the administration of the assessment," which shows that accountability issues must "include the culture and personnel of the agencies," as well as the "ADS themselves."

Under the proposed rule, if defendants can show that the algorithm was developed by a third party or is considered an industry standard, the case would not proceed to judicial inquiry, and the culture and personnel of the agencies administering the ADS would never be implicated in questions of accountability.

Elaine Sarduy is a freelance writer and content developer @Listing Debuts