Three hundred and twenty-four. That was the score Mary Louis was given by an AI-powered tenant screening tool. In its 11-page report, the software, made by SafeRent, did not explain how the score was calculated or how the various factors were weighed. There was no mention of what the score actually meant; the report simply presented Louis’s number and deemed it too low. In a box next to the result, it read: “Score Recommended: DECLINE.”
Louis, who works as a security guard, had applied for an apartment in a suburb of eastern Massachusetts. When she toured the unit, the management company told her it saw no problem with her application being accepted. Although she had poor credit and some credit card debt, she had an excellent reference from her landlord of 17 years, who confirmed she had paid her rent on time. She also planned to use a voucher for low-income renters, which guaranteed the management company would receive at least a portion of her monthly rent directly from the government. Her son, also named on the voucher, had a high credit score, suggesting he could serve as a backstop against missed payments.
But in May 2021, more than two months after she applied for the apartment, the management company sent Louis an email informing her that the computer program had rejected her application. Applications needed a score of at least 443 to be accepted. There was no further explanation and no way to appeal the decision.
“Mary, we regret to inform you that your housing offer has been denied due to a third-party service we use to screen all prospective housing applicants,” the email said. “Unfortunately, the SafeRent tenant score for this service was lower than what our tenant standards would allow.”
Tenant files suit
Louis ended up renting a more expensive apartment, where management did not grade her with an algorithm. But she learned that her experience with SafeRent was not unique. She is one of more than 400 Black and Hispanic tenants using housing vouchers in Massachusetts who said their rental applications were rejected because of their SafeRent scores.
In 2022, they banded together to sue SafeRent under the Fair Housing Act, alleging that the company discriminated against them. Louis and the other named plaintiff, Monica Douglas, claimed the company’s algorithm disproportionately scored Black and Hispanic renters who use housing vouchers lower than white applicants. They alleged the software weighed irrelevant information, such as credit scores and non-housing-related debt, in assessing whether someone would be a good tenant, while failing to take into account that they would be using a housing voucher. Research shows that Black and Hispanic renters are more likely to have lower credit scores and to use housing vouchers than white applicants.
“It was a waste of time waiting to be turned down,” Louis said. “I knew my credit was bad, but the AI doesn’t know my behavior. It knew I had fallen behind on my credit card payments, but it didn’t know that I pay my rent.”
Two years have passed since the group first sued SafeRent. Louis, one of the two named plaintiffs, said she has moved on with her life and largely forgotten about the lawsuit. But her case could protect other renters who use a similar housing program, known as Section 8 vouchers, from losing housing because of algorithmically determined scores.
SafeRent settled with Louis and Douglas. In addition to paying $2.3 million, the company agreed to stop using a scoring system or making any kind of recommendation for prospective tenants who use housing vouchers for five years. Although SafeRent did not legally admit wrongdoing, it is unusual for a tech company to accept changes to its core product as part of a settlement; a financial payout is the more common outcome of such agreements.
“While SafeRent continues to believe that SRS scores comply with all applicable laws, litigation is time-consuming and costly,” company spokesperson Yazmin Lopez said in a statement. “It became increasingly clear that defending the SRS score in this case would divert time and resources SafeRent could better use to fulfill its core mission of providing housing providers with the tools they need to screen applicants.”
New AI landlord
Tenant screening systems like SafeRent are often used as a way to avoid engaging directly with prospective tenants and to shift responsibility for denials onto a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs in the lawsuit.
The property management company told Louis the decision to deny her was made solely by the software, but the SafeRent report indicates it was the management company that set the threshold score an application needed to reach in order to be accepted.
Still, even for those involved in the application process, how the algorithm works is opaque. The property manager who showed Louis the apartment said he did not know why she was having trouble renting it.
“They’re inputting a lot of information, and SafeRent is coming up with its own scoring system,” Kaplan said. “That makes it difficult for people to predict how SafeRent will view them. Not only applicants but even landlords don’t know the details of the SafeRent score.”
As part of Louis’s settlement with SafeRent, approved on 20 November, the company can no longer use a scoring system or recommend whether to accept or reject tenants who are using housing vouchers. If the company devises a new scoring system, it is required to have it independently validated by a third-party fair housing organization.
“By removing the thumbs up or thumbs down, tenants can really say, ‘I’m a great tenant,’” Kaplan said. “It allows for more individualized decisions.”
AI extends to fundamental parts of life
Nearly all of the 92 million people in the US who are considered low-income have had basic aspects of their lives, such as employment, housing, healthcare, education and government assistance, decided by AI, according to a new report on the harms of AI by Kevin De Liban, an attorney who represented low-income people with the Legal Aid Society and the founder of a new AI justice organization called TechTonic Justice.
De Liban began researching these systems in 2016, when several patients sought his help after automated decision-making that reduced human input suddenly put their state-funded home care out of reach for extended periods. In one case, the state’s Medicaid allocation relied on a program that determined the patient had no problems with their leg because it had been amputated.
“When we saw this, we realized we shouldn’t defer to [AI systems] as a kind of supremely rational decision-making method,” De Liban said. He said these systems make assumptions based on “junk statistical science” that produce what he called “absurdities”.
In 2018, after De Liban sued the Arkansas Department of Human Services on behalf of these patients over its decision-making process, the state legislature determined that the agency could no longer automate decisions about patients’ home care allocations. It was an early victory in the fight against harm caused by algorithmic decision-making, though such systems continue to be used across the country in other areas, such as employment.
Despite flaws, few regulations curb AI adoption
There are few laws restricting the use of AI, especially in critical decisions that can affect a person’s quality of life, and people harmed by automated decisions have few means of recourse.
A study conducted by Consumer Reports and released in July found that a majority of Americans are “uncomfortable with the use of AI and algorithmic decision-making technologies in key life moments related to housing, employment, and health care”. Respondents said they were concerned about not knowing what information AI systems use to make their assessments.
Unlike in Louis’s case, people are often not informed when algorithms make decisions about their lives, making it difficult to challenge or appeal those decisions.
“The existing laws we have in place may be helpful, but they can only provide so much,” De Liban said. “Market forces don’t work when it comes to poor people. All the incentives are basically to create worse technology, and there’s no incentive for companies to create better options for low-income people.”
Federal regulators under President Joe Biden’s administration made several attempts to keep up with the rapidly evolving AI industry. The president issued an executive order that included a framework intended, in part, to address national security and discrimination-related risks in AI systems. But Donald Trump has vowed to roll back those efforts and cut regulation, including Biden’s executive order on AI.
That makes lawsuits like Louis’s an even more important tool for holding AI accountable. The litigation has already attracted the interest of the US Department of Justice and the Department of Housing and Urban Development, both of which deal with discriminatory housing policies that affect protected classes.
“To the extent that this is a landmark case, it has the potential to provide a roadmap for how to look at these cases and encourage other challenges,” Kaplan said.
Still, De Liban said it will be difficult to hold these companies accountable without regulation. Litigation takes time and money, and companies may find workarounds or build similar products for people not covered by class-action settlements. “You can’t bring these types of cases every day,” he said.
Source: www.theguardian.com