Unlocking Genetic Data: The Risks of Polygenic Risk Scores
Genetic data can be analyzed to estimate the risk of developing specific health conditions. Science Photo Library / Alamy
Polygenic risk scores (PRS) summarize an individual’s likelihood of developing particular health conditions, distilling information from a person’s DNA into a single number. New research suggests these scores could be exploited—for example, by health insurance companies—to reconstruct genetic data from summary genomic reports, uncovering health risks that patients might not disclose. Likewise, individuals who share their scores anonymously could be identified by extracting genetic data from the scores and querying public genealogy databases.
Understanding Polygenic Risk Scores
Polygenic risk scores measure the impact of variations in tens to thousands of specific letters in the genome, known as single nucleotide polymorphisms (SNPs). Researchers and DNA testing companies like 23andMe use these scores to summarize potential health risks, which may also be made public by individuals seeking advice on score interpretation.
Reversing a polygenic risk score is akin to deducing a phone number knowing only that its digits sum to a specific value—an instance of the mathematical challenge known as the knapsack problem. This complexity is why PRS have been considered a low privacy risk.
However, each SNP value in the score is multiplied by a highly precise weight—specified to as many as 16 decimal digits—reflecting its contribution to overall disease risk. That precision makes even supposedly low-risk models vulnerable to reconstruction attacks.
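To see why high-precision weights undermine the knapsack’s protection, consider a toy sketch (a hypothetical illustration, not the researchers’ actual method): when each weight carries many digits, essentially only one combination of genotypes reproduces a given score exactly, so a brute-force search recovers the donor’s genotype.

```python
import itertools
import random

random.seed(0)

# Toy model: 8 SNPs, each genotype is 0, 1, or 2 copies of the risk allele.
n_snps = 8

# High-precision per-SNP effect sizes, stored here as 12-digit fixed-point
# integers so the score comparison below is exact.
weights = [random.randint(-10**12, 10**12) for _ in range(n_snps)]

def prs(genotype):
    """Polygenic risk score: weighted sum of risk-allele counts."""
    return sum(g * w for g, w in zip(genotype, weights))

true_genotype = tuple(random.choice([0, 1, 2]) for _ in range(n_snps))
score = prs(true_genotype)

# Brute-force the knapsack: enumerate all 3**8 = 6561 possible genotypes
# and keep those that reproduce the score exactly.
matches = [g for g in itertools.product([0, 1, 2], repeat=n_snps)
           if prs(g) == score]

print(len(matches), matches[0] == true_genotype)
```

Because random 12-digit weights are almost surely free of coincidental integer cancellations, the search returns a single match: the donor’s genotype. Real models involve far more SNPs and rounded published scores, but the same principle scales with a cleverer search.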
Research Findings on Genetic Risk Scores
According to Gamze Gürsoy at Columbia University, “The final polygenic risk score can be estimated with a high degree of accuracy because it is constrained by the finite methodology used to reach that figure and the statistically probable arrangement of the underlying SNPs.” Gürsoy, alongside Kiril Nikitin, also at Columbia, ran experiments on 298 polygenic risk models built from data on 2,353 individuals. They worked backwards to compute all possible genomes that could generate each score, excluding those requiring numerous rare mutations.
As a result, they were able to reconstruct donor genotypes with an impressive 94.6% accuracy and accurately predicted 2,450 SNPs per person. Testing revealed that just 27 SNPs were sufficient to identify an individual from a pool of 500,000 samples, with up to 90% accuracy in predicting family relationships. Interestingly, individuals of African and East Asian descent were easier to identify, largely due to underrepresentation in available genetic databases.
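A rough back-of-envelope calculation suggests why a few dozen SNPs suffice to single someone out of 500,000 samples. Assuming independent SNPs with an illustrative risk-allele frequency of 0.3 (real SNPs are correlated and vary in frequency, so this is only a sketch):

```python
# Probability that two unrelated people share the same genotype at one
# biallelic SNP, under Hardy-Weinberg proportions with allele frequency p.
p = 0.3
geno_freqs = [(1 - p) ** 2, 2 * p * (1 - p), p ** 2]  # 0, 1, 2 copies
match_one = sum(f ** 2 for f in geno_freqs)           # ~0.42

# Expected number of coincidental full matches at k SNPs in a database of N.
k, N = 27, 500_000
expected_false_matches = N * match_one ** k
print(f"{expected_false_matches:.1e}")  # far below 1: a full match is
                                        # almost certainly the same person
```

With 27 SNPs the space of possible genotype combinations vastly exceeds the database size, so an exact match is overwhelmingly likely to be the target individual rather than a coincidence.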
Mitigating Risks and Ethical Considerations
Gürsoy highlights that 447 small, high-precision models in the public database of polygenic scores are susceptible to such attacks. “I wanted to emphasize that the risk is low; however, [certain conditions] still present the potential for data leakage, which must be considered in study planning, especially when vulnerable populations are involved,” Gürsoy says.
Researchers at Massachusetts General Hospital believe existing data protection methods and computational barriers limit the potential misuse of polygenic risk scores. “These findings serve as a crucial reminder that small models should be treated as sensitive data in clinical reporting and informed consent discussions,” they add.
Three hundred and twenty-four. That was the score Mary Louis was given by an AI-powered tenant-screening tool. In its 11-page report, the software, SafeRent, did not explain how the score was calculated or how various factors were weighed. There was no mention of what the score actually meant. The management company simply saw Louis’s number and decided it was too low. In the box next to the result, the report said: “Score Recommended: DECLINE.”
Louis, who works as a security guard, had applied for an apartment in suburban eastern Massachusetts. When she toured the unit, the management company told her it foresaw no problem with her application being accepted. Although she had a low credit score and some credit card debt, she had a glowing reference from her landlord of 17 years, who said she always paid her rent on time. She also planned to use a voucher for low-income renters, which guarantees that the management company receives at least a portion of the monthly rent directly from the government. Her son, whose name was also on the voucher, had a high credit score that could serve as a backstop against missed payments.
But in May 2021, more than two months after she applied for the apartment, the management company sent Louis an email informing her that the computer program had rejected her application. Applications needed a score of at least 443 to be accepted. There was no further explanation and no way to appeal the decision.
“Mary, we regret to inform you that your application has been denied by the third-party service we use to screen all prospective housing applicants,” the email said. “Unfortunately, the SafeRent tenant score for this service was lower than our tenant standards allow.”
Tenant files suit
Louis ended up renting a more expensive apartment. Management there did not grade her with an algorithm. But she learned that her experience with SafeRent was not unique: she is one of more than 400 Black and Hispanic tenants in Massachusetts using housing vouchers who said their rental applications were rejected because of their SafeRent scores.
In 2022, they banded together to sue SafeRent under the Fair Housing Act, alleging that it discriminated against them. Louis and another named plaintiff, Monica Douglas, claimed the company’s algorithm disproportionately scored Black and Hispanic renters using housing vouchers lower than white applicants. They argued that the software weighed irrelevant account information—credit scores and non-housing-related debt—in judging whether someone would be a good tenant, while failing to consider that a housing voucher guarantees part of the rent. Research shows that Black and Hispanic renters are more likely to have lower credit scores and to use housing vouchers than white applicants.
“It was a waste of time waiting to be turned down,” Louis said. “I knew my credit was bad, but the AI doesn’t know my behavior. It knew I had fallen behind on my credit card payments, but it didn’t know that I always pay my rent.”
Two years have passed since the group first sued SafeRent. Louis, one of the two named plaintiffs, said she has moved on with her life and had largely forgotten about the lawsuit. But her action could protect other renters in the same housing program, known as Section 8 vouchers, from losing a home because of an algorithmically determined score.
SafeRent settled with Louis and Douglas. In addition to paying $2.3 million, the company agreed not to use its scoring system, or make any kind of recommendation, on prospective tenants who use housing vouchers for five years. Although SafeRent did not legally admit wrongdoing, it is unusual for a tech company to accept changes to its core product as part of a settlement; the more common outcome of such agreements is purely financial.
“While SafeRent continues to believe the SRS scores comply with all applicable laws, litigation is time-consuming and expensive,” company spokesperson Yazmin Lopez said in a statement. “It became increasingly clear that defending the SRS score in this case would divert time and resources SafeRent can better use to fulfill its core mission of giving housing providers the tools they need to screen applicants.”
New AI landlord
Tenant-screening systems like SafeRent are often used as a way to avoid engaging directly with prospective tenants and to shift the responsibility for a refusal onto a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs in the lawsuit.
The property management company told Louis it had denied her based solely on the software, but the SafeRent report makes clear that it was the management company, not SafeRent, that set the threshold score an application needed to reach to be accepted.
Still, even for those involved in the application process, how the algorithm works is opaque. The property manager who showed Louis the apartment said he didn’t know why Louis was having trouble renting the apartment.
“They’re inputting a lot of information, and SafeRent is coming up with its own scoring system,” Kaplan said. “That makes it difficult for people to predict how SafeRent will assess them. Not just applicants—even landlords don’t know the details of the SafeRent score.”
As part of Louis’s settlement with SafeRent, approved Nov. 20, the company may no longer use a scoring system or recommend accepting or rejecting applicants who use housing vouchers. If it devises a new scoring system, it is required to have it independently validated by a third-party fair-housing organization.
“By removing the thumbs up and down, tenants can really say, ‘I’m a great tenant,'” Kaplan said. “It allows for more personal decisions.”
One report on the harms of AI, by Kevin De Liban—a lawyer who represented low-income people at Legal Aid and founder of a new AI-justice organization called TechTonic Justice—found that nearly all of the 92 million people in the US who are considered low-income have had some basic aspect of their lives, such as employment, housing, health care, education, or government assistance, subjected to AI decision-making.
De Liban began investigating these systems in 2016, after several patients came to him for help: automated decision-making that minimized human input had abruptly cut off the state-funded home care they had long relied on. In one case, the state’s Medicaid system assessed that a patient had no problems with her leg—because it had been amputated.
“Seeing this, we realized we shouldn’t defer to [AI systems] as some supremely rational way of making decisions,” De Liban said. These systems, he said, make assumptions based on “junk statistical science” that produce what he called “absurdities.”
In 2018, after De Liban sued the Arkansas Department of Human Services on behalf of those patients over its decision-making process, the state legislature barred the department from fully automating decisions about patients’ home-care allocations. It was an early victory in the fight against the harms of algorithmic decision-making, though such systems remain in use across the country in other areas, such as employment.
Despite flaws, few regulations curb AI adoption
Few laws restrict the use of AI, especially in critical decisions that can affect a person’s quality of life, and people harmed by automated decisions have few means of recourse.
A study by Consumer Reports released in July found that a majority of Americans are “uncomfortable with the use of AI and algorithmic decision-making technologies in key life moments related to housing, employment, and health care.” Respondents said they were concerned about not knowing what information AI systems use to assess them.
Unlike in Louis’s case, people are often not informed when algorithms make decisions about their lives, which makes those decisions difficult to contest or appeal.
“The existing laws we have in place may be helpful, but they can only provide so much,” De Liban said. “Market forces don’t work when it comes to poor people. All the incentives are basically to create worse technology, and there’s no incentive for companies to create better options for low-income people.”
Federal regulators under President Joe Biden made several attempts to keep up with the rapidly evolving AI industry. Biden issued an executive order that included a framework aimed, in part, at addressing discrimination-related risks in AI systems, along with national-security concerns. But Donald Trump has vowed to roll back those efforts and cut regulations, including Biden’s executive order on AI.
So lawsuits like Louis’s may become an even more important tool for holding AI accountable. The case has already attracted the interest of the US Department of Justice and the Department of Housing and Urban Development, both of which deal with discriminatory housing policies that affect protected classes.
“To the extent that this is a landmark case, it has the potential to provide a roadmap for how to look at these cases and encourage other challenges,” Kaplan said.
Still, De Liban said, without regulation it will be difficult to hold these companies accountable. Litigation is time-consuming and expensive, and companies may find workarounds or build similar products for people not covered by the class-action settlement. “You can’t bring these types of cases every day,” he said.