An artificial intelligence system used by the UK government to detect welfare fraud shows bias according to people’s age, disability, marital status, and nationality, the Guardian has found.
A review of a machine learning program used to vet large volumes of Universal Credit payment claims across the UK found that it incorrectly flagged people from some groups more often than others when recommending whom to investigate for possible fraud.
The finding emerged from documents released by the Department for Work and Pensions (DWP) under the Freedom of Information Act. A “fairness analysis” conducted in February of this year found a significant disparity in outcomes in the automated system used to assess claims for Universal Credit advances.
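The DWP has not published the methodology behind its fairness analysis. Purely as an illustration of what a disparity check of this kind involves, the sketch below compares how often each demographic group is flagged for investigation; the group labels and data are hypothetical, not DWP figures or the DWP’s actual method.

```python
# Illustrative sketch only: compares flag rates across demographic groups
# using made-up data. It does NOT reflect the DWP's methodology or findings.
from collections import defaultdict

# Hypothetical records: (group_label, was_flagged_for_investigation)
records = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

flags = defaultdict(int)
totals = defaultdict(int)
for group, flagged in records:
    totals[group] += 1
    flags[group] += flagged  # True counts as 1

rates = {g: flags[g] / totals[g] for g in totals}
baseline = min(rates.values())

for group, rate in sorted(rates.items()):
    # A ratio well above 1.0 suggests the group is flagged
    # disproportionately often relative to the least-flagged group.
    print(f"{group}: flag rate {rate:.0%}, {rate / baseline:.1f}x the lowest-rate group")
```

In a real fairness analysis, raw flag rates would only be a starting point: analysts would also compare error rates across groups, such as how often each group is flagged incorrectly, and test whether any differences are statistically significant.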
The DWP had previously said the AI system raised no discrimination concerns, and the disclosed disparity raises questions about the system’s impact on claimants.
Campaigners have raised concerns about the potential harm caused by the government’s use of such systems and have called for transparency in how they are deployed. They have urged the DWP to take a more cautious approach and to stop deploying tools that risk harming marginalized groups.
The discovery of disparities in automated fraud risk assessment is likely to intensify scrutiny of the government’s use of AI and add to the pressure for greater transparency.
The UK public sector is understood to operate dozens of automated tools, only a fraction of which are officially registered.
The limited transparency around government departments’ use of AI has prompted concerns about potential misuse and about malicious actors manipulating the systems.
The DWP has said that its AI tools do not replace human judgment, and that caseworkers consider all available information when making decisions on benefit fraud cases.
Source: www.theguardian.com