Examining Gender Bias in Facebook’s Job Ads: Insights from France’s Equality Monitoring Regulations

France’s equality regulator has found that Facebook’s job-advertising algorithm discriminates by gender, after an investigation revealed that job ads for mechanics were shown overwhelmingly to men, while ads for kindergarten teaching positions were shown predominantly to women.

The watchdog, the Défenseur des Droits, found that Facebook’s targeted job-ad system treats users differently according to their gender, which constitutes indirect discrimination. The regulator recommended that Facebook and its parent company, Meta, take measures to eliminate discriminatory practices in advertising, and gave the company three months to report back to the French authorities on its actions.

According to the regulator’s ruling, “The system implemented for distributing job listings treats Facebook users differently based on their gender, thereby resulting in indirect gender discrimination.”

The ruling followed an initiative by Global Witness, a campaign group that examines the influence of major tech firms on human rights, which posted advertisements on Facebook linking to job vacancies in France, the UK, Ireland, and South Africa.

The findings showed that in France, 90% of the people shown ads for mechanic positions were men, while the same proportion of those shown kindergarten teacher ads were women. Similarly, 80% of those shown psychologist job ads were women, and 70% of those shown pilot job ads were men.

Global Witness and the French women’s rights organizations La Fondation des Femmes and Femme Ingénue, which had approached the campaign group, welcomed the ruling.

In a joint statement, they remarked, “This seems to be the first instance where a European regulator has ruled that a social media platform’s algorithms exhibit gender discrimination, marking significant progress in holding these platforms accountable under existing legislation.”

“This decision conveys a powerful message to all digital platforms that they will be held responsible for such biases,” stated attorney Josephine Sheffet, representing the plaintiffs. “This legal principle establishes a crucial precedent for future legal actions.”

Meta disputed the ruling, with a spokesperson stating: “We disagree with this decision and are exploring our options.”

Meta had already agreed to modify Facebook’s ad algorithms in 2022 after the US Department of Justice alleged that the platform’s housing advertising system discriminated against users on the basis of characteristics such as race, religion, and gender.

Source: www.theguardian.com

Reducing Bias, Improving Recruitment: How AI is Revolutionizing Hiring for Small Businesses

Artificial intelligence is trained on content created by humans (actual intelligence, as it were). To teach AI to write fiction, you feed it novels; to teach it to write job specifications, you feed it job descriptions. But this approach has a problem: however hard we try to eliminate them, humans inherently carry biases, and AI trained on human-created content can absorb those biases. Overcoming bias is one of AI’s most significant challenges.

“Bias is prevalent in hiring and stems from the existing biases in most human-run recruitment processes,” explains Kevin Fitzgerald, UK managing director of the employment management platform Employment Hero, which uses AI to streamline recruitment and minimize bias. “The biases present in the recruitment team are embedded in the process itself.”

One way AI can address bias is through blind screening, as in Employment Hero’s SmartMatch tool. By focusing on candidates’ skills and abilities while omitting demographic information such as gender and age, the system reduces the opportunity for bias. This contrasts with traditional channels such as LinkedIn profiles and CVs, which can unintentionally reveal personal details.
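Employment Hero has not published SmartMatch’s internals, but the blind-screening concept is simple to illustrate. Below is a minimal Python sketch of the idea; the field names and profile structure are hypothetical, not the platform’s actual schema.

```python
# Minimal sketch of "blind screening": strip demographic fields from a
# candidate profile before it reaches any matching or ranking step.
# Field names are hypothetical, not Employment Hero's actual schema.

DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "date_of_birth", "photo_url", "nationality"}

def redact_profile(profile: dict) -> dict:
    """Return a copy of the profile with demographic fields removed,
    keeping only skill- and experience-related data."""
    return {k: v for k, v in profile.items() if k not in DEMOGRAPHIC_FIELDS}

candidate = {
    "name": "A. Example",
    "gender": "F",
    "age": 34,
    "skills": ["diagnostics", "hydraulics", "customer service"],
    "years_experience": 8,
}

print(redact_profile(candidate))
# {'skills': ['diagnostics', 'hydraulics', 'customer service'], 'years_experience': 8}
```

Whatever matching logic runs after the redaction step never sees the removed attributes, so it cannot condition on them, deliberately or otherwise.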

AI helps businesses tackle bias when screening for CVs. Photo: Fiordaliso/Getty Images

Another concern is how AI processes information compared with humans. People pick up on nuance and subtlety; an AI that relies on simple keyword matching does not. To address this, tools like SmartMatch evaluate a candidate’s entire profile, aiming for a holistic view so that good candidates are not missed simply because they described a skill in different words.
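The difference is easy to show with a toy example. The snippet below contrasts exact keyword overlap with a matcher that first maps related terms onto shared concepts; it is purely illustrative and not how SmartMatch is actually built.

```python
# Toy contrast between naive keyword matching and a more holistic
# comparison. Purely illustrative; not SmartMatch's actual method.

job_keywords = {"automotive", "repair", "diagnostics"}

candidate_text = "Eight years fixing cars: engine troubleshooting and fault-finding."
tokens = set(candidate_text.lower().replace(":", " ").replace(".", " ").split())

# 1) Exact-token keyword matching misses the candidate entirely.
print(job_keywords & tokens)  # set() -- zero overlap, candidate scores nothing

# 2) Mapping related terms onto shared concepts recovers the match.
CONCEPTS = {
    "automotive": {"automotive", "cars", "vehicles"},
    "repair": {"repair", "fixing", "troubleshooting"},
    "diagnostics": {"diagnostics", "fault-finding"},
}
matched = {concept for concept, synonyms in CONCEPTS.items() if synonyms & tokens}
print(matched)  # all three concepts match
```

Production systems typically use learned representations of whole profiles rather than a hand-built synonym table, but the failure mode of pure keyword matching is the same.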

SmartMatch not only assists in matching candidates with suitable roles but also helps small businesses understand their specific hiring needs. By analyzing previous hires and predicting future staffing requirements, SmartMatch offers a comprehensive approach to recruitment.

Understanding SME needs and employment history allows SmartMatch to introduce you to suitable candidates. Photo: Westend61/Getty Images

By offering candidates the ability to maintain an employment passport, Employment Hero empowers both job seekers and employers. This comprehensive approach to recruitment ensures that both parties benefit from accurate and efficient matches.

For small and medium-sized businesses, the impact of poor hiring decisions can be significant. By utilizing advanced tools like SmartMatch, these businesses can access sophisticated recruitment solutions previously available only to larger companies.


Source: www.theguardian.com

Bias Found in AI System Used to Detect UK Benefits Fraud

The Guardian has learned that an artificial intelligence system used by the UK government to detect welfare fraud shows bias according to people’s age, disability, marital status, and nationality.

An internal review of a machine learning program used to vet thousands of Universal Credit payment claims across the UK found that it incorrectly flagged people from some groups more often than others when recommending whom to investigate for possible fraud.

The revelation comes from documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). A “fairness analysis” carried out in February of this year found a significant discrepancy in outcomes in the automated system used to vet claims for Universal Credit advances.
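The DWP has not published the details of its analysis, but the core check in an outcome-fairness review of this kind is straightforward: compare how often each group is flagged for investigation. Here is a minimal sketch; the group labels and figures are invented for illustration.

```python
# Sketch of the core check in an outcome-fairness analysis: compare the
# rate at which each group is flagged for investigation. All figures
# below are invented; the DWP has not published its data.

flagged = {"under_35": 180, "35_and_over": 90}     # claims flagged per group
total   = {"under_35": 2000, "35_and_over": 2000}  # claims assessed per group

rates = {group: flagged[group] / total[group] for group in flagged}
print(rates)  # {'under_35': 0.09, '35_and_over': 0.045}

# A common rule of thumb (the "four-fifths rule"): if one group's
# selection rate is below 80% of another's, the disparity warrants review.
ratio = min(rates.values()) / max(rates.values())
print(f"selection-rate ratio: {ratio:.2f}")  # 0.50 -- well below 0.8
```

A real fairness analysis would also test whether such a gap is statistically significant rather than an artefact of small samples, but the underlying comparison is this simple.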

The DWP had previously claimed that the AI system presented no discrimination concerns, so the emergence of this bias raises important questions about its impact on claimants.

Campaigners have raised concerns about the potential harm caused by the government’s approach and have called for greater transparency in its use of AI systems.

The DWP has been urged to adopt a more cautious approach and cease the deployment of tools that pose a risk of harm to marginalized groups.

The discovery of disparities in fraud risk assessment by automated systems may lead to increased scrutiny of the government’s use of AI, emphasizing the need for greater transparency.

The UK public sector is known to use a large number of automated and AI tools, but only a fraction of them appear on the official public register.

The lack of transparency around government departments’ use of AI systems has drawn criticism, with officials in some cases arguing that revealing how the tools work could allow malicious actors to manipulate or game them.

The DWP has stated that their AI tools do not replace human judgment and that caseworkers evaluate all available information when making decisions related to benefits fraud.

Source: www.theguardian.com

China-Backed GeoGPT Chatbot Under Scrutiny for Potential Censorship and Bias, Say Geologists

Geologists have raised concerns about the development of the GeoGPT chatbot, which is backed by the International Union of Geological Sciences (IUGS), warning that it may be subject to Chinese censorship or bias.

Targeting geoscientists and researchers, especially in the Southern Hemisphere, GeoGPT aims to enhance the understanding of geosciences by utilizing extensive data and research on the Earth’s history spanning billions of years.

This initiative is part of the Deeptime Digital Earth (DDE) program, established in 2019 and primarily funded by China to promote international scientific cooperation and help countries achieve the UN’s Sustainable Development Goals.

One component of GeoGPT’s AI technology is Qwen, a large language model created by the Chinese tech company Alibaba. Geologist and computer scientist Professor Paul Cleverley, who tested a pre-release version of the chatbot, raised his concerns in an article in the Geoscientist journal.

In response, DDE principals stated that GeoGPT also incorporates another language model, Meta’s Llama, and disputed claims of state censorship, emphasizing the chatbot’s focus on geoscientific information.

According to DDE, the issues identified with GeoGPT have largely been addressed, and further enhancements are under way; the system has not yet been released to the public. Notably, geoscience data can include commercially valuable information crucial to the green transition.

During testing, Qwen’s responses to geoscience-related questions raised concerns that the chatbot could reflect Chinese state narratives, prompting discussion of data transparency and bias.

How GeoGPT will respond to sensitive queries, especially those with geopolitical implications, remains uncertain pending further development and scrutiny of the chatbot.

DDE has given assurances that GeoGPT will not be subject to censorship by any nation state and that users will be able to choose between the Qwen and Llama models.

While GeoGPT’s development under an international research collaboration adds a degree of transparency, concerns persist about the potential filtering of information and the strategic implications for mineral exploration.

GeoGPT’s database is still being reviewed against governance standards, and the training data is to be open to scrutiny when the chatbot is publicly released, to ensure accountability and transparency.

Despite the significant funding and logistical support from China, the collaborative nature of the DDE aims to foster scientific discoveries and knowledge sharing for the benefit of global scientific communities.

Source: www.theguardian.com