AI Could Intensify Racism and Sexism in Australia, Warns Human Rights Commissioner

Concerns have been raised that AI could exacerbate racism and sexism in Australia, with the human rights commissioner weighing in as debate over the new technology plays out within the Labor party.

Lorraine Finlay cautioned that while seeking productivity gains from AI is important, they should not come at the cost of discrimination, which could become entrenched if the technology remains unregulated.

Finlay’s remarks came after Labor senator Michelle Ananda-Rajah advocated for the “liberation” of Australian data to tech companies, arguing that AI trained only on overseas data reflects and perpetuates foreign biases while shaping local culture.

Ananda-Rajah opposes a dedicated AI law but emphasizes that content creators ought to be compensated for their contributions.


Discussions about enhancing productivity through AI are scheduled for the upcoming federal economic summit, as unions and industry groups voice concerns over copyright and privacy issues.

Media and arts organizations have raised alarms about the “rampant theft” of intellectual property if large tech corporations gain access to their content for training AI systems.

Finlay noted the challenges of identifying embedded biases due to a lack of clarity regarding the datasets used by AI tools.

“Algorithmic bias means that discrimination and inequality are inherent in the tools we utilize, leading to outcomes that reflect these biases,” she stated.




Lorraine Finlay, Human Rights Commissioner. Photo: Mick Tsikas/AAP

“The combination of algorithmic and automation biases leads individuals to rely more on machine decisions and potentially disregard their own judgment,” Finlay remarked.

The Human Rights Commission has consistently supported an AI Act that would enhance existing legislation, including privacy laws, and ensure comprehensive testing for bias in AI tools. Finlay urged the government to quickly establish new regulations.

“Bias tests and audits, along with careful human oversight, are essential,” she added.


Evidence of bias in AI technologies is increasingly reported in fields like healthcare and workforce recruitment in Australia and worldwide.

A recent Australian study found that job applicants interviewed by AI recruiters risked discrimination if they spoke with accents or had disabilities.

Ananda-Rajah, a vocal proponent of AI development, warned of the risks of AI systems not being trained on Australian data, saying this could amplify foreign biases.

While acknowledging the need to protect intellectual property, she cautioned against limiting access to domestic data, warning that Australia would otherwise be reliant on overseas AI models without adequate oversight.

“AI requires a vast array of data from diverse populations to avoid reinforcing biases and harming those it aims to assist,” Ananda-Rajah emphasized.

“We must liberate our data to better train our models, ensuring they authentically represent us.”


“I am eager to support content creators while freeing up data, as an alternative to having our resources exploited by foreign companies,” Ananda-Rajah stated.

She cited AI screening tools for skin cancer as an example where algorithmic bias has been documented, arguing that such models must be trained on diverse datasets, with sensitive information protected, so they do not discriminate against the very patients they aim to help.


Finlay agreed that any release of Australian data needs to be handled fairly, but said the focus should be on establishing appropriate regulations.

“It’s certainly beneficial to have diverse and representative data… but that is merely part of the solution,” she clarified.

“We must ensure that this technology is equitable and is implemented in a manner that recognizes and values human contributions.”

Judith Bishop, an AI expert at La Trobe University and former data researcher at an AI firm, asserted that increasing the availability of local data will enhance the effectiveness of AI tools.

“It is crucial to check that systems developed in other contexts are relevant here, as the [Australian] population should not depend exclusively on US data models,” Bishop stated.

eSafety Commissioner Julie Inman Grant has also voiced concerns regarding the lack of transparency related to the data applied by AI technologies.

In her statement, she urged tech companies to be transparent about their training datasets, develop robust reporting mechanisms, and utilize diverse, accurate, and representative data for their products.

“The opacity surrounding generative AI’s development and deployment poses significant issues,” Inman Grant remarked. “This raises critical concerns about the potential for large language models (LLMs) to amplify harmful biases, including restrictive or detrimental gender norms and racial prejudices.”

“Given that a handful of companies dominate the development of these systems, there is a significant risk that certain perspectives, voices, and evidence could become suppressed or overlooked in the generated outputs.”


Source: www.theguardian.com

Blank accounts on Facebook and Instagram were fed sexist and misogynistic content by Meta’s algorithms.

To find out how Facebook and Instagram's algorithms influence what appears in your news feed, Guardian Australia tested them on a completely blank smartphone linked to an unused email address.

Three months later, without any input, it was full of sexist and misogynistic content.

The explore page of Guardian Australia's dummy Instagram account, set up in April. Photo: Instagram

The John Doe profile was created in April, presenting as a typical 24-year-old man. Facebook was able to collect some information about us, such as our phone type and Melbourne location, but because we had opted out of ad tracking, it couldn't know what we did outside the app.

On Facebook, we gave the algorithm little to go on: no likes, comments or friends. Instagram, however, requires new users to follow at least five accounts, so we chose popular suggested ones, such as the prime minister and Bec Judd.

Meta says its algorithm ranks content according to people's interests, but we wanted to see what happens in the absence of such input. We scrolled through our feed every two weeks to see what was on offer.

What did we see?

Initially, Facebook showed jokes about The Office and other sitcom-related memes alongside posts from 7 News, the Daily Mail and LADbible. The next day, it also started showing Star Wars memes and gym and “dudebro”-style content.

By the third day, “traditional Catholic” type memes started appearing and the feed veered towards more sexist content.

Three months later, memes from The Office, Star Wars and The Boys were still appearing in the feed, now interspersed with extremely sexist and misogynistic imagery, despite no input from the user.

On Instagram, the explore page is filled with women in skimpy outfits, but the feed is largely innocuous, mostly Melbourne-related content and foodie influencer recommendations.

An example of a misogynistic meme pushed into the feed of a blank Facebook account. Photo: Facebook

Source: www.theguardian.com