Australia’s human rights commissioner has warned that AI could entrench racism and sexism in Australia, amid internal debate within the Labor party over how to manage the new technology.
Lorraine Finlay cautioned that while seeking productivity gains from AI is important, it should not come at the cost of discrimination if the technology remains unregulated.
Finlay’s remarks came after the Labor MP Michelle Ananda-Rajah advocated for the “liberation” of Australian data for use by tech companies, noting that AI otherwise reflects and perpetuates biases from abroad while shaping local culture.
Ananda-Rajah opposes a dedicated AI law but emphasizes that content creators ought to be compensated for their contributions.
Discussions about enhancing productivity through AI are scheduled for the upcoming federal economic summit, as unions and industry groups voice concerns over copyright and privacy issues.
Media and arts organizations have raised alarms about the “rampant theft” of intellectual property if large tech corporations gain access to content for training AI systems.
Finlay noted the challenges of identifying embedded biases due to a lack of clarity regarding the datasets used by AI tools.
“Algorithmic bias means that discrimination and unfairness are built into the tools we use, so the results they produce will reflect that bias,” she stated.
Lorraine Finlay, Human Rights Commissioner. Photo: Mick Tsikas/AAP
“When algorithmic bias is combined with automation bias, people come to rely more heavily on machine decisions and risk setting aside their own judgment,” Finlay remarked.
The Human Rights Commission has consistently supported an AI Act that would enhance existing legislation, including privacy laws, and ensure comprehensive testing for bias in AI tools. Finlay urged the government to quickly establish new regulations.
“Bias tests and audits, along with careful human oversight, are essential,” she added.
Evidence of bias in AI technologies is increasingly reported in fields like healthcare and workforce recruitment in Australia and worldwide.
A recent Australian study found that job applicants interviewed by AI recruiters risked discrimination if they spoke with an accent or lived with a disability.
Ananda-Rajah, a vocal proponent of AI development, warned of the risks of training AI systems exclusively on overseas data, which would amplify foreign biases in tools used locally.
While the government prioritizes intellectual property protection, she cautioned against locking away domestic data, warning that without it Australia would remain reliant on overseas AI models subject to little local oversight.
“AI requires a vast array of data from diverse populations to avoid reinforcing biases and harming the very people it aims to assist,” Ananda-Rajah emphasized.
“We must liberate our data to better train our models, ensuring they authentically represent us.”
“I am eager to support content creators while freeing up data, offering an alternative to the foreign exploitation of our resources,” Ananda-Rajah stated.
She cited AI screening tools for skin cancer as an example where algorithmic bias has been documented, arguing that such models must be trained on diverse datasets, with sensitive information protected, so that bias and discrimination do not harm particular groups of patients.
Finlay agreed that any release of Australian data would need to be handled fairly, but argued the priority should be establishing appropriate regulation.
“It’s certainly beneficial to have diverse and representative data… but that is merely part of the solution,” she clarified.
“We must ensure that this technology is equitable and is implemented in a manner that recognizes and values human contributions.”
Judith Bishop, an AI expert at La Trobe University and former data researcher at an AI firm, asserted that increasing the availability of local data will enhance the effectiveness of AI tools.
“It is crucial to ensure that systems developed in other contexts are relevant to the [Australian] population, and that we are not over-reliant on models built on US data,” Bishop stated.
eSafety Commissioner Julie Inman Grant has also voiced concerns about the lack of transparency around the data used to train AI technologies.
In her statement, she urged tech companies to be transparent about their training datasets, develop robust reporting mechanisms, and utilize diverse, accurate, and representative data for their products.
“The opacity surrounding generative AI’s development and deployment poses significant issues,” Inman Grant remarked. “This raises critical concerns about the potential for large language models (LLMs) to amplify harmful biases, including restrictive or detrimental gender norms and racial prejudices.”
“Given that a handful of companies dominate the development of these systems, there is a significant risk that certain perspectives, voices, and evidence could become suppressed or overlooked in the generated outputs.”
Source: www.theguardian.com