AI Could Intensify Racism and Sexism in Australia, Warns Human Rights Commissioner

Australia's human rights commissioner has warned that AI could exacerbate racism and sexism in the country, amid internal debate within the Labor party over how to handle new technologies.

Lorraine Finlay cautioned that while seeking productivity gains from AI is important, those gains should not come at the cost of discrimination, a risk she says will grow if the technology remains unregulated.

Finlay’s remarks came after Labor senator Michelle Ananda-Rajah advocated the “liberation” of Australian data for use by tech companies, arguing that AI trained without local data reflects and perpetuates biases from abroad while shaping Australian culture.

Ananda-Rajah opposes a dedicated AI law but insists that content creators must be compensated for their work.


Discussions about enhancing productivity through AI are scheduled for the upcoming federal economic summit, as unions and industry groups voice concerns over copyright and privacy issues.

Media and arts organizations have warned of the “rampant theft” of intellectual property if large tech corporations gain access to creative content for training AI systems.

Finlay noted the challenges of identifying embedded biases due to a lack of clarity regarding the datasets used by AI tools.

“Algorithmic bias means that discrimination and inequality are built into the tools we use, so the outcomes they produce reflect those biases,” she said.




Lorraine Finlay, Human Rights Commissioner. Photo: Mick Tsikas/AAP

“When algorithmic bias is combined with automation bias, people come to rely more heavily on machine decisions and may disregard their own judgment,” Finlay said.

The Human Rights Commission has consistently supported an AI Act that would enhance existing legislation, including privacy laws, and ensure comprehensive testing for bias in AI tools. Finlay urged the government to quickly establish new regulations.

“Bias tests and audits, along with careful human oversight, are essential,” she added.


Evidence of bias in AI technologies is increasingly reported in fields like healthcare and workforce recruitment in Australia and worldwide.

A recent Australian study found that job applicants interviewed by AI recruitment systems risked discrimination if they spoke with an accent or had a disability.

Ananda-Rajah, a vocal proponent of AI development, warned of the risks of training AI systems without Australian data, which she said could amplify foreign biases.

She said that while intellectual property must be protected, limiting access to domestic data would leave Australia reliant on overseas AI models without adequate oversight.

“AI requires a vast array of data from diverse populations to avoid reinforcing biases and harming those it aims to assist,” Ananda-Rajah said.

“We must liberate our data to better train our models, ensuring they authentically represent us.”


“I want to support content creators while freeing up data, as an alternative to letting overseas companies exploit our resources,” Ananda-Rajah said.

She cited AI screening tools for skin cancer as an example where algorithmic bias has been documented, saying such models must be trained on diverse datasets to avoid harming particular groups of patients, while still protecting sensitive information.


Finlay said any release of Australian data would need to be handled fairly, but argued that the priority should be establishing appropriate regulation.

“It’s certainly beneficial to have diverse and representative data… but that is merely part of the solution,” she clarified.

“We must ensure that this technology is equitable and is implemented in a manner that recognizes and values human contributions.”

Judith Bishop, an AI expert at La Trobe University and former data researcher at an AI firm, asserted that increasing the availability of local data will enhance the effectiveness of AI tools.

“We need to recognize that systems developed in other contexts may not translate to Australian conditions, and the [Australian] population should not depend exclusively on models trained on US data,” Bishop said.

eSafety commissioner Julie Inman Grant has also voiced concerns about the lack of transparency around the data used to train AI technologies.

In her statement, she urged tech companies to be transparent about their training datasets, develop robust reporting mechanisms, and utilize diverse, accurate, and representative data for their products.

“The opacity surrounding generative AI’s development and deployment poses significant issues,” Inman Grant remarked. “This raises critical concerns about the potential for large language models (LLMs) to amplify harmful biases, including restrictive or detrimental gender norms and racial prejudices.”

“Given that a handful of companies dominate the development of these systems, there is a significant risk that certain perspectives, voices, and evidence could become suppressed or overlooked in the generated outputs.”


Source: www.theguardian.com

Commissioner Advocates for Ban on Apps Creating Deepfake Nude Images of Children

So-called “nudification” apps that use artificial intelligence to generate sexually explicit images of children should be banned, the children’s commissioner for England has warned, amid rising fears for potential victims.

Girls have reported refraining from sharing images of themselves on social media due to fears that generative AI tools could alter or sexualize their clothing. Although creating or disseminating sexually explicit images of children is illegal, the underlying technology remains legal, according to the report.

“Children express fear at the mere existence of this technology. They worry strangers, classmates, or even friends might exploit smartphones to manipulate them, using these specialized apps to create nude images,” a spokesperson stated.

“While the online landscape is innovative and continuously evolving, there’s no justifiable reason for these specific applications to exist. They have no rightful place in our society, and tools that enable the creation of naked images of children using deepfake technology should be illegal.”

De Souza has proposed an AI bill that would require developers of generative AI tools to address the risks their products pose to children, and has urged the government to implement an effective system for removing sexually explicit deepfake images of children. She said this should be supported by policy measures recognizing deepfake sexual abuse as a form of violence against women and girls.

Meanwhile, the report calls on Ofcom to ensure robust age verification for nudification apps, and for social media platforms to restrict children’s access to sexually explicit deepfake tools, in accordance with online safety laws.

The report found that 26% of respondents aged 13 to 18 had encountered sexually explicit deepfake images of celebrities, friends, teachers, or themselves.

Many AI tools reportedly focus solely on female bodies, thereby contributing to an escalating culture of misogyny, the report cautions.

An 18-year-old girl described her fears to the commissioner.

The report cited cases such as that of Mia Janin, who died by suicide in March 2021, in illustrating the links between deepfake abuse, suicidal ideation, and PTSD.

In her report, De Souza stated that new technologies confront children with concepts they struggle to comprehend, evolving at a pace that overwhelms their ability to recognize the associated hazards.

Lawyers told the Guardian that young people arrested for sexual offences involving deepfake experimentation often have little understanding of the repercussions of their actions.

Danielle Reece-Greenhalgh, a partner at the law firm Corker Binning, noted that the existing legal framework makes it difficult for law enforcement agencies to identify and protect victims of abuse.

She said banning such apps could ignite debates over internet freedom, and might disproportionately affect young men experimenting with AI software without understanding the consequences.

Reece-Greenhalgh said that while the criminal justice system tries to treat adolescent offending with understanding, efforts to divert young people from criminality have struggled when offences occur in private settings, with unintended consequences rippling through schools and communities.

Matt Hardcastle, a partner at Kingsley Napley, described the online “minefield” young people face around access to illegal sexual and violent content, noting that many parents are unaware of how easily their children can be drawn into harmful situations.

“Parents often don’t see these situations from their children’s perspective, and are unaware that their children’s actions can be both illegal and harmful to themselves or others,” he said. “Children’s brains are still developing, so they approach risk-taking very differently.”

Marcus Johnston, a criminal lawyer specializing in sex offences, reported working with increasingly young clients, often without their parents being aware of what is happening. “Typically these offenders are young men, rarely young women, shut away in their rooms while their parents assume they are simply playing games,” he said. “These offences are largely a product of the internet: most sexual crime now takes place online, driven by forums designed to draw children into criminal behaviour.”

A government spokesperson stated:

“Creating, possessing, or distributing child sexual abuse material, including AI-generated images, is abhorrent and illegal. Platforms of all sizes must remove this content or face significant fines under online safety laws. The UK is the first country to introduce AI-specific child sexual abuse offences, making it illegal to possess, create, or distribute tools designed to generate this abhorrent material.”

  • In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can call 0808 800 5000. The National Association for People Abused in Childhood (NAPAC) supports adult survivors on 0808 801 0331. In Australia, children, young adults, parents, and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831; adult survivors can reach the Blue Knot Foundation on 1300 657 380.


Parents face difficult decisions regarding smartphones, says English Children’s Commissioner

Parents in England are urged to make tough decisions about their children’s smartphone use rather than trying to be their friends, according to Dame Rachel de Souza. She emphasized the importance of setting boundaries and considering examples of responsible phone use.

Speaking to the Sunday Times, de Souza stressed the need for parents to prioritize their children’s well-being. She advised against giving in to children’s demands for more screen time, highlighting the importance of making tough decisions for their long-term benefit.

She added that parents should provide love, understanding, support, and boundaries, encouraging high aspirations while also setting limits. A recent survey suggested that a quarter of children in the UK spend over four hours a day on internet-enabled devices.

De Souza also emphasized the importance of having open conversations with children about their online activities and monitoring the content they are exposed to. Education Secretary Bridget Phillipson is considering implementing smartphone bans in some schools to address concerns about the impact of social media on children.

While guidelines currently suggest banning phones during lessons, there is no clear enforcement strategy for breaks and lunches. De Souza’s survey of state schools found that the majority already limit mobile phone use during the day.

She believes that schools have a role in addressing these issues but says parents must also take responsibility for monitoring their children’s digital activities. Conservative leader Kemi Badenoch has questioned the government’s stance on child well-being and its legislation on phone bans in schools.

Overall, there is growing awareness of the need to balance children’s online activities with real-world interactions and boundaries to ensure their well-being.
