Experts in online abuse have warned that the rise in online racism driven by fake images is only the beginning of the problems likely to follow a recent update to X's AI software.
Concerns were first raised in December last year, when numerous computer-generated images produced by X's generative AI chatbot, Grok, were leaked on social media platforms.
Signify, an organization that works with leading sports bodies and clubs to monitor and report instances of online hate, has noted a rise in abuse reports since the latest Grok update, warning that this type of behavior is likely to become far more widespread as AI tools proliferate.
Elaborating on the issue, a spokesperson said the current problem is only the tip of the iceberg and is expected to worsen significantly over the next year.
Grok, introduced by Elon Musk in 2023, recently launched a new feature called Aurora, which enables users to create photorealistic AI images based on simple prompts.
Reports indicate that the latest Grok update is being misused to generate photorealistic racist images of various soccer players and coaches, sparking widespread condemnation.
The Center for Countering Digital Hate (CCDH) expressed concern that X's revenue-sharing mechanisms risk rewarding the spread of hate speech amplified by AI-generated imagery.
The absence of stringent restrictions on user requests and the ease of circumventing AI guidelines are among the key issues highlighted, with Grok fulfilling a significant number of hateful prompts without appropriate safeguards.
In response to the alarming trend, the Premier League has taken steps to combat racist abuse directed at athletes, with measures in place to identify and report such incidents, potentially leading to legal action.
Both X and Grok have been approached for comment regarding the situation.
Source: www.theguardian.com