OpenAI Video App Faces Backlash Over Violent and Racist Content: Sora Claims “Guardrails Are Not Real”

On Tuesday, OpenAI unveiled its latest version of AI-driven video generators, incorporating a social feed that enables users to share lifelike videos.

However, mere hours after Sora 2’s release, many videos shared on feeds and older social platforms depicted copyrighted characters in troubling contexts, featuring graphic violence and racist scenes. Sora’s usage of OpenAI’s services and ChatGPT for image or text generation explicitly bans content that “promotes violence” or otherwise “causes harm.”

According to prompts and clips reviewed by the Guardian, Sora generated several videos illustrating the horrors of bombings and mass shootings, with panicked individuals fleeing university campuses and crowded locations like Grand Central Station in New York. Other prompts created scenes reminiscent of war zones in Gaza and Myanmar, where AI-generated children described their homes being torched. One video, labeled as “Ethiopian Footage Civil War News Style,” showcased a bulletproof-vested reporter speaking into a microphone about government and rebel gunfire in civilian areas. Another clip, prompted by “Charlottesville Rally,” depicted Black protesters in gas masks, helmets, and goggles screaming in distress.

Currently, video generators are only accessible through invitations and have not been released to the public. Yet, within three days of a restricted debut, it skyrocketed to the top of Apple’s App Store, surpassing even OpenAI’s own ChatGPT.

“So far, it’s been amazing to witness what collective human creativity can achieve,” stated Sora’s director Bill Peebles in a Friday post on X. “We will be sending out more invitation codes soon, I assure you!”

The SORA app provides a glimpse into a future where distinguishing truth from fiction may become increasingly challenging. Researchers in misinformation warn that such realistic content could obscure reality and create scenarios wherein these AI-generated videos may be employed for fraud, harassment, and extortion.

“It doesn’t hold to historical truth and is far removed from reality,” remarked Joan Donovan, an assistant professor at Boston University focusing on media manipulation and misinformation. “When malicious individuals gain access to these tools, they use them for hate, harassment, and incitement.”

Slop Engine or “ChatGPT for Creativity”?

OpenAI CEO Sam Altman described the launch of Sora 2 as “truly remarkable,” and in a blog post, stated it “feels like a ‘chat for creativity’ moment for many of us, embodying a sense of fun and novelty.”

Altman acknowledged the addictive tendencies of social media linked to bullying, noting that AI video generation can lead to what is known as “slops,” producing repetitive, low-quality videos that might overwhelm the platform.

“The team was very careful and considerate in trying to create an enjoyable product that avoids falling into that pitfall,” Altman wrote. He stated that OpenAI has taken steps to prevent misuse of someone’s likeness and to guard against illegal content. For instance, the app declined to generate a video featuring Donald Trump and Vladimir Putin sharing cotton candy.

Nonetheless, within the three days following SORA’s launch, numerous videos had already disseminated online. Washington Post reporter Drew Harwell created a video depicting Altman as a military leader in World War II and also produced a video featuring “Ragebait, fake crime, women splattered on white geese.”

Sora’s feeds include numerous videos featuring copyrighted characters from series such as Spongebob Squarepants, South Park, and Rick and Morty. The app seamlessly generated videos of Pikachu imposing tariffs in China, pilfering roses from the White House Rose Garden, and partaking in a Black Lives Matter protest alongside SpongeBob. One video documented by 404 Media showed SpongeBob dressed as Adolf Hitler.

Neither Paramount, Warner Bros, nor Pokémon Co responded to requests for comment.

David Karpf, an associate professor at George Washington University’s Media Affiliated Fairs School, noted he observed a video featuring copyrighted characters promoting cryptocurrency fraud, asserting that OpenAI’s safety measures regarding SORA are evident.

Skip past newsletter promotions

“Guardrails aren’t effective when individuals construct copyrighted characters that foster fraudulent schemes,” stated Karpf. “In 2022, tech companies made significant efforts to hire content moderators; however, in 2025, it appears they have chosen to disregard these responsibilities.”

Just before the release of SORA 2, OpenAI contacted talent agencies and studios to inform them they could opt-out if they wished to prevent the replication of their copyrighted materials by video generators. The Wall Street Journal reports.

OpenAI informed the Guardian that content owners can report copyright violations through the “copyright dispute form,” but individual artists and studios cannot opt-out comprehensively. Varun Shetty, OpenAI’s Head of Media Partnerships, commented:

Emily Bender, a professor at the University of Washington and author of the book “The AI Con,” expressed that Sora creates a perilous environment where “distinguishing reliable sources is challenging, and trust wanes once one is found.”

“Whether they generate text, images, or videos, synthetic media machines represent a tragic facet of the information ecosystem,” the vendor observed. “Their output interacts with technological and social structures in ways that weaken and erode trust.”

Nick Robbins contributed to this report

Source: www.theguardian.com

Former School Athletic Director Sentenced to Four Months in Prison for Racist Deep Fark Recordings

The former athletic director, charged with using artificial intelligence to generate racist and anti-Semitic audio clips, also allegedly impersonated the school’s principal, as indicated by prosecutors.

Dazon Darien, 32, the former director, pleaded guilty to a misdemeanor and a disturbing charge related to school activities, according to the Baltimore County State Law Office. He previously faced additional allegations including theft, stalking, and witness retaliation.

As reported by the Associated Press, Darien pleaded guilty to charges of intrusive school management while maintaining innocence under an Alford plea.

Darien, who previously served as athletic director at Pikesville High School, produced an audio clip containing derogatory comments about “ungrateful black kids” and light jabs at Jewish students. Police records revealed that the audio aimed to discredit the school principal, Eric Eiswart.

In a statement of fact, Eiswart mentioned having “discussions” with Darien regarding his contract renewal, citing “poor performance, inadequate procedures, and reluctance to follow the chain of command” as concerns. Darien’s troubles began in late 2023, which led to the audio’s release, according to the statement.

The attorneys representing Darien did not return calls or messages on Tuesday. The Baltimore County Public Schools District declined to comment on the situation, and attempts to reach Mr. Eiswart on Tuesday were unsuccessful.

Following his sentencing, Darien was returned to federal custody to address additional charges related to the exploitation of children and possession of child pornography.

Manufactured recordings shared on Instagram in January 2024 quickly circulated, impacting Baltimore County Public Schools, which serves over 100,000 students. Eiswart, who withheld comment during the investigation, has received multiple threats to his safety, according to police. He has also been placed on administrative leave by the school district.

Police records indicated that Darien expressed dissatisfaction with Eiswart in December after the principal initiated an investigation into him. It was revealed that Darien had allowed roommates to pay the district $1,916, falsely claiming the roommate was an assistant coach for the Pikesville Girls’ soccer team.

Shortly thereafter, police reported that Darien utilized the district’s internet services to explore artificial intelligence tools, including OpenAI, the creators of ChatGPT chatbots, and Microsoft’s Bing Chat.

(The New York Times filed a lawsuit against OpenAI and Microsoft in December 2023 for copyright infringement concerning news content related to AI systems.)

Creating realistic, manufactured videos, often referred to as deepfakes, has become increasingly simple. Previously, one required sophisticated software, but now many of these tools are available through smartphone apps, raising concerns among AI researchers regarding the potential dangers posed by this technology.

Source: www.nytimes.com

Experts Warn X’s New AI Software Enables Racist Abuse Online: It’s Only the Beginning

Experts in online abuse have warned that the increase in online racism due to fake images is just the beginning of the problems that may arise following a recent update of X Company’s AI software.

Concerns were first raised in December last year when numerous computer-generated images produced by Company X’s generative AI chatbot Grok were leaked on social media platforms.

Signify, an organization that collaborates with leading sports bodies and clubs to monitor and report instances of online hate, has noted a rise in abuse reports since the latest update of Grok, warning that this type of behavior is likely to become more widespread with the introduction of AI.

Elaborating on the issue, a spokesperson stated that the current problem is only the tip of the iceberg and is expected to worsen significantly in the next year.

Grok, introduced by Elon Musk in 2023, recently launched a new feature called Aurora, which enables users to create photorealistic AI images based on simple prompts.

Reports indicate that the latest Grok update is being misused to generate photo-realistic racist images of various soccer players and coaches, sparking widespread condemnation.

The Center for Countering Digital Hate (CCDH) expressed concerns about X’s role in promoting hate speech through revenue-sharing mechanisms, facilitated by AI-generated imagery.

The absence of stringent restrictions on user requests and the ease of circumventing AI guidelines are among the key issues highlighted, with Grok producing a significant number of hateful prompts without appropriate safeguards.

In response to the alarming trend, the Premier League has taken steps to combat racist abuse directed towards athletes, with measures in place to identify and report such incidents, potentially leading to legal action.

Both X and Grok have been approached for comment regarding the situation.

Source: www.theguardian.com

AI chatbot continues to perpetuate racist stereotypes despite anti-racism training

Hundreds of millions of people are already using commercial AI chatbots

Ju Jae-young/Shutterstock

Commercial AI chatbot displays racial bias against African-American English speakers despite outwardly expressing positive sentiments toward African-Americans. This hidden bias can influence the AI’s decisions about a person’s employment eligibility and criminality.

“We discovered some kind of hidden racism. [large language models] It is caused solely by dialect characteristics and causes great harm to the affected groups.” Valentin Hoffman at the Allen AI Institute, a nonprofit research institute in Washington state. social media posts. “For example, GPT-4 is more likely to be sentenced to death if the defendant speaks African American English.”

Hoffman and his colleagues found that more than a dozen versions of large-scale language models, such as OpenAI’s GPT-4 and GPT-3.5, which power commercial chatbots already used by hundreds of millions of people, do not contain such hidden biases. I discovered that there is. OpenAI did not respond to a request for comment.

The researchers first gave the AI text in either African American English or standard American English style, then asked the model to comment on the author of the text. The model characterized African American English speakers using terms associated with negative stereotypes. In the case of GPT-4, they are described as “suspicious,” “aggressive,” “loud,” “rude,” and “ignorant.”

However, when asked to comment about African Americans in general, language models typically use more positive words such as “passionate,” “intelligent,” “ambitious,” “artistic,” and “brilliant.” This suggests that the model’s racial bias is usually hidden within what researchers describe as superficial displays of positive emotion.

The researchers also showed how hidden biases influence people’s judgments of chatbots in a hypothetical scenario. When asked to associate African-American English speakers with jobs, the AI was less likely to associate African-American English speakers with jobs than standard American English speakers. When AI matched jobs, they tended to assign roles that didn’t require a college degree or were related to music and entertainment. AI could also convict an African American English speaker accused of an unspecified crime and give the death penalty to an African American English speaker convicted of first-degree murder. It was highly sexual.

The researchers even showed that large AI systems showed more hidden bias against African American English speakers than smaller models. This reflects previous research showing that large AI training datasets can produce even more racist output.

This experiment raises serious questions about the effectiveness of AI safety training. In AI safety training, large-scale language models receive human feedback, adjust responses, and eliminate issues such as bias. It says such training could reduce ostensibly overt signs of racial bias without eliminating “hidden bias when identity terms are not mentioned.” Yong Jian Shin from Brown University in Rhode Island was not involved in the study. “This highlights the limitations of current safety assessments of large-scale language models by companies before they are released to the public,” he says.

topic:

Source: www.newscientist.com