OpenAI Video App Faces Backlash Over Violent and Racist Content as Experts Say Sora’s “Guardrails Are Not Real”

On Tuesday, OpenAI unveiled Sora 2, the latest version of its AI video generator, incorporating a social feed that lets users share lifelike videos.

However, mere hours after Sora 2’s release, many videos shared on the app’s feed and on other social platforms depicted copyrighted characters in troubling contexts, featuring graphic violence and racist scenes. OpenAI’s usage policies, which cover Sora as well as ChatGPT’s image and text generation, explicitly ban content that “promotes violence” or otherwise “causes harm.”

According to prompts and clips reviewed by the Guardian, Sora generated several videos depicting the horrors of bombings and mass shootings, with panicked individuals fleeing university campuses and crowded locations like Grand Central Station in New York. Other prompts produced scenes reminiscent of war zones in Gaza and Myanmar, in which AI-generated children described their homes being torched. One video, generated from the prompt “Ethiopian Footage Civil War News Style,” showed a reporter in a bulletproof vest speaking into a microphone about government and rebel gunfire in civilian areas. Another clip, prompted by “Charlottesville Rally,” depicted Black protesters in gas masks, helmets, and goggles screaming in distress.

The app is currently invitation-only and has not been released to the general public. Yet within three days of its restricted debut, Sora skyrocketed to the top of Apple’s App Store, surpassing even OpenAI’s own ChatGPT.

“So far, it’s been amazing to witness what collective human creativity can achieve,” stated Sora’s director Bill Peebles in a Friday post on X. “We will be sending out more invitation codes soon, I assure you!”

The Sora app provides a glimpse into a future where distinguishing truth from fiction may become increasingly challenging. Misinformation researchers warn that such realistic content could obscure reality and be employed for fraud, harassment, and extortion.

“It doesn’t hold to historical truth and is far removed from reality,” remarked Joan Donovan, an assistant professor at Boston University focusing on media manipulation and misinformation. “When malicious individuals gain access to these tools, they use them for hate, harassment, and incitement.”

Slop Engine or “ChatGPT for Creativity”?

OpenAI CEO Sam Altman described the launch of Sora 2 as “truly remarkable,” stating in a blog post that it “feels like a ‘ChatGPT for creativity’ moment for many of us, embodying a sense of fun and novelty.”

Altman acknowledged that social media can be addictive and linked to bullying, and noted that AI video generation can produce what is known as “slop”: repetitive, low-quality videos that might overwhelm the platform.

“The team was very careful and considerate in trying to create an enjoyable product that avoids falling into that pitfall,” Altman wrote. He stated that OpenAI has taken steps to prevent misuse of someone’s likeness and to guard against illegal content. For instance, the app declined to generate a video featuring Donald Trump and Vladimir Putin sharing cotton candy.

Nonetheless, within three days of Sora’s launch, numerous such videos had already spread online. Washington Post reporter Drew Harwell created a video depicting Altman as a military leader in World War II and also produced a video featuring “ragebait, fake crime, women spattered with white goo.”

Sora’s feed includes numerous videos featuring copyrighted characters from series such as SpongeBob SquarePants, South Park, and Rick and Morty. The app seamlessly generated videos of Pikachu imposing tariffs on China, pilfering roses from the White House Rose Garden, and taking part in a Black Lives Matter protest alongside SpongeBob. One video documented by 404 Media showed SpongeBob dressed as Adolf Hitler.

Neither Paramount, Warner Bros, nor the Pokémon Company responded to requests for comment.

David Karpf, an associate professor at George Washington University’s School of Media and Public Affairs, said he had seen a video of copyrighted characters promoting a cryptocurrency scam, evidence, he argued, that OpenAI’s safety measures for Sora are not working.

“The guardrails aren’t real if people can make copyrighted characters promote fraudulent schemes,” said Karpf. “In 2022, tech companies made significant efforts to hire content moderators; in 2025, it appears they have chosen to abandon those responsibilities.”

Just before the release of Sora 2, OpenAI contacted talent agencies and studios to inform them that they would need to opt out if they wished to prevent the video generator from replicating their copyrighted material, the Wall Street Journal reported.

OpenAI told the Guardian that content owners can report copyright violations through a “copyright dispute form,” but that individual artists and studios cannot opt out wholesale, according to Varun Shetty, OpenAI’s head of media partnerships.

Emily Bender, a professor at the University of Washington and author of the book “The AI Con,” said that Sora creates a perilous environment in which “reliable sources of information are harder to find and harder to trust once found.”

“Whether they generate text, images, or videos, synthetic media machines are a scourge on the information ecosystem,” Bender observed. “Their output interacts with technological and social structures in ways that weaken and erode trust.”

Nick Robbins contributed to this report

Source: www.theguardian.com

Experts Warn Grok’s ‘MechaHitler’ Posts Could Be Considered Violent Extremist Content in X v eSafety Case

An Australian tribunal heard last week that antisemitic remarks made by Grok, the chatbot that dubbed itself “MechaHitler,” could be classified as terrorist and violent extremist content, bringing chatbots that produce such comments under scrutiny.

Nevertheless, X’s expert witnesses contend that large language models lack intent, placing accountability solely on their users.

Musk’s AI firm, xAI, issued an apology last week for statements the Grok chatbot made over a span of 16 hours, attributing the issue to “deprecated code” that left the bot unduly influenced by existing posts from X users.

The uproar centered on an administrative review tribunal hearing on Tuesday, at which X contested a notice issued last March by eSafety commissioner Julie Inman Grant demanding clarity on the company’s actions against terrorist and violent extremism (TVE) content.

Chris Berg, an expert witness for X and a professor of economics at RMIT, testified that it is a misconception to believe a large language model can itself produce this type of content, because human intent plays a critical role in defining what constitutes terrorism and violent extremism.

Contrarily, Nicolas Suzor, a law professor at Queensland University of Technology and one of eSafety’s expert witnesses, disagreed with Berg, asserting that chatbots and AI generators can indeed contribute to the creation of synthetic TVE content.

“This week alone, X’s Grok generated content that aligns with the definition of TVE,” Suzor stated.

He emphasized that human influence persists throughout AI development and can mask intent, shaping how Grok responds to particular inquiries.

The tribunal heard that X believes its Community Notes feature, which allows users to contribute fact-checks, along with Grok’s analysis capabilities, helps it identify and address TVE material.

Josh Roose, an expert witness and professor of politics at Deakin University, expressed skepticism regarding the utility of Community Notes in this context, noting that X relies on users to flag TVE content. This has resulted in a “black box” around the company’s investigations, in which typically only a small fraction of material is removed and a limited number of accounts are suspended.

Suzor remarked that it is hard to view Grok as genuinely “seeking the truth” following recent incidents.

“It’s undisputed that Grok is not effectively pursuing truth. I am deeply skeptical of Grok, particularly in light of last week’s events,” he stated.

Berg countered that his assessment of Grok’s analysis capabilities had not substantially changed in light of the chatbot’s output last week, suggesting the chatbot had “strayed” into disseminating hateful content that was “quite strange.”

Suzor argued that, rather than being optimized for truth, Grok had been “modified to align responses more closely with Musk’s ideological perspectives.”

Earlier in the hearing, X’s legal representatives accused the regulator of approaching the company with preconceived views, and cross-examination raised questions about meetings held before any action was taken against X.

Stephen Lloyd, counsel for the government, said X had portrayed eSafety as overly antagonistic in their interactions, when the “aggressive stance” in fact came from X’s side.

The hearing is ongoing.

Source: www.theguardian.com

Space Jets: The Story Behind Some of the Universe’s Most Violent and Beautiful Phenomena

Two recent epic astronomical discoveries may seem unrelated at first glance.

One is an image captured by the James Webb Space Telescope showing a newborn star in our galaxy, approximately 450 light years away. This incredible picture depicts the birth of a solar system, with a thin dust disc slowly forming around the star.

The other discovery combines optical and radio data to reveal a massive astrophysical jet larger than the Milky Way. It offers a glimpse into the intergalactic violence caused by supermassive black holes actively consuming their surroundings.

Despite their differences, the two discoveries share a striking similarity: both show objects emitting long, straight jets of light and material into space, resembling double-sided lightsabers.

HH 30, imaged by JWST, is a protoplanetary disc with a newborn star at its center expelling a jet of gas and dust, approximately 450 light years away in the Taurus Molecular Cloud. Photo credit: ESA/Webb, NASA & CSA, Tazaki et al.

Astrophysical jets are a common phenomenon in space, driven by the basic features of gravity, rotation, and magnetic fields.

Disc formation involves a few simple steps driven by gravity and rotation: material is attracted towards a central object and, because it rotates, flattens into a disc. This is how spiral galaxies, protoplanetary discs, and accretion discs around black holes all form.
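As a rough back-of-envelope illustration of the rotation step (the numbers here are assumed for exposition, not taken from the article): conservation of angular momentum forces infalling material to spin faster as it moves inward. For a parcel of gas of mass m orbiting at radius r with speed v,

\[
L = m\,v\,r = \text{const.} \quad\Longrightarrow\quad v \propto \frac{1}{r},
\]

so gas collapsing from roughly 20,000 au down to 100 au rotates about 200 times faster. Collisions damp motion along the spin axis but not around it, which is why the cloud settles into a thin, rapidly rotating disc.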

Gravity and rotation explain the formation of these discs, while magnetic fields play the crucial role in generating jets. Charged particles in motion generate magnetic fields, and these fields launch and collimate material into long, straight jets perpendicular to the disc plane.

Using radio and optical data, astronomers discovered this huge astrophysical jet, which extends farther than the Milky Way. Photo credit: LOFAR/DECaLS/DESI Legacy Imaging Surveys/LBNL/DOE/CTIO/NOIRLab/NSF/AURA; image processing: M. Zamani (NSF NOIRLab).

These jets vary in strength and size depending on the magnetic field and rotation that drives them. From protostars to supermassive black holes, jets can extend vast distances into space, showcasing the extreme power of gravity and magnetic forces in the universe.

Astrophysical jets provide a mesmerizing insight into the mechanisms driving the most extreme wonders of the universe, from stars being devoured by black holes to pulsars emitting light across space.

Source: www.sciencefocus.com

Terrifying Landscapes: The Impact of Violent Conflict on Non-State Societies in Ancient Europe

The impact of intergroup conflict on demographics has long been debated, especially for prehistoric and non-state societies. In a new study, scientists from the Complexity Science Hub, the University of Washington, and the Leibniz Centre for Archaeology argue that, beyond the direct casualties of combat, conflicts can create “landscapes of fear” that lead many non-combatants near conflict zones to abandon their homes and migrate.

The Battle of Orsha by Hans Krell.

“Around the world, scientists have extensively studied and debated the existence and role of prehistoric conflict,” said Dr Dániel Kondor, a researcher at the Complexity Science Hub.

“But it remains difficult to estimate the impact on population numbers and so on.”

“The situation is further complicated by potential indirect effects, such as people leaving their homes or avoiding certain areas out of fear.”

These indirect effects of conflict could have caused significant long-term demographic changes in non-state societies such as Neolithic Europe (c. 7000-3000 BC).

“Our model shows that fear of conflict led to population declines in potentially dangerous areas.”

“As a result, people began concentrating in safer areas, such as hilltops, and overcrowding threatened to increase death rates and decrease birth rates.”

“The results of the simulation study are in good agreement with empirical evidence from archaeological field investigations, for example the Late Neolithic site of Kapellenberg near Frankfurt, dating to around 3700 BC,” added Dr Detlef Gronenborn, a researcher at the Leibniz Centre for Archaeology.

“There are many examples of agricultural land being temporarily abandoned as groups retreated to more defensible locations and invested heavily in extensive defensive systems such as walls, palisades and ditches.”

“The concentration of people in particular, often well-defended locations, may have led to growing wealth inequalities and political structures that legitimised these differences,” said Dr Peter Turchin, a researcher at the Complexity Science Hub.

“Thus, the indirect effects of conflict may also have played an important role in the emergence of larger political units and the rise of early states.”

To simulate the demographic dynamics of Neolithic Europe, the authors developed a new computational model.

To test their model, the researchers used a database of archaeological sites and analysed the number of radiocarbon dates from different locations and time periods, on the assumption that the density of dates reflects the scale of human activity and therefore population numbers.
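To make the dates-as-proxy idea concrete, here is a minimal sketch of the kind of binning such an analysis might begin with; the function name and toy data are hypothetical illustrations, not the authors’ actual pipeline, which works with calibrated dates and sampling corrections.

```python
# Hypothetical illustration: treating the density of radiocarbon dates per
# time bin as a rough proxy for human activity, and hence population size.
from collections import Counter

def date_density(dates_bp, bin_width=100):
    """Count radiocarbon dates per time bin (in years before present)."""
    bins = Counter((d // bin_width) * bin_width for d in dates_bp)
    return dict(sorted(bins.items()))

# Toy data: a cluster of dates around 5700 BP reads as an activity peak,
# while sparse later bins would suggest decline or abandonment.
samples = [5920, 5780, 5755, 5710, 5695, 5640, 5380, 5120]
for start, count in date_density(samples).items():
    print(f"{start}-{start + 99} BP: {count} dates")
```

Published analyses typically sum calibrated probability distributions rather than counting point estimates, but the underlying proxy logic is the same.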

“This allows us to explore the typical amplitudes and time scales of population growth and decline across Europe. Our goal was to reflect these patterns in our simulations,” Dr Kondor said.

“Direct collaboration with archaeologists is crucial to ensure we have as complete a picture as possible.”

“This study is a great example of the potential of such interdisciplinary collaboration.”

The study was published in the Journal of the Royal Society Interface.

_____

Dániel Kondor et al. 2024. Landscapes of Fear: Indirect Impacts of Conflict May Cause Large-Scale Population Declines in Non-State Societies. J. R. Soc. Interface 21(217): 20240210; doi: 10.1098/rsif.2024.0210

This article is based on a press release from the Complexity Science Hub.

Source: www.sci.news

Ofcom concludes that exposure to violent online content is unavoidable for children in the UK

Children in the UK are now inevitably exposed to violent online content, with many first encountering it while still in primary school, according to a report from the media watchdog.

British children interviewed for the Ofcom investigation reported encountering content online that ranged from videos of local school and street fights shared in group chats to explicit and extreme graphic violence, including gang-related material.

Although children were aware of more extreme content existing on the web, they did not actively seek it out, the report concluded.

In response to the findings, the NSPCC criticized tech platforms for not fulfilling their duty of care to young users.

Rani Govender, a senior policy officer for online child safety at the charity, expressed concern that children are now unintentionally exposed to violent content as part of their everyday online experience, emphasizing the need for action to protect young people.

The study of families, children, and young people is part of Ofcom’s preparations for enforcing the Online Safety Act, which gives the regulator powers to hold social networks accountable for failing to protect users, especially children.

Gill Whitehead, director of Ofcom’s online safety group, emphasized that children should not have to accept harmful content, such as violence or the promotion of self-harm, as an inevitable part of their online lives.

The report highlighted that children mentioned major tech companies like Snapchat, Instagram, and WhatsApp as platforms where they encounter violent content most frequently.

Experts raised concerns that exposure to violent content could desensitize children and normalize violence, potentially influencing their behavior offline.

Some social networks have faced criticism for permitting graphic violence, with Twitter (now X) under fire for hosting disturbing content that went viral and spurred outrage.

While some platforms offer tools to help children avoid violent content, there are concerns about their effectiveness and children’s reluctance to report such content due to fear of repercussions.

Algorithmic timelines on platforms like TikTok and Instagram have also contributed to the proliferation of violent content, raising concerns about the impact on children’s mental health.

The Children’s Commissioner for England revealed alarming statistics about the waiting times for mental health support among children, highlighting the urgent need for action to protect young people online.

Snapchat emphasized its zero-tolerance policy towards violent content and assured its commitment to working with authorities to address such issues, while Meta declined to comment on the report.

Source: www.theguardian.com