Bryan Cranston Appreciates OpenAI’s Efforts to Combat Sora 2 Deepfakes

Bryan Cranston has expressed his “gratitude” to OpenAI for addressing deepfakes of him on its generative AI video platform Sora 2, after users replicated his voice and likeness without his permission.

The Breaking Bad actor voiced concerns to the actors’ union SAG-AFTRA after Sora 2 users generated his likeness during the platform’s recent launch. On October 11th, the LA Times reported that in one instance, “a synthetic Michael Jackson takes a selfie video using an image of Breaking Bad star Bryan Cranston.”


To appear in Sora 2, living individuals must opt in by providing explicit consent. Statements from OpenAI following the release confirmed it has implemented “measures to block depictions of public figures” and established “guardrails to ensure audio and visual likenesses are used with consent.”

However, upon Sora 2’s launch, several outlets, including the Wall Street Journal, Hollywood Reporter, and LA Times, reported that OpenAI had told talent agencies that if they didn’t want their clients’ likenesses or copyrighted material to be featured in Sora 2, they needed to opt out rather than opt in, causing an uproar in Hollywood.

OpenAI contests these claims and told the LA Times its goal has always been to allow public figures to control how their likenesses are utilized.

On Monday, Cranston released a statement via SAG-AFTRA thanking OpenAI for “enhancing guardrails” to prevent users from generating unauthorized likenesses of him.

“I was very concerned, not only for myself but for all performers whose work and identities could be misappropriated,” Cranston commented. “We are grateful for OpenAI’s enhanced policies and guardrails and hope that OpenAI and all companies involved in this endeavor will respect our personal and professional rights to control the reproduction of our voices and likenesses.”

Hollywood’s top two agencies, Creative Artists Agency (CAA) and United Talent Agency (UTA), the latter of which represents Cranston, have repeatedly highlighted the potential dangers Sora 2 and similar generative AI platforms pose to their clients and their careers.

Nevertheless, on Monday, UTA and CAA released a joint statement alongside OpenAI, SAG-AFTRA, and the Association of Talent Agents, declaring that what happened to Cranston was inappropriate and that they would collaborate to protect the actor’s “right to determine how and whether he can be simulated.”


“While OpenAI has maintained from the start that consent is required for the use of voice and likeness, the company has expressed regret over these unintended generations. OpenAI has reinforced its guardrails concerning the replication of voice and likeness without opt-in,” according to the statement.

Actor Sean Astin, the new president of SAG-AFTRA, cautioned that Cranston is “one of many performers whose voices and likenesses are at risk of mass appropriation through replication technology.”

“Bryan did the right thing by contacting his union and professional representatives to address this issue. We now have a favorable outcome in this case. We are pleased that OpenAI is committed to implementing an opt-in protocol, which enables all artists to decide whether they wish to participate in the exploitation of their voice and likeness using AI,” Astin remarked.

“To put it simply, opt-in protocols are the only ethical approach, and the NO FAKES Act enhances our safety,” he continued. The NO FAKES Act, under consideration in Congress, aims to prohibit the production and distribution of AI-generated replicas of any individual without their consent.

OpenAI has openly supported the NO FAKES Act, with CEO Sam Altman stating the company is “firmly dedicated to shielding performers from the misuse of their voices and likenesses.”

Sora 2 permits users to generate “historical figures,” broadly defined as well-known individuals who are deceased. However, OpenAI has recently acknowledged that representatives of “recently deceased” celebrities can request that their likenesses be blocked from Sora 2.

Earlier this month, OpenAI announced it had partnered with the Martin Luther King Jr. Foundation to halt depictions of King in Sora 2 at the foundation’s request, as it “strengthened guardrails around historical figures.”

Recently, Zelda Williams, the daughter of the late actor Robin Williams, pleaded with people to “stop” sending her AI videos of her father, while Kelly Carlin, the daughter of the late comedian George Carlin, characterized her father’s AI videos as “overwhelming and depressing.”

Legal experts speculate that generative AI platforms may be using depictions of deceased historical figures to test the limits of what is legally permissible.

Source: www.theguardian.com

OpenAI Guarantees Enhanced “Granular Control” for Copyright Holders Following Sora 2’s Video Creations of Popular Characters

OpenAI is dedicated to providing copyright holders with “greater control” over character generation following the recent release of the Sora 2 app, which has overwhelmed platforms with videos featuring copyrighted characters.

Sora 2, an AI-driven video creation tool, was launched last week by invitation only. This application enables users to produce short videos from text prompts. A review by the Guardian of the AI-generated content revealed instances of copyrighted characters from shows like SpongeBob SquarePants, South Park, Pokémon, and Rick and Morty.

According to the Wall Street Journal, prior to releasing Sora 2, OpenAI informed talent agencies and studios that they would need to opt out if they wished to prevent the unlicensed use of their material by video generators.

OpenAI told the Guardian that content owners can use a “copyright dispute form” to report copyright violations, though individual artists and studios cannot request a blanket opt-out, according to Varun Shetty, OpenAI’s head of media partnerships.


On Saturday, OpenAI CEO Sam Altman stated in a blog post that the company has received “feedback” from users, rights holders, and various groups, leading to modifications.

He mentioned that rights holders will gain more “detailed control” as well as enhanced options regarding how their likenesses can be used within the application.

“We’ve heard from numerous rights holders who are thrilled about this new form of ‘interactive fan fiction’ and are confident that this level of engagement will be beneficial for them; however, we want to ensure that they can specify the manner in which the characters are utilized.”


Altman noted that OpenAI will “work with rights holders to determine the way forward,” adding that certain “generation edge cases” will undergo scrutiny within the platform’s guidelines.

He emphasized that the company needs to find a sustainable revenue model from video generation and that user engagement is exceeding initial expectations. This could lead to compensating rights holders for the authorized use of their characters.

“Creating an accurate model requires some trial and error, but we plan to start soon,” Altman said. “Our aim is for this new type of engagement to be even more valuable than revenue sharing, and we hope it’s worth it for everyone involved.”

He remarked on the rapid evolution of the project, reminiscent of the early days of ChatGPT, acknowledging both successful decisions and mistakes made along the way.

Source: www.theguardian.com

OpenAI Video App Sora Faces Backlash Over Violent and Racist Content: “The Guardrails Are Not Real”

On Tuesday, OpenAI unveiled the latest version of its AI-driven video generator, incorporating a social feed that enables users to share lifelike videos.

However, mere hours after Sora 2’s release, many videos shared on its feed and on other social platforms depicted copyrighted characters in troubling contexts, featuring graphic violence and racist scenes. OpenAI’s usage policies, which cover Sora as well as ChatGPT’s image and text generation, explicitly ban content that “promotes violence” or otherwise “causes harm.”

According to prompts and clips reviewed by the Guardian, Sora generated several videos illustrating the horrors of bombings and mass shootings, with panicked individuals fleeing university campuses and crowded locations like Grand Central Station in New York. Other prompts created scenes reminiscent of war zones in Gaza and Myanmar, where AI-generated children described their homes being torched. One video, labeled as “Ethiopian Footage Civil War News Style,” showcased a bulletproof-vested reporter speaking into a microphone about government and rebel gunfire in civilian areas. Another clip, prompted by “Charlottesville Rally,” depicted Black protesters in gas masks, helmets, and goggles screaming in distress.

Currently, the app is accessible by invitation only and has not been released to the general public. Yet within three days of its restricted debut, it skyrocketed to the top of Apple’s App Store, surpassing even OpenAI’s own ChatGPT.

“So far, it’s been amazing to witness what collective human creativity can achieve,” stated Sora’s director Bill Peebles in a Friday post on X. “We will be sending out more invitation codes soon, I assure you!”

The Sora app provides a glimpse into a future where distinguishing truth from fiction may become increasingly challenging. Misinformation researchers warn that such realistic content could obscure reality and create scenarios in which AI-generated videos are employed for fraud, harassment, and extortion.

“It doesn’t hold to historical truth and is far removed from reality,” remarked Joan Donovan, an assistant professor at Boston University focusing on media manipulation and misinformation. “When malicious individuals gain access to these tools, they use them for hate, harassment, and incitement.”

Slop Engine or “ChatGPT for Creativity”?

OpenAI CEO Sam Altman described the launch of Sora 2 as “truly remarkable,” and in a blog post stated it “feels like a ‘ChatGPT for creativity’ moment for many of us, embodying a sense of fun and novelty.”

Altman acknowledged the addictive tendencies of social media and its links to bullying, noting that AI video generation can produce “slop”: repetitive, low-quality videos that might overwhelm the platform.

“The team was very careful and considerate in trying to create an enjoyable product that avoids falling into that pitfall,” Altman wrote. He stated that OpenAI has taken steps to prevent misuse of someone’s likeness and to guard against illegal content. For instance, the app declined to generate a video featuring Donald Trump and Vladimir Putin sharing cotton candy.

Nonetheless, within three days of Sora’s launch, numerous videos had already spread online. Washington Post reporter Drew Harwell created a video depicting Altman as a military leader in World War II, and also produced videos featuring “ragebait, fake crimes, and women spattered with white goo.”

Sora’s feed includes numerous videos featuring copyrighted characters from series such as SpongeBob SquarePants, South Park, and Rick and Morty. The app readily generated videos of Pikachu imposing tariffs on China, pilfering roses from the White House Rose Garden, and joining a Black Lives Matter protest alongside SpongeBob. One video documented by 404 Media showed SpongeBob dressed as Adolf Hitler.

Neither Paramount, Warner Bros, nor Pokémon Co responded to requests for comment.

David Karpf, an associate professor at George Washington University’s School of Media and Public Affairs, said he had seen a video of copyrighted characters promoting cryptocurrency fraud, asserting that OpenAI’s stated safety measures for Sora are clearly not working.


“Guardrails aren’t effective when people can make copyrighted characters promote fraudulent schemes,” said Karpf. “In 2022, tech companies made significant efforts to hire content moderators; by 2025, they appear to have chosen to disregard those responsibilities.”

Just before the release of Sora 2, OpenAI contacted talent agencies and studios to inform them they would need to opt out if they wished to prevent the replication of their copyrighted material by the video generator, the Wall Street Journal reported.

OpenAI informed the Guardian that content owners can report copyright violations through a “copyright dispute form,” but that individual artists and studios cannot request a blanket opt-out, according to Varun Shetty, OpenAI’s head of media partnerships.

Emily Bender, a professor at the University of Washington and author of the book The AI Con, said Sora creates a perilous environment where “it is harder to find trustworthy sources, and harder to trust them once found.”

“Whether they generate text, images, or videos, synthetic media machines represent a tragic facet of the information ecosystem,” she added. “Their output interacts with technological and social structures in ways that weaken and erode trust.”

Nick Robbins contributed to this report

Source: www.theguardian.com

OpenAI introduces SORA video generation tool in UK amidst copyright dispute | Artificial Intelligence (AI)

OpenAI, the artificial intelligence company behind ChatGPT, has introduced its video generation tool in the UK, highlighting the growing tension between the tech sector and the creative industries over copyright.

Film director Beeban Kidron spoke out about the release of Sora in the UK, noting its impact on the ongoing copyright debate.

OpenAI, based in San Francisco, has made Sora accessible to UK users subscribed to ChatGPT. The tool startled filmmakers when it was first previewed last year: TV mogul Tyler Perry halted a planned studio expansion out of concern that it could replace physical sets and locations. Sora was initially launched in the US in December.

Users can generate videos with Sora by entering simple prompts, such as a request for scenes of people walking through “beautiful snowy Tokyo City.”

OpenAI has now introduced Sora in the UK and mainland Europe, where it was also released on Friday, with early reports of artists using the tool. One user, Josephine Miller, a 25-year-old British digital artist, created a video with Sora featuring a model adorned in bioluminescent fauna, praising the tool for opening up opportunities for young creatives.

‘Biolume’: Josephine Miller uses OpenAI’s Sora to create stunning footage – video

Despite the launch of SORA, Kidron emphasized the significance of the ongoing UK copyright and AI discussions, particularly in light of government proposals permitting AI companies to train their models using copyrighted content.

Kidron raised concerns about the ethical use of copyrighted material to train SORA, pointing out potential violations of terms and conditions if unauthorized content is used. She stressed the importance of upholding copyright laws in the development of AI technologies.

YouTube has previously indicated that using its copyrighted material without proper licensing to train AI models such as Sora could lead to legal repercussions. Concerns remain about the origin and legality of the datasets used to train these tools.

The Guardian reported that policymakers are exploring options for offering copyright concessions to certain creative sectors, further highlighting the complex interplay between AI, technology, and copyright laws.


Sora allows users to craft videos of between five and 20 seconds, with options for longer clips, and to choose from aesthetic styles such as “film noir” and “balloon world.”

Source: www.theguardian.com

Understanding Sora AI: A Comprehensive Guide to OpenAI’s Text-to-Video Tools

Sora introduces a groundbreaking artificial intelligence software that empowers users to produce remarkably lifelike videos based on simple verbal instructions.

OpenAI, the mastermind behind DALL-E and ChatGPT, is pushing boundaries with the soon-to-be-released service.

This innovation seemingly emerged out of nowhere. Previous attempts at AI-generated videos were less than impressive, to put it lightly. But with Sora, things are changing.

How did OpenAI achieve this feat? Can you use these tools today? And what does this mean for the future of video and content creation? Let’s dive deep into the modern tools and their implications.

What is Sora?

Sora is an AI tool capable of generating full videos up to 1 minute long. For instance, by simply entering a prompt like “a group of cats worshipping a giant dog,” Sora can potentially display videos matching that description.

Outside the social media buzz of specialized computing communities, Sora’s unexpected arrival may well have gone unnoticed: there was no grand announcement or extensive advertising campaign; it simply appeared.

OpenAI has showcased various sample videos where Sora impressively produces lifelike visuals. These videos feature mirror reflections, intricate liquid movements, and falling snow particles.

How does Sora work?

Sora operates similarly to previous AI image generators but with added complexity. It uses diffusion modeling: each frame begins as random, static-like noise, which the model gradually refines into a coherent image, with frames generated consistently so they assemble into a cohesive video.

To train Sora, example videos and corresponding textual descriptions are provided to help the model understand the relationship between images and actions depicted in the videos.

This process challenges the model to understand intricate details like 3D models, motion, reflections, shadows, and other complex features to replicate accurately.
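As a loose illustration of that denoising idea, here is a toy Python sketch. It is entirely hypothetical: a fixed `target` array stands in for what a trained network would actually learn to predict, so only the shape of the loop, refining pure noise step by step, reflects the real technique.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(frame, target, step, total_steps):
    """Move a noisy frame a fraction of the way toward the target,
    mimicking one reverse-diffusion refinement step."""
    alpha = 1.0 / (total_steps - step)  # steps grow as the noise shrinks
    return frame + alpha * (target - frame)

def generate(target, total_steps=50):
    """Start from pure noise and iteratively refine it into an image."""
    frame = rng.standard_normal(target.shape)  # pure noise
    for step in range(total_steps):
        frame = denoise_step(frame, target, step, total_steps)
    return frame

target = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # stand-in "image"
result = generate(target)
print(np.abs(result - target).max() < 1e-6)  # noise fully refined away
```

In a real diffusion model there is no known `target`; the network is trained on millions of examples to predict, from a noisy frame alone, which direction to nudge each pixel.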

For transparency, OpenAI offers a detailed explanation of how the model functions on its website, although the sources of the training videos remain undisclosed.

How to use Sora AI

Currently, Sora is not available to the general public. OpenAI exercises caution in releasing such powerful tools, starting with a small “red team” of individuals who assess potential risks and harms of the technology.

Following this, a select group of visual artists, designers, and filmmakers will gain insight into how the tool functions for creative endeavors. Eventually, Sora may become accessible to the public, likely following OpenAI’s pay-as-you-go model.

Is Sora the best AI video generator?

Based on the videos unveiled so far, Sora appears to be a significant leap ahead of previous AI video generation attempts. Early endeavors in AI-generated videos, like Will Smith eating spaghetti or the “Pepperoni Hug Spot” commercial, paled in comparison.

Set against those early attempts, Sora’s work stands out starkly. Its videos boast accurate lighting, reflections, and human-like features, even tackling complex scenarios like people entering and exiting the frame.

Despite its impressive capabilities, Sora is not without flaws. Glitches like disappearing body parts, sudden appearances, and floating feet are observable in its videos. As the public gains access, more videos will expose both strengths and weaknesses of the model.


Source: www.sciencefocus.com

Security Concerns Raised by the Realism of OpenAI’s Sora Video Generator

AI program Sora generated this video featuring an android based on text prompts

Sora/OpenAI

OpenAI has announced a program called Sora, a state-of-the-art artificial intelligence system that can turn text descriptions into photorealistic videos. The video generation model has added to the excitement over advances in AI technology, along with growing concerns about how synthetic deepfake videos could exacerbate misinformation and disinformation during a critical election year around the world.

The Sora model can currently create videos of up to 60 seconds using text instructions alone or text combined with images. One demonstration video begins with a text prompt describing a “stylish woman walking down a Tokyo street filled with warmly glowing neon and animated city signage.” Other examples include more fantastical scenarios, such as dogs frolicking in the snow, vehicles driving along roads, and sharks swimming through the air between city skyscrapers.

“As with other technologies in generative AI, there is no reason to believe that text-to-video will not continue to advance rapidly, moving us closer and closer to a time when it will be difficult to tell the fake from the real,” says Hany Farid at the University of California, Berkeley. “Combining this technology with AI-powered voice cloning could open up entirely new ground in terms of creating deepfakes of people saying and doing things they never actually did.”

Sora is based on some of OpenAI’s existing technologies, including the image generator DALL-E and the GPT large language models. While text-to-video AI models have somewhat lagged behind other technologies in terms of realism and accessibility, Sora’s demonstrations are “an order of magnitude more believable and less cartoonish” than what has come before, says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization focused on social engineering.

To achieve this higher level of realism, Sora combines two different AI approaches. The first is a diffusion model similar to those used in AI image generators such as DALL-E; these models learn to gradually transform randomized image pixels into a coherent image. The second is the transformer architecture, used to contextualize and stitch together sequential data. For example, large language models use transformer architectures to assemble words into coherent sentences. Here, OpenAI breaks video clips down into visual “spacetime patches” that Sora’s transformer architecture can process.
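The spacetime-patch idea can be sketched in a few lines of NumPy. This is an illustrative guess at the general technique only: the patch sizes and layout here are invented for the example, not OpenAI’s actual values, and a real system would embed each patch with a learned projection rather than a plain reshape.

```python
import numpy as np

def to_spacetime_patches(video, pt=2, ph=4, pw=4):
    """Cut a (T, H, W) clip into blocks spanning both time and space,
    flattening each block into one token row of length pt*ph*pw."""
    T, H, W = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0
    return (video
            .reshape(T // pt, pt, H // ph, ph, W // pw, pw)
            .transpose(0, 2, 4, 1, 3, 5)   # group patch indices first
            .reshape(-1, pt * ph * pw))    # one row per spacetime patch

# 4 frames of 8x8 pixels -> 8 patches, each covering 2 frames x 4x4 pixels
clip = np.arange(4 * 8 * 8, dtype=np.float32).reshape(4, 8, 8)
tokens = to_spacetime_patches(clip)
print(tokens.shape)  # (8, 32)
```

Because each token spans several consecutive frames as well as a spatial region, a transformer attending over these tokens sees motion, not just isolated still images.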

Sora’s videos still contain many mistakes, such as a walking person’s left and right feet swapping places, a chair floating randomly in the air, or a bitten cookie that magically shows no bite marks. Still, Jim Fan, a senior research scientist at Nvidia, praised Sora on the social media platform X as a “data-driven physics engine” that can simulate the world.

The fact that Sora’s videos still exhibit strange glitches when depicting complex scenes with lots of movement suggests that such deepfake videos remain detectable for now, says Arvind Narayanan at Princeton University. But he also warned that in the long term, “we will need to find other ways to adapt as a society.”

OpenAI has been holding off on making Sora publicly available while it conducts “red team” exercises in which experts attempt to break safeguards in AI models to assess Sora's potential for abuse. An OpenAI spokesperson said the select group currently testing Sora are “experts in areas such as misinformation, hateful content, and bias.”

This testing is crucial, because synthetic videos could allow malicious actors to generate fake footage in order to, for example, harass someone or sway a political election. Misinformation and disinformation fueled by AI-generated deepfakes ranks as a major concern for leaders in academia, business, government, and other fields, as well as for AI experts.

“Sora is fully capable of creating videos that have the potential to deceive the public,” Tobac said. “Videos don’t have to be perfect to be believable, as many people still don’t realize that video can be manipulated as easily as photos.”

Tobac said AI companies will need to work with social media networks and governments to combat the flood of misinformation and disinformation likely to follow once Sora is released to the public. Defenses could include implementing unique identifiers, or “watermarks,” for AI-generated content.

When asked whether OpenAI plans to make Sora more widely available in 2024, an OpenAI spokesperson said the company is “taking several important safety steps” before making Sora available in OpenAI’s products. For example, the company already uses automated processes aimed at preventing its commercial AI models from producing extreme violence, sexual content, hateful imagery, and depictions of real politicians and celebrities. With more people than ever before participating in elections this year, those safety measures are especially important.


Source: www.newscientist.com

OpenAI Introduces Sora, a Tool that Generates Videos from Text Using Artificial Intelligence (AI)

OpenAI on Thursday announced a tool that can generate videos from text prompts.

The new model, called Sora after the Japanese word for “sky,” can create up to a minute of realistic footage that follows the user’s instructions for both subject matter and style. The model can also create videos based on still images or enhance existing footage with new material, according to a company blog post.



“We teach AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” the blog post says.

One video included among the company’s first few examples was based on the following prompt: “Movie trailer featuring the adventures of a 30-year-old spaceman wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”

The company announced that it has opened up access to Sora to several researchers and video creators. According to the blog post, these experts will “red team” the product, testing whether it can be made to evade OpenAI’s terms of service, which prohibit “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.” While the company allows only limited access for researchers, visual artists, and filmmakers, CEO Sam Altman took to X after the announcement to post videos he said were created by Sora in response to users’ prompts. The videos contain a watermark indicating they were created by AI.



The company debuted its still image generator DALL-E in 2021 and its generative AI chatbot ChatGPT in November 2022, the latter quickly gaining 100 million users. Other AI companies have also debuted video generation tools, but those models could only generate a few seconds of footage that often had little to do with the prompt. Google and Meta have said they are developing video generation tools, though neither is publicly available. On Wednesday, OpenAI also announced an experiment adding deeper memory to ChatGPT so it can remember more of users’ chats.



OpenAI did not tell the New York Times how much footage was used to train Sora, saying only that the corpus includes videos that are publicly available and licensed from copyright holders. Nor did the company reveal the sources of the training videos. OpenAI has been sued multiple times for alleged copyright infringement in training generative AI tools that digest vast amounts of material collected from the internet and mimic the images and text contained in those datasets.

Source: www.theguardian.com
