Concerns Rise Over OpenAI Sora’s Depictions of the Dead: Legal Experts React to AI Missteps

That evening, I was scrolling through dating apps when a profile caught my eye: “Henry VIII, 34 years old, King of England, non-monogamous.” Before I knew it, I found myself in a candlelit bar sharing a martini with the most notorious dater of the 16th century.

But the night wasn’t finished yet. Next, we took turns DJing alongside Princess Diana. “The crowd is primed for the drop!” she shouted over the music as she placed her headphones on. As I chilled in the cold waiting for Black Friday deals, Karl Marx philosophized about why 60% off is so irresistible.

In Sora 2, if you can imagine it—even if you think you shouldn’t—you can likely see it. Launched in October as an invite-only app in the US and Canada, OpenAI’s video app hit 1 million downloads within just five days, surpassing the initial success of ChatGPT.




AI-generated deepfake video features portraits of Henry VIII and Kobe Bryant

While Sora isn’t the only AI tool producing videos from text, its popularity stems from two major factors. First, it simplifies the process for users to star in their own deepfake videos. After entering a prompt, a 10-second clip is generated in minutes, which can be shared on Sora’s TikTok-style platform or exported elsewhere. Unlike low-quality, mass-produced “AI slop” that clouds the internet, these videos exhibit unexpectedly high production quality.


The second reason for Sora’s popularity is its ability to generate portraits of celebrities, athletes, and politicians—provided they are deceased. Living individuals must give consent for their likenesses to be used, but “historical figures” seem to be defined as famous people who are no longer alive.

This is how most users have utilized the app since its launch. The main feed appears to be a bizarre mix of absurdity featuring historical figures. From Adolf Hitler in a shampoo commercial to Queen Elizabeth II stumbling off a pub table while cursing, the content is surreal. Abraham Lincoln beams at the TV exclaiming, “You’re not my father.” The Reverend Martin Luther King Jr. expresses his dream of having all drinks be complimentary before abruptly grabbing a cold drink and cursing.

However, not everyone is amused.

“It’s profoundly disrespectful to see the image of my father, who devoted his life to truth, used in such an insensitive manner,” the daughter of Malcolm X told the Washington Post. She was just two when her father was assassinated. Now, Sora’s clips show the civil rights leader engaged in crude humor.

Zelda Williams, the daughter of actor Robin Williams, urged people to “stop” sending AI videos of her father through an Instagram post. “It’s silly and a waste of energy. Trust me, that’s not what he would have wanted,” she noted. Before his passing in 2014, he took legal steps to prevent his likeness from being used in advertising or digitally inserted into films until 2039. “Seeing my father’s legacy turned into something grotesque by TikTok artists is infuriating,” she added.

The video featuring the likeness of the late comedian George Carlin has been described by his daughter Kelly Carlin as “overwhelming and depressing” in a Bluesky post.

The recently deceased are also being depicted. The app is filled with clips of Stephen Hawking enduring a “#powerslap” that knocks over his wheelchair, Kobe Bryant dunking over an elderly woman while yelling about something stuck inside him, and Amy Winehouse wandering the streets of Manhattan with mascara streaming down her face.

Those who have died in the past two years (Ozzy Osbourne, Matthew Perry, Liam Payne) seem to be absent, suggesting they may fall into a different category.

Each time these “puppetmasters” revive the dead, they risk reshaping the narrative of history, according to AI expert Henry Ajder. “People are worried that a world filled with this type of content could distort how these individuals are remembered,” he explains.

Sora’s algorithm favors content that shocks. One of the trending videos features Dr. King making monkey noises during his iconic “I Have a Dream” speech. Another depicts Kobe Bryant reenacting the tragic helicopter crash that claimed both his and his daughter’s lives.

Actors and comedians have long portrayed famous people after their deaths, but in film the legal protections are stricter: studios bear responsibility for their content, whereas OpenAI does not assume the same liability for what appears on Sora. In certain states, featuring an individual for commercial use requires consent from the administrator of their estate.

“We couldn’t resurrect Christopher Lee for a horror movie, so why can OpenAI resurrect him for countless short films?” questions James Grimmelmann, an internet law expert at Cornell University and Cornell Tech.

OpenAI’s decision to place deceased personas into the public sphere raises distressing questions about the rights of the departed in the era of generative AI.

It may feel unsettling to have the likeness of a prominent figure persistently haunting Sora, but is it legal? Perspectives vary.

Major legal questions regarding the internet remain unanswered. Are AI firms protected under Section 230 and thus not liable for third-party content on their platforms? If OpenAI qualifies for Section 230 immunity, users cannot sue the company for content they create on Sora.

“However, without federal legislation on this front, uncertainties will linger until the Supreme Court takes up the issue, which might stretch over the next two to four years,” notes Ashkhen Kazaryan, a specialist in First Amendment and technology policy.




OpenAI CEO Sam Altman speaks at Snowflake Summit 2025 on June 2 in San Francisco, California. He is one of the living individuals who permitted Sora to utilize his likeness. Photo: Justin Sullivan/Getty Images

In the interim, OpenAI must circumvent legal challenges by obtaining consent from living individuals. US defamation laws protect living people from defamatory statements that could damage their reputation. Many states have right-of-publicity laws that prevent using someone’s voice, persona, or likeness for “commercial” or “misleading” reasons without their approval.

Allowing the deceased to be depicted this way is a way for the company to “test the waters,” Kazaryan suggests.

Though the deceased lack defamation protections, posthumous publicity rights exist in states like New York, California, and Tennessee. Navigating these laws in the context of AI remains a “gray area,” as there is no established case law, according to Grimmelmann.

For a legal claim to succeed, estates will need to prove OpenAI’s responsibility, potentially by arguing that the platform encourages the creation of content involving deceased individuals.

Grimmelmann points out that Sora’s homepage features videos that actively promote this style of content. If the app utilizes large datasets of historical material, plaintiffs could argue it predisposes users to recreate such figures.

Conversely, OpenAI might argue that Sora is primarily for entertainment. Each video is marked with a watermark to prevent it from being misleading or classified as commercial content.

Generative AI researcher Bo Bergstedt emphasizes that most users are merely experimenting, not looking to profit.

“People engage with it as a form of entertainment, finding ridiculous content to collect likes,” he states. Even if this distresses families, it may keep the videos within the bounds of advertising regulations.

However, if a Sora user creates well-received clips featuring historical figures, builds a following, and begins monetizing, they could face legal repercussions. Alexios Mantzarlis, director of Cornell Tech’s Security, Trust, and Safety Initiative, warns that the “financial implications of AI” may include indirect profit from these platforms. Sora’s rising “AI influencers” could face lawsuits from estates if they gain financially from the deceased.

“Whack-a-Mole” Approach

In response to the growing criticism, OpenAI recently announced that representatives of “recently deceased” celebrities can request their likenesses be removed from Sora’s videos.

“While there’s a significant interest in free expression depicting historical figures, we believe public figures and their families should control how their likenesses are represented,” a spokesperson for OpenAI stated.


The parameters for “recent” have yet to be clarified, and OpenAI hasn’t provided details on how these requests will be managed. The Guardian received no immediate comment from the company.

The copyright free-for-all strategy faced challenges after controversial content, such as “Nazi SpongeBob SquarePants,” circulated online and the Motion Picture Association accused OpenAI of copyright infringement. A week after launch, the company transitioned to an opt-in model for rights holders.

Grimmelmann hopes for a similar adaptation in how depictions of the deceased are handled. “Expecting individuals to opt out may not be feasible; it’s a harsh expectation. If I think that way, so will others, including judges,” he remarks.

Bergstedt likens this to a “whack-a-mole” methodology for safeguards, likely to persist until federal courts establish AI liability standards.

According to Ajder, the Sora debate hints at a broader question we will all confront: who will control our likenesses in the age of generative AI?

“It’s a troubling scenario if people accept they can be used and exploited in AI-generated hyper-realistic content.”

Source: www.theguardian.com

Is It Too Late to Be Afraid? Readers React to the Controversial Rise of AI ‘Actors’ in Film

The recent announcement of AI ‘actor’ Tilly Norwood, touted as the next Scarlett Johansson, has sparked a swift backlash in Hollywood. Here’s what Guardian readers are saying about the contentious emergence of AI actors.

“Of course they’ll do that.”

The focus is on economically produced entertainment rather than artistic merit. AI isn’t about creating great art; it’s about cutting costs by replacing human talent and accelerating production. Netflix has amassed 300 million subscribers, generating $400 billion in revenue against $17 billion in content expenses. The quickest way for Netflix to boost profits is to reduce content costs through automation. They already use AI for content decisions, catering to every viewer preference, from high art to low-budget dating shows. Netflix is committed to impactful storytelling, yet can’t risk losing high-value subscribers. It’s similar with the multitude of languages for shows like “Love Is Blind,” ensuring fans don’t abandon ship. If AI enables tech companies to outpace traditional studios by being faster and cheaper, of course, they’ll do it. STAK2000


“I don’t understand humor.”

Comedy is where AI really struggles. It doesn’t grasp humor, timing, or what makes something engaging. We’ve seen technically impressive yet entirely lifeless dialogue that left us unimpressed. We tuned in expecting surprises but found it utterly dull. Mattro

“I’m not saying it’s impossible, it’s just that we’re not there yet.”

99% of AI-generated films consist of individuals speaking directly to the camera. We’ve yet to see compelling interactions among multiple AI-generated characters. Dialogue is fragmented; it seems AI cannot create distinct characters that interact meaningfully. I’m not saying it’s impossible, it just hasn’t happened yet. cornish_hen

“It will come back to bite them.”

Hollywood executives may bet on Tilly Norwood to slash costs and enhance profits. However, if film enthusiasts start creating their own content using generative AI, it might backfire on the industry. I hope those investing in human talent will succeed, resisting this reckless AI trend. Data Day

“The genie is not going back in the bottle.”

It’s astonishing how quickly this technology has progressed.

Even if AI never stars in leading roles, it will undoubtedly have a presence in major productions. It serves as a tool like any other, fundamentally changing certain facets of media.

Individuals affected by this shift (and they will be) must remain calm and consider future career paths. The genie won’t be contained. I’m sure traditional trades reacted strongly to innovations by Gottlieb Daimler and Henry Ford; if AI-generated content proves beneficial and cost-effective, it’s here to stay. Abbathehorse

“My main concern is the lack of education.”

Those involved in advancing AI are pushing boundaries. It’s up to the rest of us, particularly regulators, to hold them accountable when they overstep. My chief worry is the widespread ignorance regarding AI’s potential benefits and threats. Many who aren’t directly impacted by AI don’t perceive the risk. Dasinternaut

Tilly Norwood. Illustration: YouTube

“I doubt I could support a character that is completely AI.”

I hope films featuring AI are clearly labeled. This allows us, the paying audience, to make informed decisions regarding productions. I’m not convinced I can endorse purely AI-generated characters (except perhaps in animated films). We form connections with human actors and invest emotionally in their performances. It might take generations to navigate this shift, but history shows that even vinyl, once thought dead, can become a highly sought-after commodity. Matt08

“It’s reminiscent of a Ballard short story.”

As I read this, I thought about the multitude of people involved in creating this “star.” Coders, scriptwriters, marketing teams: a network of humans furthering the career of someone who doesn’t actually exist. It feels unsettling when a program is crafted to mimic humanity. It evokes the themes of a Ballard short story. glider

“It’s too late to be scared.”

The time for fear has passed.

Hollywood prioritizes profit over artistry.

Studios may justify hiring photographers, makeup artists, set designers, and caterers with the argument that AI can perform those roles while saving costs.

Films featuring real people—actors and many behind-the-scenes roles—may soon become as rare as ballet or opera.

However, fans of franchises like “Fast & Furious” or the Marvel Universe might not mind; they often seek visual stimulation that AI can deliver. gray

“Just a bunch of guys sitting around a computer.”

What unsettles me is the apparent committee behind creating this character, obsessively defining attractiveness. Is your skin not smooth enough? Let’s iterate again. Are the proportions not appealing? Revise it.

Not only does this seem disconcerting, but it also reinforces narrow standards of attractiveness. Successful actors often conform to idealized norms, but at least nature or fate had a role in that. It’s not just a few individuals coding at their computers. bearvsshark

“A meaningless concept.”

Nonetheless, this notion is essentially futile. Acting requires collaboration. An AI “actor” necessitates real substitutes and someone to voice lines. You can produce a completely AI-generated film (essentially a CGI effort) or a human-centric film with AI characters, but the label of “AI actor” remains devoid of meaning. pyeshot

“The public doesn’t attend or appreciate actual art.”

For those claiming “this is a live theater row,” it’s clear you need to step outside your bubble. The public shows little interest in genuine art; they desire polished, commercial products, be it a catchy pop song or a superhero flick. As long as these superficial desires are nurtured, AI-generated “art” will face no backlash. Authentic art, including work from skilled human artists, requires funding, and resources for it are dwindling, threatening its survival. Yes, there may be exceptional pieces, but I suspect they will become increasingly rare unless more people become educated and learn to appreciate art’s inherent values. LondonAmerican2014

“AI slop is what happens when an idea is executed straight away.”

One day, hopefully soon, people will realize that the friction between idea and execution is where 90% of creativity resides.

Great art springs from thorough preparation and exceptional performances, requiring time and sometimes multiple attempts.

This need for friction applies to all creative endeavors, not just art. Even mundane businesses thrive on this dynamic.

AI slop emerges when concepts are rushed to completion. While it may appear effective at first glance, the ideas often lack depth. Shakeydave

Source: www.theguardian.com

Study Suggests Vegetarians React to Eating Meat as They Would to Consuming Feces or Human Flesh

Vegetarians react to the idea of eating meat much as they do to eating feces or human flesh, according to recent research from Oxford University.

A study involving 252 vegetarians and 57 meat eaters examined whether this aversion was influenced by the source of the food being plant or animal-based.

Initially, participants were shown a range of commonly disliked vegetables, including raw onions, green olives, sprouts, beetroot, and overripe fruit, and were asked to envision eating them. Both groups expressed “distaste” towards these vegetables: the flavors and textures themselves were perceived negatively.

Next, participants looked at pre-cooked chicken, bacon, and steak. Here, the vegetarians reacted quite differently. They experienced feelings of nausea, voiced ideological objections, and stated they found anything that had been in contact with meat unappealing.

All the meat shown was clean and cooked.

The vegetarians’ reactions of disgust were similar to those elicited when participants were asked to imagine consuming human feces or the flesh of humans or dogs (the meat was actually just plain meat labeled accordingly; no dogs were harmed, though a few humans were arguably poorly treated).

“Distaste is an ancient evolutionary mechanism observed in various species and acts as a straightforward response to ‘bad’ flavors, primarily bitter and sour tastes,” stated Elisa Becker, the lead researcher from Oxford University, in an interview with BBC Science Focus.

“Disgust, in contrast, is likely a uniquely human response stemming from more complex thoughts about food and its meanings.”

The distinction between these reactions may lie in evolutionary history. Distaste enabled early humans to avoid toxic plants with unpleasant flavors, while disgust developed as a more sophisticated response to the unseen risks associated with meat, which can harbor pathogens and parasites.

“Disgust does not arise solely from taste but is triggered by animal products, including meat and our own bodily substances. These are prime carriers for pathogens,” Becker explained. “The purpose of disgust is to protect us from toxins and diseases.”

This insight may assist initiatives aimed at promoting sustainable diets by altering perceptions of certain foods.

“It could be beneficial for people seeking to reduce their meat consumption or increase vegetable intake,” Becker remarked. “Novel, more sustainable protein sources (like insects or lab-grown meat) can often invoke disgust. Understanding this instinct can help us overcome it.”

About our experts

Elisa Becker is a postdoctoral researcher at the Faculty of Primary Care Health Sciences at Oxford University. She investigates behavioral change interventions that assist individuals in reducing meat consumption, focusing on the emotional processing of meat and the effectiveness of various strategies.


Source: www.sciencefocus.com

Fact-Checkers React Negatively to Meta’s Decision to Scrap Fact-Checking

Facebook founder Mark Zuckerberg announced on Tuesday that his company, Meta, would scrap fact-checking. He accused US fact-checkers of making biased decisions and said he wanted greater freedom of speech. Meta uses independent third-party fact-checkers from around the world. Here, one of them, who works at the Full Fact organization in London, explains what they do and their reaction to Zuckerberg’s “mind-boggling” claims.

I have been a fact checker at Full Fact in London for a year, investigating questionable content on Facebook, X, and in newspapers. Our daily diet is disinformation videos about the wars in the Middle East and Ukraine, as well as fake AI-generated video clips of politicians, which are becoming increasingly difficult to disprove. Colleagues are tackling coronavirus disinformation and misinformation about cancer treatments, and there is a lot of climate-related material as hurricanes and wildfires become more frequent.

As soon as you log on at 9am, you’re assigned something to watch. Through Meta’s system, you can see which posts are most likely to be false. Some days there may be 10 or 15 potentially harmful items, and it can be overwhelming. You can’t check everything.

If a post is a little wild but not harmful, like this AI-generated image of the Pope wearing a giant white puffer coat, we might leave it. But if it’s a fake image of Mike Tyson holding a Palestinian flag, we’re more likely to address it. We propose them in the morning meeting and are then asked to start checking.

Yesterday I was working on a deepfake video in which Keir Starmer appeared to say that many of the claims about Jimmy Savile were frivolous and that this was why he was not prosecuted at the time. It was getting a lot of engagement. Starmer’s mouth did not look right and did not match what he appeared to be saying; it seemed like a fake. I immediately did a reverse image search and discovered that the footage was taken from a 2012 Guardian video. The original was of much higher quality. In the fake, the area around his mouth is very blurry, and when you compare it with what was shared on social media you can see exactly what he actually said. We contacted the Guardian and Downing Street for comment, and you can also consult media-forensics and deepfake experts.

Some misinformation continues to resurface. There is a particular video of a gas station explosion in Yemen last year that has been reused as either a bombing in Gaza or a Hezbollah attack on Israel.

A fact check collects examples of how the claim has appeared on social media over the previous 24 hours or so, notes engagement figures such as likes and shares, and explains how we know it is incorrect.

Attaching fact checks to Facebook posts requires two levels of review, with senior colleagues questioning every leap in logic we make. For recurring claims, the process can be completed in half a day; new, more complex cases may take closer to a week, and the average is about a day. The back and forth can be frustrating at times, but you want to be as close to 100% sure as possible.

It was very difficult to hear Mark Zuckerberg say that fact checkers are biased on Tuesday. Much of the work we do is about being fair, and that’s instilled in us. I feel it is a very important job to bring about change and provide good information to people.

This is something I wanted to do in my previous job in local journalism: go down rabbit holes and track down sources. But I didn’t have many opportunities; the work was mostly churnalism. As a local reporter, I was alarmed and felt helpless at the amount of conspiracy theories people were seriously engaging with and believing in Facebook groups.

At the end of the day, it can be difficult to switch off. I’m still thinking about how to prove something as quickly as possible. When I see the amount of this content constantly going up, I get a little worried. But when a fact check is published, there is a sense of satisfaction.

Zuckerberg’s decision was unfortunate. We put a lot of effort into this and we think it’s really important. But we renew our resolve to fight the good fight. Misinformation will never go away. We will continue to be here and fight against it.

Source: www.theguardian.com

Investors React Poorly to CyberCab Self-Driving Car, Tesla’s Value Drops $60 Billion

Tesla shares dropped almost 9% on Friday, erasing roughly $60 billion from the company’s market value following the underwhelming announcement of its highly anticipated robotaxis that failed to impress investors.

The electric vehicle manufacturer’s stock plummeted to $217 at the close of the market after CEO Elon Musk revealed a much-hyped self-driving car at an event in Hollywood. Since the start of the year, the stock price has declined by about 12%.

Musk stated that Tesla would begin production of a fully autonomous CyberCab by 2026, priced under $30,000, and introduced a van capable of autonomously transporting 20 people around a city.

Prior to the event, he tweeted: “And within 50 years all transportation will be fully autonomous.”

During the presentation, he mentioned that parking would no longer be necessary in the city.

However, analysts were disappointed by the lack of specifics at the event concerning Tesla’s projects and other developments. Musk has a track record of making ambitious projections about future products that often fail to materialize within set deadlines or at all.


Royal Bank of Canada analyst Tom Narayan remarked in an investor note that the event lacked specifics. “Investors we spoke to during the event felt that it glossed over actual figures and timelines,” he stated.

“These shortcomings are common at Tesla events, which appear to focus more on promoting and branding Tesla’s vision rather than providing concrete data for analysis. Consequently, we anticipate a decline in the stock price.”

Narayan also mentioned that some investors were anticipating a preview of an affordable car equipped with pedals and a steering wheel set to be launched next year, but no such announcement was made.

Garrett Nelson, an analyst at investment research firm CFRA, expressed disappointment with the revelations about the CyberCab and the lack of information regarding more economical vehicles.

He said: “The event raised numerous questions but was surprisingly brief and resembled more of a controlled demonstration than a comprehensive presentation. We were unsatisfied with the absence of details about [Tesla’s] near-term product plans, which include a more affordable model and the Roadster. Musk previously mentioned on a conference call that production of these models is set for 2025.”

Source: www.theguardian.com

Investors React to Plans for Increased Spending on AI, Leading to $190 Billion Drop in Meta’s Value

Meta’s stock price tumbled 15% on Wall Street Thursday in response to commitments to ramp up spending on artificial intelligence, resulting in approximately $190 billion being wiped off the market value of the Facebook and Instagram parent company.

During a conference call on Wednesday, Mark Zuckerberg, Meta’s CEO, emphasized the necessity of increasing spending on AI technology in order to generate “significant revenue” from the company’s new AI products. “There is a need for an increase,” he stated.

The stock price of Meta had previously benefited from stringent cost-cutting measures in 2023, which Zuckerberg referred to as “the year of efficiency.” However, investors were spooked when Meta raised the upper limit of its capital spending guidance from $37 billion to $40 billion on Wednesday.

Meta recently launched Llama 3, the latest iteration of its AI model and image generator, which can update images in real-time while users input prompts. This update also sees the expansion of Meta AI, the company’s AI-powered assistant, to more than 10 markets outside the US, including Australia, Canada, Singapore, Nigeria, and Pakistan. Chris Cox, Meta’s chief product officer, mentioned that the company is still working on implementing this in Europe.

The decline in stock price comes after Meta experienced a record increase in market value in February, adding $196 billion to its market capitalization following the announcement of its first dividend, which was, at the time, the largest single-day gain in Wall Street history. However, Nvidia, a prominent supplier of chips for AI models, later surpassed this record with a $277 billion single-day gain in market value.

Source: www.theguardian.com