Netflix Uses Generative AI in a Show for the First Time

Netflix has used generative artificial intelligence in its television programming for the first time. The head of the streaming service says the technology makes productions both more affordable and of higher quality.

According to Netflix co-CEO Ted Sarandos, the Argentine science fiction series El Eternauta (The Eternaut) was the first to utilize AI-generated footage.

“I believe AI offers a remarkable opportunity to assist creators in enhancing the quality of films and series, rather than merely reducing costs,” he shared with analysts following Netflix’s second-quarter report on Thursday.

He explained that the series, which depicts survivors of a sudden and devastating toxic snowfall, involved a collaboration between Netflix and visual effects (VFX) artists who used AI to depict the destruction of Buenos Aires.

“Utilizing AI-enhanced tools enabled them to achieve remarkable outcomes at unprecedented speeds. In fact, the VFX sequences were finalized ten times faster than with traditional VFX methods,” he noted.

Sarandos pointed out that the integration of AI tools allows Netflix to finance the show at considerably lower costs compared to conventional large productions.

“The expenses for [special effects without AI] would have been unfeasible for that budget,” Sarandos mentioned.

Concerns around job security have emerged within the entertainment sector due to the introduction of generative AI, particularly affecting production and special effects roles.

In 2023, AI was a significant point of contention during a dual strike involving Hollywood actors and writers, leading to agreements that ensured emerging technologies are harnessed for the benefit of workers rather than to eliminate jobs.

Sarandos emphasized, “These tools are for real people doing real work with enhanced resources. Our creators have begun to experience the advantages of production via pre-visualization, shot planning, and definitely visual effects. I believe these tools will empower creators to broaden their storytelling horizons on screen.”

His remarks followed the announcement of Netflix achieving $11 billion in revenue for the quarter ending in June, reflecting a 16% year-over-year increase.


The company noted that better-than-expected results were driven by the popularity of the third and final season of the Korean thriller Squid Game.

Netflix anticipates that its small yet rapidly expanding advertising division will “almost double” this year.

“The quarter’s performance that surpassed expectations can be attributed to excellent content, increased pricing, and the momentum of ads all coming together,” remarked Mike Proulx, Vice President of Research at Forrester. “There is still more work required to enhance advertising capabilities, but the toughest challenges are behind Netflix with the comprehensive launch of its own ad tech platform.”

Source: www.theguardian.com

Google tool simplifies the detection of AI-generated text


The probability that one word follows another can be used to create watermarks for AI-generated text.

Vikram Arun/Shutterstock

Google uses artificial intelligence watermarks to automatically identify text generated by its Gemini chatbot, making it easier to distinguish between AI-generated content and human-written posts. This watermarking system could help prevent AI chatbots from being exploited for misinformation and disinformation, as well as fraud in schools and business environments.

Now, the technology company says it is releasing an open-source version of the technology so that other generative AI developers can similarly watermark the output of their large language models, says Pushmeet Kohli at Google DeepMind, the company’s AI research team, which combines the former Google Brain and DeepMind labs. “SynthID is not a silver bullet for identifying AI-generated content, but it is an important building block for developing more reliable AI identification tools,” he says.

Independent researchers expressed similar optimism. “No known watermarking method is foolproof, but I really think this could help catch some AI-generated misinformation and academic fraud,” says Scott Aaronson at the University of Texas at Austin, who previously worked on AI safety at OpenAI. “We hope that other leading language model companies, such as OpenAI and Anthropic, will follow DeepMind’s lead in this regard.”

In May of this year, Google DeepMind revealed that it had implemented its SynthID method for watermarking AI-generated text and video from Google’s Gemini and Veo AI services, respectively. The company has now published a paper in the journal Nature showing that SynthID generally performs better than similar AI watermarking techniques for text. The comparison involved evaluating how easily the responses from different watermarked AI models could be detected.

In Google DeepMind’s AI watermarking approach, as a model generates a sequence of text, a “tournament sampling” algorithm subtly steers it toward selecting certain word “tokens”, creating a statistical signature that is detectable by associated software. The process randomly pairs candidate word tokens in tournament-style brackets, with the winner of each pair determined by which scores higher according to a watermark function. Winners advance through successive tournament rounds until only one token remains. This “layered approach” “further complicates the potential for reverse engineering and attempts to remove watermarks,” says Furong Huang at the University of Maryland.
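The tournament procedure described above can be sketched in a few lines of Python. This is a toy illustration, not Google’s implementation: the watermark function here (`g_score`) is just a keyed hash, the vocabulary and candidate sampling are invented for the demo, and real SynthID draws candidates from a language model’s actual token distribution. It does show the core idea, though: tournament winners carry higher watermark scores on average, so a detector that averages scores over a text can separate watermarked output from plain sampling.

```python
import hashlib
import random


def g_score(token: str, context: tuple, layer: int, key: str = "secret") -> float:
    """Pseudo-random watermark score in [0, 1), keyed on the token,
    its recent context, the tournament layer and a secret key."""
    h = hashlib.sha256(f"{key}|{layer}|{context}|{token}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64


def tournament_sample(candidates: list, context: tuple) -> str:
    """Pick one token by tournament: pair candidates, keep the higher-scoring
    token of each pair, and repeat over successive layers until one remains."""
    layer = 0
    while len(candidates) > 1:
        random.shuffle(candidates)
        winners = [
            a if g_score(a, context, layer) >= g_score(b, context, layer) else b
            for a, b in zip(candidates[::2], candidates[1::2])
        ]
        if len(candidates) % 2:          # odd one out gets a bye
            winners.append(candidates[-1])
        candidates = winners
        layer += 1
    return candidates[0]


def detection_score(tokens: list, window: int = 2) -> float:
    """Mean layer-0 watermark score over a text; watermarked text skews high."""
    total = 0.0
    for i, tok in enumerate(tokens):
        context = tuple(tokens[max(0, i - window):i])
        total += g_score(tok, context, 0)
    return total / len(tokens)


# Demo over a toy vocabulary: watermarked vs plain sampling.
vocab = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]
random.seed(0)
watermarked, plain = [], []
for _ in range(300):
    cands = random.sample(vocab, 8)
    watermarked.append(tournament_sample(list(cands), tuple(watermarked[-2:])))
    plain.append(random.choice(cands))

# Watermarked text averages noticeably above the ~0.5 of unwatermarked text.
print(detection_score(watermarked), detection_score(plain))
```

Because each tournament winner beat an opponent on its layer-0 score, watermarked tokens average well above the 0.5 expected from unwatermarked text, which is the statistical signature the detector reads; a key feature of this scheme is that the detector needs only the secret key and the text, not the model itself.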

A “determined adversary” with vast computational power could still remove such AI watermarks, says Hanlin Zhang at Harvard University. But he said SynthID’s approach makes sense given the need for scalable watermarking in AI services.

Google DeepMind researchers tested two versions of SynthID that represent a trade-off between making watermark signatures easier to detect and distorting the text an AI model would typically produce. They showed that the non-distortionary version of the watermark continued to work without noticeably affecting the quality of the 20 million text responses Gemini generated during live experiments.

However, the researchers also acknowledged that the watermarking works best on longer chatbot responses that can be answered in a variety of ways, such as composing an essay or an email; it has not yet been tested on responses to maths or coding questions, which allow far less variation.

Google DeepMind’s team and others have stressed the need for additional safeguards against the misuse of AI chatbots, and Huang likewise recommended stronger regulation. “Requiring watermarks by law would address both practicality and user-adoption challenges, making large language models more secure to use,” she says.


Source: www.newscientist.com