Warner Music Partners with AI Song Generator Suno Following Lawsuit Settlement

Warner Music has entered into a licensing deal with the AI song generator Suno, resolving the copyright infringement lawsuit it filed against the service a year earlier.

As the third-largest music label globally, representing artists like Coldplay, Charli XCX, and Ed Sheeran, Warner becomes the first major record label to officially collaborate with Suno.

Under the terms of the agreement, users can create AI-generated songs on Suno by using simple text prompts, which may include the voices, names, and likenesses of Warner artists who have opted into the service.

Robert Kyncl, CEO of Warner Music Group, said the partnership demonstrates how artificial intelligence can support “professional artists” while showcasing “the values of music.”

“This innovative agreement with Suno is a win for the creative community that will benefit everyone involved,” he declared. “As Suno’s user base and monetization rapidly grow, we recognized this opportunity to create a revenue model and enhance fan experiences.”


As part of the agreement, Suno, often dubbed the ChatGPT of music, committed to modifying its platform to introduce a new, more strictly licensed model next year, including download limitations for users.

Suno announced that only paying members will be permitted to download its AI music creations, and even these members will be subject to extra fees for downloads, as well as a cap on the number of creations they can produce.

This initiative aims to tackle the proliferation of AI tracks generated on Suno as the current version of the service is phased out, and to avoid flooding streaming platforms with an oversupply of AI music.

This agreement comes shortly after Warner Music reached a settlement and partnership agreement with rival AI music generation platform Udio.

Previously, the world’s largest record labels sued both Suno and Udio for copyright violations, asserting that their technologies misappropriated music and churned out millions of AI-generated songs without artists’ consent.

Universal Music, the leading label worldwide, was the first to announce a settlement with these companies when it concluded an agreement with Udio last month. While Universal continues to pursue legal action against Suno, Sony Music has filed lawsuits against both Suno and Udio.


In conjunction with the deal with Warner Music, Suno has also acquired live music and concert discovery platform Songkick for an undisclosed figure.

The UK government is currently consulting on a new intellectual property framework for AI, which was initially expected to enable AI firms to use the creative community’s work without approval for model training.

This issue has ignited significant backlash from creators, who advocate for an opt-in system that would enable companies to identify and license their work while ensuring creators receive compensation when their work is utilized.

Technology Secretary Liz Kendall indicated last week her intention to “reset” the discussion, expressing support for artists’ appeals to prevent their work from being exploited by AI companies without remuneration.

Source: www.theguardian.com

Elon Musk’s xAI Secures Approval for Methane Gas Generators in Tennessee

Elon Musk’s AI venture, xAI, has received authorization to use methane gas generators at a large data center in Memphis, Tennessee. The county health department approved permits for 15 generators on Wednesday, a decision that has ignited protests from local residents and environmental advocates who argue that the generators will pollute the area.

“Our local officials are meant to safeguard our right to clean air, yet we are witnessing their failures,” stated Keshaun Pearson, director of a Memphis community environmental nonprofit.

xAI established a sizable data center in Memphis about a year ago and brought in a number of portable methane gas generators to meet the facility’s high energy demands. Although xAI lacked permits for these generators, it appears to have exploited a loophole that allows such turbines to operate without permits as long as they are not stationed at the same site for more than 364 days.

In January, xAI sought approval for 15 generators. After extensive public meetings and community protests, the Shelby County Health Department approved the request. Satellite images provided to the Guardian by the Southern Environmental Law Center, an environmental nonprofit, reveal that at least 24 turbines were still operating at the xAI facility as of Tuesday.

“xAI welcomes the decision announced today,” a company spokesperson said in a statement. “Our on-site power generation utilizes state-of-the-art emission control technology, making this facility one of the cleanest in the nation.”


Environmental organizations question how clean xAI’s on-site power generation really is. Research by the Southern Environmental Law Center indicates that the turbines could emit thousands of tons of harmful nitrogen oxides, as well as toxic substances such as formaldehyde.

“The decision to issue air permits to xAI for polluting gas turbines dismisses the voices of the countless Memphians who opposed this permit,” remarked Amanda Garcia, a senior attorney at the Southern Environmental Law Center. She said the health department is allowing yet another polluter to set up shop in an already overburdened community without adequate safeguards.

Situated in an industrial area of Memphis, xAI’s facility is surrounded by neighborhoods that have long struggled with pollution. These historically Black communities face elevated rates of respiratory disease and asthma, and life expectancy there is shorter than in other parts of the city. Studies indicate that these areas have a cancer risk four times higher than the national average.

The pollution from xAI’s operations, which particularly affects nearby Black neighborhoods, has drawn attention from civil rights groups such as the NAACP, which has filed a lawsuit against the company alleging that xAI breached the Clean Air Act by unlawfully installing and operating methane gas generators.

“The NAACP is hopeful that the permitting of the 15 generators at xAI will bring greater transparency and accountability regarding methane emissions, yet this decision overlooks the objections of the community. We remain committed to holding both xAI and the health department accountable,” the organization stated.

Source: www.theguardian.com

AI Avatar Generator Synthesia Partners with Shutterstock to License Training Footage

The UK startup, valued at $2.1 billion (£1.6 billion), uses artificial intelligence to create lifelike avatars. It has recently partnered with Shutterstock, the stock footage company, to enhance its technology.

Synthesia is paying Shutterstock an undisclosed sum to access its video library for training its AI models. By incorporating these clips into its models, Synthesia aims to improve the realism, vocal tones, and body language of its avatars.

Synthesia has licensed the actors’ portraits for a three-year period and compensates them for up to six hours of filming work. Illustration: Synthesia.io

In a statement, Synthesia expressed their goal of enhancing the realism and expressiveness of AI-generated avatars through this partnership with Shutterstock. They aim to bring these avatars closer to human-like performance standards.

The collaboration has sparked discussions around the use of copyrighted material by AI companies without proper permission. The UK government’s proposal to relax copyright laws has faced criticism from creative industry experts.

Synthesia creates digital avatars using human actors, and these avatars are used by clients including Lloyds Bank and British Gas. Its technology is also employed by organizations such as the NHS, the European Commission, and the United Nations.

Recently, Synthesia announced that they would provide stock options to the actors featured in their popular avatars. The company licenses the actors’ portraits for three years and compensates them for filming work.

Synthesia prohibits the use of stock avatars for political or news-related purposes. Illustration: Synthesia.io

Synthesia does not allow its stock avatars to be used for political or news-related content. Instead, the Shutterstock footage will be used to improve its models’ understanding of body language and workplace settings, helping to create more realistic scenarios for the avatars.

Established in 2017 by two Danish entrepreneurs and two academics, Synthesia, based in London, reached a valuation of $2.1 billion this year through a funding round that raised $180 million.

Beeban Kidron, a vocal critic of the government’s copyright proposals, pointed to the Shutterstock agreement as evidence that the government’s stance on copyright is flawed.

The government argues that current copyright regulations need to evolve to support the full potential of AI across the creative, media, and technology sectors.

Source: www.theguardian.com

OpenAI introduces a new image generator feature for ChatGPT

Chatbots were originally designed to chat. But they can generate images too.

On Tuesday, OpenAI strengthened its ChatGPT chatbot with new technology designed to generate images from detailed, complex and unusual instructions.

For example, if a user describes a four-panel comic strip, specifying which characters appear in each panel and what they say to one another, the technology can instantly generate an elaborate comic.
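For readers who want to try this kind of prompting programmatically, here is a minimal sketch using OpenAI’s Python SDK and its Images endpoint. The model name “dall-e-3” is a stand-in assumption; the article does not say under what name, if any, the newer in-ChatGPT generator is exposed to developers.

    # Minimal sketch: generating an image from a detailed prompt with the
    # OpenAI Python SDK. It uses the existing Images endpoint with
    # "dall-e-3" as a stand-in model name (an assumption); the newer
    # in-ChatGPT image model may be exposed under a different name.
    # Requires OPENAI_API_KEY to be set in the environment.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "A four-panel comic strip: panel 1, a robot orders coffee; "
        "panel 2, the barista asks for a name; panel 3, the robot says "
        "'GPT'; panel 4, the cup is labelled 'Jeepy Tea'."
    )

    response = client.images.generate(
        model="dall-e-3",     # stand-in model name
        prompt=prompt,
        size="1024x1024",
        n=1,
    )

    print(response.data[0].url)  # URL of the generated image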

Previous versions of ChatGPT could generate images, but they could not reliably combine such broad concepts into a single image.

The new version of ChatGPT illustrates a broader change in artificial intelligence technology. Having started as mere text-generating systems, chatbots have transformed into tools that combine chat with a variety of other abilities.

The technology also supports a new version of ChatGPT, called GPT-4o, which allows the chatbot to receive and respond to voice commands, images and videos. Users can even talk to it.

Released at the end of 2022, the original ChatGPT learned its skills by analyzing huge amounts of text from across the internet. It learned to answer questions, write poetry and generate computer code.

It could not generate images. But about a year later, OpenAI added an image generator called DALL-E to ChatGPT. However, ChatGPT and DALL-E remained separate systems.

Now, OpenAI has built a single system that learns a wide range of skills from both text and images. When generating images, the system can draw on everything ChatGPT has learned from the internet.

“This is a whole new kind of technology under the hood,” said Gabriel Goh, a researcher at OpenAI. “We are not splitting image generation and text generation apart. The hope is that everything is done together.”

Traditionally, AI image generators have struggled to create images that differ significantly from existing images. For example, if you asked an image generator for a picture of a bike with triangular wheels, it would struggle.

Goh said the new ChatGPT could handle this type of request.

Images of a vehicle with triangular wheels made using OpenAI’s new ChatGPT image generator.

OpenAI said that starting Tuesday, this new version of ChatGPT would be available to people using both the free and paid versions of the chatbot. That includes ChatGPT Plus, a $20-a-month service, and ChatGPT Pro, a $200-a-month service that provides access to all of the company’s latest tools.

(The New York Times sued OpenAI and its partner, Microsoft, in December for copyright infringement of news content related to AI systems.)

Source: www.nytimes.com

Security Concerns Raised by the Realism of OpenAI’s Sora Video Generator

AI program Sora generated this video featuring an android based on text prompts. Credit: Sora/OpenAI

OpenAI has announced a program called Sora, a state-of-the-art artificial intelligence system that can turn text descriptions into photorealistic videos. The video generation model has added to the excitement over advances in AI technology, along with growing concerns about how synthetic deepfake videos could exacerbate misinformation and disinformation during a critical election year around the world.

The Sora AI model can currently create videos of up to 60 seconds using text instructions alone, or text combined with an image. One demonstration video begins with a text prompt describing a “stylish woman walking down a Tokyo street filled with warmly glowing neon lights and animated city signs.” Other examples include more fantastical scenarios, such as dogs frolicking in the snow, vehicles driving along roads, and sharks swimming through the air between city skyscrapers.

“Like other technologies in generative AI, there is no reason to believe that text-to-video conversion will not continue to advance rapidly. We are increasingly approaching a time when it will be difficult to tell the fake from the real,” says Hany Farid at the University of California, Berkeley. “Combining this technology with AI-powered voice cloning could open up entirely new ground in terms of creating deepfakes of things people say and do that they have never actually done.”

Sora is based on some of OpenAI’s existing technologies, including the image generator DALL-E and the GPT large language models. Although text-to-video AI models have so far lagged behind other generative technologies in terms of realism and accessibility, Sora’s demonstrations are “orders of magnitude more believable and less cartoonish” than what came before, says Rachel Tobac, co-founder of SocialProof Security, a white-hat hacking organization focused on social engineering.

To achieve this higher level of realism, Sora combines two different AI approaches. The first is a diffusion model, similar to those used in AI image generators such as DALL-E; these models learn to gradually transform randomized image pixels into a coherent image. The second is the transformer architecture, which is used to contextualize and stitch together sequential data. Large language models, for example, use transformer architectures to assemble words into comprehensible sentences. In this case, OpenAI broke video clips down into visual “spacetime patches” that Sora’s transformer architecture can process.
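As a rough illustration of those two ideas (splitting a clip into spacetime patches, then iteratively denoising them), here is a toy NumPy sketch. The patch sizes, the number of steps, and the stand-in denoiser function are illustrative assumptions, not OpenAI’s actual architecture.

    # Toy illustration of the two ideas described above: (1) cut a video
    # into "spacetime patches", and (2) refine noisy patches step by step.
    # NumPy only; patch sizes and the stand-in "denoiser" are illustrative
    # assumptions, not OpenAI's actual model.
    import numpy as np

    def to_spacetime_patches(video, pt=4, ph=16, pw=16):
        """Split a (frames, height, width, channels) clip into flattened
        spacetime patches of shape (pt, ph, pw, channels)."""
        f, h, w, c = video.shape
        video = video[: f - f % pt, : h - h % ph, : w - w % pw]
        f, h, w, c = video.shape
        patches = video.reshape(f // pt, pt, h // ph, ph, w // pw, pw, c)
        patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)
        return patches.reshape(-1, pt * ph * pw * c)  # (tokens, token_dim)

    def toy_denoiser(noisy_tokens, target_tokens):
        """Stand-in for the transformer: nudge noisy tokens toward the
        target. A real model would predict this direction from the text
        prompt rather than being handed the answer."""
        return noisy_tokens + 0.2 * (target_tokens - noisy_tokens)

    rng = np.random.default_rng(0)
    clip = rng.random((16, 64, 64, 3)).astype(np.float32)   # fake video clip
    clean = to_spacetime_patches(clip)

    tokens = rng.standard_normal(clean.shape).astype(np.float32)  # pure noise
    for _ in range(20):                                           # denoise loop
        tokens = toy_denoiser(tokens, clean)

    print("tokens per clip:", clean.shape[0],
          "| mean error:", float(np.abs(tokens - clean).mean()))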

Sora’s videos still contain many mistakes, such as a walking person’s left and right feet swapping places, a chair floating randomly in the air, and a bitten cookie magically showing no bite marks. Still, Jim Fan, a senior research scientist at Nvidia, praised Sora on the social media platform X as a “data-driven physics engine” that can simulate the world.

The fact that Sora’s videos still exhibit strange glitches when depicting complex scenes with lots of movement suggests that such deepfake videos will remain detectable for now, says Arvind Narayanan at Princeton University. But he also warned that in the long term, “we need to find other ways to adapt as a society.”

OpenAI has been holding off on making Sora publicly available while it conducts “red team” exercises in which experts attempt to break safeguards in AI models to assess Sora's potential for abuse. An OpenAI spokesperson said the select group currently testing Sora are “experts in areas such as misinformation, hateful content, and bias.”

This testing is important, because synthetic videos could allow malicious actors to generate fake footage, for example to harass someone or sway a political election. Misinformation and disinformation fueled by AI-generated deepfakes ranks as a major concern for leaders in academia, business, government, and other fields, as well as for AI experts.

“Sora is fully capable of creating videos that have the potential to deceive the public,” Tobac said. “Videos don't have to be perfect to be trustworthy, as many people still don't understand that videos can be manipulated as easily as photos.”

Tobac said AI companies will need to work with social media networks and governments to combat the massive misinformation and disinformation that could arise after Sora is released to the public. Defenses could include implementing unique identifiers, or “watermarks,” for AI-generated content.

When asked whether OpenAI plans to make Sora more widely available in 2024, an OpenAI spokesperson said the company is “taking several important safety steps” before making Sora available in OpenAI’s products. For example, the company already uses automated processes aimed at preventing its commercial AI models from producing extreme violence, sexual content, hateful imagery, and depictions of real politicians and celebrities. With more people than ever before participating in elections this year, these safety measures are extremely important.


Source: www.newscientist.com

Understanding ImageFX: A Comprehensive Guide to Google’s New AI Image Generator

Google has been lagging behind in artificial intelligence. While OpenAI’s innovative DALL-E AI art generator was released two years ago, Google has only recently released a competing product.

The software, known as ImageFX, is backed by one of the largest technology companies and a substantial amount of data. So how is this data accumulated?

In brief, ImageFX has produced some impressive images that rival the best. But how does it work? Can it be accessed now? And have major problems in the AI art world been solved?

How to use Google ImageFX

Google ImageFX is currently available in countries like the United States, Kenya, New Zealand, and Australia.

If you attempt to access the site in a country like the UK, you’ll see a warning stating, “This tool is not yet available in your country.”

To access it from one of the currently available countries, visit Google’s AI Test Kitchen and create an account. Once everything is set up, you can start entering prompts.

Even if you’re not in one of the listed countries, the website is still worth visiting. Google allows you to sign up for notifications about when the platform becomes available in your area.

How good is Google ImageFX?

There’s no denying that Google is late to the game. OpenAI’s DALL-E was released in January 2021, and Midjourney followed a year later. So did Google’s delay pay off in terms of quality?

Two images generated by ImageFX. On the left is a room with an art desk, and on the right is a painting of a vampire – Credit: ImageFX

The images released so far demonstrate that ImageFX is capable of producing content at a very high level. Detailed and contextual, ImageFX is an unsurprisingly capable image generator.

But that’s expected. AI art has made significant progress over the years, and Google’s main competitors are producing similarly high-quality work and have been doing so for much longer.

The significant advantage of ImageFX at the moment is that it’s free (in select countries). Both Midjourney and DALL-E largely sit behind paywalls or restricted services, so it’s worth making the most of ImageFX before that changes.

ImageFX also includes a unique feature called “expressive chips.” These allow users to quickly edit the prompt and try different variations. For example, if you request a portrait of a woman, you can quickly switch the style to abstract, hand-drawn, or even an oil painting.

How does it work?

Basically, Google ImageFX works like any other AI art generator. This involves several steps, starting with obtaining an image database large enough for training.

Google has not disclosed the source of its training data, but it likely includes a combination of internal sources, collaborations, and possibly web scraping and user-generated content.

Once the database is built, a model is trained on these images to learn the relationships between the words and visual concepts in the images, possibly through a diffusion model.

These models start with random noise and gradually refine the image, guided by the training data and the accompanying text descriptions. By repeating this process, the model essentially learns the relationships between words, images, and context.

This training is what allows ImageFX and other AI image generators to understand prompts: the model has learned which words are associated with which images.
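To make that training idea concrete, here is a deliberately tiny sketch of the underlying objective: corrupt an image with noise, then train a model to predict that noise given the noisy image and a text embedding. The 8x8 “images,” random embeddings, and linear model are hypothetical simplifications, not Google’s ImageFX pipeline; with real data, this pairing of captions and pictures is how a model comes to associate words with visual content.

    # Tiny sketch of the diffusion-style training objective described
    # above: corrupt an image with noise, then train a model to predict
    # that noise from the noisy image plus a text embedding. The 8x8
    # "images", random embeddings, and linear model are hypothetical
    # simplifications, not Google's ImageFX pipeline.
    import numpy as np

    rng = np.random.default_rng(1)
    img_dim, txt_dim = 8 * 8, 16                  # flattened 8x8 "images"
    W = np.zeros((img_dim + txt_dim, img_dim))    # linear noise predictor

    def predict_noise(noisy_img, text_emb):
        return np.concatenate([noisy_img, text_emb]) @ W

    lr = 0.01
    for step in range(2000):
        # Fake training pair: an "image" and the embedding of its caption.
        image = rng.random(img_dim)
        text_emb = rng.standard_normal(txt_dim)

        # Corrupt the image with noise at a random strength.
        noise = rng.standard_normal(img_dim)
        alpha = rng.uniform(0.1, 0.9)
        noisy = alpha * image + (1 - alpha) * noise

        # Predict the noise and take one gradient step on squared error.
        features = np.concatenate([noisy, text_emb])
        error = predict_noise(noisy, text_emb) - noise
        W -= lr * np.outer(features, error)

    print("finished toy training; mean |error| on last sample:",
          float(np.abs(error).mean()))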

How is it linked to Google Bard?

Google Bard is probably the biggest competitor to the AI chatbot ChatGPT. Google had been working on the chatbot for some time before releasing it publicly in 2023.

If ImageFX handles imagery, Bard handles words and context. The goal is to combine the two into a single, more capable AI model, similar to OpenAI’s pairing of ChatGPT and DALL-E (OpenAI’s image generator).

Google Bard is currently in testing but will soon be fully operational with the recently announced Google Gemini system.

In theory, this could mean a platform you could ask to create a board game, and it would return the rules and lore along with all the images, boards, and content. Or it could write a series of books complete with accompanying illustrations.

Does ImageFX produce bad images?

There’s a problem with AI art… people. When models are trained on artwork made by humans and then used by humans, the less savoury sides of human nature tend to creep in.

Previous AI art generators displayed sexist, biased, and sometimes intensely graphic images. This is a problem that all major technology companies are trying to tackle, including Google with ImageFX.

“All images generated with ImageFX are marked with SynthID, a tool developed by Google DeepMind that adds digital watermarks directly to the content we generate,” Google says.

“SynthID watermarks are imperceptible to the human eye but can be detected for identification. Additionally, all images contain metadata, so when you encounter an AI-generated image, you can get more information about it.”
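For intuition about how a watermark can be invisible to people yet detectable by software, here is a toy least-significant-bit example in NumPy. It is emphatically not SynthID, which uses a far more robust, learned watermark designed to survive editing; it only illustrates the basic principle of imperceptible, machine-readable marking.

    # Toy least-significant-bit watermark: invisible to the eye, trivial
    # for software to detect. This is NOT SynthID, which uses a far more
    # robust, learned watermark; it only illustrates the basic principle.
    import numpy as np

    rng = np.random.default_rng(42)
    image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

    # A fixed pseudo-random bit pattern acts as the "signature".
    signature = rng.integers(0, 2, size=image.shape, dtype=np.uint8)

    # Embed: overwrite each pixel's least-significant bit.
    watermarked = (image & np.uint8(0xFE)) | signature

    # Each pixel changes by at most 1 out of 255, so the eye cannot tell.
    print("max pixel change:",
          int(np.abs(watermarked.astype(int) - image.astype(int)).max()))

    # Detect: read back the least-significant bits and compare.
    recovered = watermarked & np.uint8(0x01)
    print("signature match:", f"{(recovered == signature).mean():.0%}")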

In addition to this, Google announced that it has improved the safety of its training data, reducing problematic output such as violent, offensive, or sexually explicit content. This extends to a reduced ability to create images of real people.


Source: www.sciencefocus.com