Disney says the AI image generator Midjourney was trained on films like ‘The Lion King’
Maximum Film/Alamy
Since the launch of ChatGPT, OpenAI’s generative AI chatbot, three years ago, we’ve witnessed dramatic shifts across many aspects of our lives. One area struggling to keep pace, however, is copyright law, which we are still trying to apply much as we did before AI.
It’s widely recognized that leading AI firms built their models by harvesting data from the internet, including copyrighted content, often without securing prior approval. This year, prominent copyright holders have fought back, filing a wave of lawsuits against AI companies for alleged copyright violations.
The most notable lawsuit was initiated in June by Disney and Universal, claiming that the AI image generation platform Midjourney was trained using their copyrighted materials and enabled users to produce images that “clearly included and replicated Disney and Universal’s iconic characters.”
The proceedings are still underway. In a recent August filing, Midjourney asserted that “the limited monopoly granted by copyright must yield to fair use,” arguing that its training was transformative and that fair use should permit AI companies to train models on copyrighted works.
Midjourney’s statements highlight that the copyright debate is more complex than it might seem at first glance. “Many believed copyright would serve as the ultimate barrier against AI, but that’s not entirely true,” remarks Andres Guadamuz at the University of Sussex, UK, expressing surprise at how little impact copyright has had on the progress of AI enterprises.
This is occurring even as some governments engage in discussions on the matter. In October, the Japanese government made an official appeal to OpenAI, urging the company behind the Sora 2 AI video generator to honor the intellectual property rights of its culture, including its manga and beloved video games like those from Nintendo.
Sora 2 is embroiled in further controversy due to its capability to generate realistic footage of real individuals. OpenAI recently tightened restrictions on representations of Martin Luther King Jr. after family representatives raised concerns about a depiction of his iconic “I Have a Dream” speech that included inappropriate sounds.
“While free speech is crucial when portraying historical figures, OpenAI believes that public figures and their families should ultimately control how their likenesses are represented,” the company stated. The restriction is only a partial one, as celebrities and public figures must still opt out of having their likenesses used in Sora 2. Some argue this remains too permissive. “No one should have to tell OpenAI if they wish to avoid being deepfaked,” says Ed Newton-Rex, a former AI executive and founder of the campaign group Fairly Trained.
In certain instances, AI companies are being held to account for their practices, as highlighted by one of the largest proposed settlements of the past year, reached in September. Three authors had accused Anthropic, the firm behind the Claude chatbot, of deliberately downloading over 7 million pirated books to train its AI models.
A judge reviewed the case and concluded that even though the firm had used this material for training, that use was sufficiently “transformative” that it did not inherently infringe copyright. The piracy allegations, however, were serious enough to warrant a trial. Anthropic ultimately decided to settle the lawsuit for at least $1.5 billion.
“Significantly, AI companies appear to be strategizing their responses and may end up paying out a mix of settlements and licensing deals,” Guadamuz notes. “Only a small number of companies are likely to collapse due to copyright infringement lawsuits,” he adds. “AI is here to stay, even if many established players may fail due to litigation and market fluctuations.”
A court in Munich has determined that OpenAI’s ChatGPT breached German copyright laws by utilizing popular songs from renowned artists to train its language model, which advocates for the creative industry have labeled a pivotal ruling for Europe.
The Munich regional court supported the German music copyright association GEMA, finding that ChatGPT had gathered protected lyrics from well-known musicians in order to “learn” them.
GEMA, an organization that oversees the rights of composers, lyricists, and music publishers with around 100,000 members, initiated legal action against OpenAI in November 2024.
This case was perceived as a significant test for Europe in its efforts to prevent AI from harvesting creative works. OpenAI has the option to appeal the verdict.
ChatGPT lets users pose inquiries and issue commands to a chatbot, which replies with text that mimics human language patterns. The foundational model of ChatGPT is trained on widely accessible data.
The lawsuit focused on nine of the most iconic German hits from recent decades, which ChatGPT employed to refine its language skills.
This included Herbert Grönemeyer’s 1984 hit Männer (Men) and Helene Fischer’s Atemlos durch die Nacht (Breathless Through the Night), which became the unofficial anthem of the German team during the 2014 World Cup.
The judge ruled that OpenAI must pay undisclosed damages for unauthorized use of copyrighted materials.
Kai Welp, GEMA’s general counsel, mentioned that GEMA is now looking to negotiate with OpenAI about compensating rights holders.
The San Francisco-based company, co-founded by Sam Altman and Elon Musk, argued before the Munich court that its language models learn from the entire training dataset rather than storing or copying specific songs.
OpenAI contended that since the outputs are created in response to user prompts, the users bear legal responsibility, an argument the court dismissed.
GEMA celebrated the ruling as “Europe’s first groundbreaking AI decision,” indicating that it might have ramifications for other creative works.
Tobias Holzmüller, GEMA’s CEO, remarked that the verdict demonstrates that “the internet is not a self-service store, and human creative output is not a free template.”
“Today, we have established a precedent to safeguard and clarify the rights of authors. Even AI tool operators like ChatGPT are required to comply with copyright laws. We have successfully defended the livelihood of music creators today.”
The Berlin law firm Raue, representing GEMA, stated that the court’s ruling “creates a significant precedent for the protection of creative works and conveys a clear message to the global tech industry,” while providing “legal certainty for creators, music publishers, and platforms across Europe.”
The ruling is expected to have ramifications extending beyond Germany as a legal precedent.
The German Journalists Association also praised the decision as a “historic triumph for copyright law.”
OpenAI responded that it would contemplate an appeal. “We disagree with the ruling and are evaluating our next actions.” The statement continued, “This ruling pertains to a limited set of lyrics and does not affect the millions of users, companies, and developers in Germany who utilize our technology every day.”
Furthermore, “We respect the rights of creators and content owners and are engaged in constructive discussions with various organizations globally that can also take advantage of this technology.”
OpenAI is currently facing lawsuits in the U.S. from authors and media organizations alleging that ChatGPT was trained on their copyrighted materials without consent.
An artificial intelligence company based in London has achieved a significant victory in a High Court case that scrutinized the legality of training an AI model on extensive copyrighted data without authorization.
Stability AI, which counts Oscar-winning Avatar director James Cameron among its board members, successfully defended itself against Getty Images’ allegations that it infringed the international photography agency’s copyright.
This ruling is seen as a setback for copyright holders’ exclusive rights to benefit from their creations. Rebecca Newman, a legal director at Addleshaw Goddard, cautioned that it suggests “the UK derivative copyright system is inadequate to protect creators”.
There was evidence indicating that Getty images were used in training Stability’s model, which enables users to generate images via text prompts, and in certain instances Stability was found to have infringed Getty’s trademarks.
Judge Joanna Smith remarked that determining the balance between the interests of the creative industries and AI sectors holds “real social significance.” However, she could only address relatively limited claims as Getty had to withdraw parts of its case during the trial this summer.
Getty Images initiated legal action against Stability AI for violations of its intellectual property rights, claiming the AI company scraped and replicated millions of images with “complete indifference to the content of the training data.”
This ruling comes amid ongoing debates about how the Labour government should legislate on copyright and AI matters, with artists and authors like Elton John, Kate Bush, Dua Lipa, and Kazuo Ishiguro advocating for protections. In contrast, tech firms are seeking broader access to copyrighted material to develop more powerful generative AI systems.
The government is conducting a consultation regarding copyright and AI, stating: “The uncertainty surrounding the copyright framework is hindering the growth of both the AI and creative sectors. This situation must not persist.”
Lawyers at Mishcon de Reya, who have been following the case, note that ministers are contemplating introducing a “text and data mining” exception to UK copyright law, which would enable copyrighted works to be used for training AI models unless rights holders opt out.
Due to a lack of evidence that the training took place in the UK, Getty was compelled to withdraw its original copyright claim. Nevertheless, the company proceeded with its lawsuit, asserting that Stability still held copies of its visual assets, which it describes as the “lifeblood” of its business. The suit alleged trademark infringement and “passing off,” as some generated images bore Getty’s watermark.
Highlighting the complexities of AI copyright litigation, Getty essentially argued that Stability’s image generation model, known as Stable Diffusion, itself constitutes an “infringing copy,” since its creation would have represented copyright infringement had it taken place in the UK.
The judge determined that “AI models like Stable Diffusion that do not (and never have) stored or reproduced copyrighted works are not ‘infringing copies.'” She declined to adjudicate on the misrepresentation claims but ruled in favor of some of Getty’s trademark infringement claims regarding the watermark.
In a statement, Getty Images remarked: “We are profoundly worried that even well-resourced organizations like Getty Images face considerable challenges in safeguarding creative works due to the absence of transparency requirements. We have invested millions with one provider alone, and we must now continue our pursuit in other jurisdictions.”
“We urge governments, including the UK, to establish more robust transparency regulations. This is crucial to avoid expensive legal disputes and ensure creators can uphold their rights.”
Stability AI’s General Counsel, Christian Dowell, stated, “We are pleased with the court’s ruling on the remaining claims in this case. Although Getty’s decision to voluntarily withdraw most of the copyright claims at the trial’s conclusion left the court with only a fraction of the claims, this final decision addresses the core copyright issues. We appreciate the time and effort the court has dedicated to resolving the significant matters in this case.”
OpenAI has broken ranks with the Tech Council of Australia over copyright restrictions, asserting that its AI models “will be utilized in Australia regardless.”
Chris Lehane, the chief international affairs officer of the company behind ChatGPT, delivered a keynote address at SXSW Sydney on Friday. He discussed the geopolitics surrounding AI, the technological future in Australia, and the ongoing global discourse about employing copyrighted materials for training extensive language models.
Scott Farquhar, CEO of the Tech Council and co-founder of Atlassian, previously remarked that Australia’s copyright laws are “extremely detrimental to companies investing in Australia.”
In August, it was disclosed that the Productivity Commission was evaluating whether tech companies should receive exemptions from copyright regulations that hinder the mining of text and data for training AI models.
However, when asked whether Australia risked losing investment in AI development and data centers if it did not relax its copyright laws to allow fair use, Lehane told the audience:
“No…we’re going to Australia regardless.”
Lehane stated that countries typically adopt one of two stances regarding copyright restrictions and AI. One stance aligns with a US-style fair use copyright model, promoting the development of “frontier” (advanced, large-scale) AI; the other maintains traditional copyright positions and restricts the scope of AI.
“We plan to collaborate with both types of countries. We aim to partner with those wanting to develop substantial frontier models and robust ecosystems or those with a more limited AI range,” he expressed. “We are committed to working with them in any context.”
When questioned about Sora 2 (OpenAI’s latest video generation model) being launched and monetized before copyright usage was addressed, he stated that the technology benefits “everyone.”
“This is the essence of technological evolution: innovations emerge, and society adapts,” he commented. “We are a nonprofit organization, dedicated to creating AI that serves everyone, much like how people accessed libraries for knowledge generations ago.”
OpenAI on Friday removed the ability to produce videos featuring the likeness of Martin Luther King Jr, after his family’s complaints about the technology.
Lehane also mentioned that the competition between China and the United States in shaping the future of global AI is “very real” and that their values are fundamentally different.
“We don’t see this as a battle, but rather a competition, with significant stakes involved,” he stated, adding that the U.S.-led frontier model “will be founded on democratic values,” while China’s frontier model is likely to be rooted in authoritarian principles.
“Ultimately, one of the two will emerge as the player that supports the global community,” he added.
When asked if he had confidence in the U.S. maintaining its democratic status, he responded: “As mentioned by others, democracy can be a convoluted process, but the United States has historically shown the ability to navigate this effectively.”
He also stated that the U.S. and its allies, including Australia, need to generate gigawatts of energy weekly to establish the infrastructure necessary for sustaining a “democratic lead” in AI, while Australia has the opportunity to create its own frontier AI.
He emphasized that “Australia holds a very unique position” with a vast AI user base, around 30,000 developers, abundant talent, a quickly expanding renewable energy sector, fiber optic connectivity with Asia, and its status as a Five Eyes nation.
Take a look at Sam Altman. Seriously, check Google Images, and you’ll notice an abundance of photos featuring the endearing Lost Puppy of Silicon Valley, showcasing the OpenAI chief sporting a clever grin. Yet I suggest hiding the lower half of his face in these images. Suddenly, Sam’s expression takes on the haunting gaze of the boyfriend of a missing woman, pleading for her return: “Please come home, Sheila. We’re worried about you, and we just want you back.”
Don’t be alarmed if the humor feels misplaced, crude, or somewhat manipulative. I am simply relying on OpenAI’s guiding principle: unless content creators formally and painstakingly opt out, their material is available to be utilized in any manner users see fit. I haven’t received any word from Sam, so I can only assume I know precisely where he is, because I placed Sheila there. After all, he seems to fit the archetype that often accompanies the term “visibly.”
For Sam, the past fortnight has revolved around the debut of the AI video generator Sora 2 (a remarkable enhancement of the Sora released just ten months prior) and his entanglement in rows over copyrighted content. There were also announcements of further interlocking deals between OpenAI and the chip manufacturers Nvidia and AMD, feeding the OpenAI frenzy, with total transaction volume surpassing $1 trillion this year alone. So while you can enjoy videos of meticulously designed characters manipulated into digital puppets by uncreative, bigoted individuals, it also means that, thanks to OpenAI, you could lose your home in a disastrous financial collapse if the bubble bursts.
I don’t wish to offend the creators of Sora by saying this. I’ve strolled through art galleries and realized that if I were to deface an artwork with a ridiculous doodle, it would surprisingly add value; besides, if the artist didn’t want that, they shouldn’t have exposed the work to the public. Moreover, none of the tech giants seem to lead a civilized life, so they probably cannot fathom any creative value worth preserving from being tarnished for profit. If you’ve followed Sam’s frequently shared reading lists, you’ll see they resemble the “Business Philosophy” section of a mediocre airport bookstore. This week, the company mainly wanted to convey that Sora 2 is about being cool and fun. “Seeing your feed filled with memes about yourself isn’t as bizarre as you might think,” Sam assured us. So all is well! Though it is worth noting that while you’re inundated with simulated revenge content in a modern-day version of Byzantium, the person reassuring you is also one of the most influential individuals on the planet, profiting immensely from it, whatever he says about “guardrails.”
I’ve heard people propose that OpenAI’s motto should be “It’s better to ask for forgiveness than permission,” but that misplaces the priority. Its real motto appears to be: “We do what we wish, and you simply deal with it.” Consider Altman’s recent political trajectory. “For those familiar with German history in the 1930s,” Sam forewarned back in 2016, reflecting on Trump’s rise. He seems to have reconciled that concern in time to attend Donald Trump’s second inauguration. Perhaps, to extend his own well-crafted analogy, it’s because he is among the entrepreneurs welcomed into the chancellery to claim their portion of the gains. “Thank you for being such a pro-business, pro-innovation president,” Sam effused to Trump at a recent White House dinner for tech executives. “It’s a refreshing change.” Unsurprisingly, the Trump administration has chosen to evade AI regulation entirely.
On the flip side, recall what Sam and his comrades stated earlier this year when it was suggested that the Chinese AI chatbot DeepSeek might have leveraged some of OpenAI’s work. His organization issued a concerned statement: “We are aware of and investigating indications that DeepSeek may have improperly extracted our models, and will provide further details as we learn more. We are taking proactive and assertive measures to safeguard our technology.” Interestingly, OpenAI appears to be the only entity on earth entitled to combat AI theft.
This week, it took Hollywood talent agencies to coax some form of temporary climbdown from Altman, who is nothing if not rich in flannel, now striving to establish a “new kind of engagement” with those he has belatedly begun referring to as “rights holders.” Many of us remember a time, not long ago, when rights holders were simply those who held the rights; the hint lies within the terminology. Sam, however, embodies the post-rights era. The question arises: if he is the one bestowing creative rights, can we genuinely believe he is not also conferring other types of rights?
OpenAI desires what all such platforms ultimately aim for: users who remain within their realm indefinitely. It is clearly poised to become the new default homepage of the internet, much as Meta once was. Can privacy catastrophes, election-manipulation controversies, and child-safety crises be far off?
Because, incredibly, we have already traversed this life cycle. But I suppose we must go through it again, right? Or more accurately, since Sam’s company is advancing at an unprecedented pace, we already have. First we admire the enigmatic Pied Piper of an engineer as a brilliant and unconventional altruist, only to discover later that he is not as he appears, that his technology poses greater risks than we understood, and that our failure to regulate it has made us the victims. In many ways, it mirrors a bad AI remake of a film we have already seen. If Altman’s model can learn, why can’t we?
Marina Hyde is a columnist for the Guardian
A year at Westminster: John Crace, Marina Hyde, and Pippa Crerar. On Tuesday, December 2nd, Crace, Hyde, and Crerar will reflect on this remarkable year alongside special guests. The event will be streamed live from the Barbican in London and available worldwide. Reserve your ticket here or at Guardian Live.
Do you have thoughts on the subjects discussed in this article? Click here if you would like to send an email response of up to 300 words for publication in our email section.
OpenAI is dedicated to providing copyright holders with “greater control” over character generation following the recent release of the Sora 2 app, which has overwhelmed platforms with videos featuring copyrighted characters.
Sora 2, an AI-driven video creation tool, was launched last week by invitation only. This application enables users to produce short videos from text prompts. A review by the Guardian of the AI-generated content revealed instances of copyrighted characters from shows like SpongeBob SquarePants, South Park, Pokémon, and Rick and Morty.
According to the Wall Street Journal, prior to releasing Sora 2, OpenAI informed talent agencies and studios that they would need to opt out if they wished to prevent the unlicensed use of their material by video generators.
OpenAI told the Guardian that content owners can use a “copyright dispute form” to report copyright violations, though individual artists and studios cannot make blanket opt-outs. Varun Shetty, OpenAI’s head of media partnerships, said the company would work with rights holders to remove content at their request.
On Saturday, OpenAI CEO Sam Altman stated in a blog post that the company has received “feedback” from users, rights holders, and various groups, leading to modifications.
He mentioned that rights holders will gain more “detailed control” as well as enhanced options regarding how their likenesses can be used within the application.
“We’ve heard from numerous rights holders who are thrilled about this new form of ‘interactive fan fiction’ and are confident that this level of engagement will be beneficial for them; however, we want to ensure that they can specify the manner in which the characters are utilized.”
Altman noted that OpenAI will “work with rights holders to determine the way forward,” adding that certain “generation edge cases” will undergo scrutiny within the platform’s guidelines.
He emphasized that the company needs to find a sustainable revenue model from video generation and that user engagement is exceeding initial expectations. This could lead to compensating rights holders for the authorized use of their characters.
“Creating an accurate model requires some trial and error, but we plan to start soon,” Altman said. “Our aim is for this new type of engagement to be even more valuable than revenue sharing, and we hope it’s worth it for everyone involved.”
He remarked on the rapid evolution of the project, reminiscent of the early days of ChatGPT, acknowledging both successful decisions and mistakes made along the way.
Anthropic, an artificial intelligence firm, has agreed to pay $1.5 billion to settle a class action lawsuit brought by a group of authors who allege the company used pirated copies of their books to train its chatbot.
If a judge approves the landmark settlement on Monday, it could signify a significant shift in the ongoing legal conflict between AI companies and writers, visual artists, and other creative professionals who are raising concerns about copyright violations.
The company plans to compensate authors approximately $3,000 for each of the estimated 500,000 books covered by the settlement.
“This could be the largest copyright recovery we’ve seen,” stated Justin Nelson, the authors’ attorney. “This marks a first in the era of AI.”
Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who sued last year, now represent a wider group of writers and publishers whose works were utilized to train the AI chatbot Claude.
In June, a federal judge issued a mixed ruling, finding that training AI chatbots on copyrighted books is not in itself illegal, but that Anthropic had wrongfully acquired millions of books from copyright-infringing sources.
Experts predict that if Anthropic hadn’t settled, it would likely have lost the lawsuit, which was set to go to trial in December.
“We’re eager to see how this unfolds in the future,” commented William Long, a legal analyst at Wolters Kluwer.
U.S. District Judge William Alsup in San Francisco is scheduled to hear the terms of the settlement on Monday.
Why are books important to AI?
Books are crucial as they provide the critical data sources—essentially billions of words—needed to develop the large language models that power chatbots like Anthropic’s Claude and OpenAI’s ChatGPT.
Judge Alsup’s ruling revealed that Anthropic had downloaded over 7 million digitized books, many of which are believed to be pirated. The initial downloads included nearly 200,000 titles from an online library named Books3, assembled by AI researchers outside OpenAI to match the vast collections used to train ChatGPT.
Bartz, the lead plaintiff in this case, saw her debut thriller, The Lost Night, included in the Books3 dataset.
The ruling revealed that Anthropic had also ingested at least 5 million copies of books from the pirate site Library Genesis and around 2 million from the Pirate Library Mirror.
The Authors Guild informed its thousands of members last month that it anticipated compensation of at least $750 per work, potentially much more. A settlement of about $3,000 per work could indicate a reduced pool of affected titles once duplicates and non-copyrighted works are taken into account.
On Friday, Authors Guild CEO Mary Rasenberger stated that the settlement represents “a tremendous victory for authors, publishers, and rights holders,” sending a strong message to the AI industry about the dangers of using pirated works to train AI at the expense of those least able to afford it.
Greetings and welcome to TechScape! By the time this newsletter goes live, you might find yourself captivated by the wedding snapshots of Jeff Bezos and Lauren Sanchez, the most glamorous pairing in the tech news sphere this year. I found the event both tacky and monumental. Seemingly everyone attended, though Charlize Theron wasn’t on the guest list; as she put it: “We might be the only ones not invited to Bezos’ wedding, but that’s okay.”
AI Companies Begin to Prevail in Copyright Disputes
Recently, the tech sector achieved multiple victories regarding the usage of copyrighted materials for developing artificial intelligence products.
A noteworthy judgment from a U.S. judge concluded that Anthropic, the maker of the Claude chatbot, did not breach copyright regulations when it trained on books without securing author consent. Judge William Alsup likened the AI’s use of human writings to “readers aiming to become writers.”
The following day, a ruling favoring Meta emerged: U.S. District Judge Vince Chhabria in San Francisco concluded that the plaintiff authors did not provide adequate proof that the firm’s AI technology would cause “market dilution” by inundating the market with similar works.
On the same day Meta gained its favorable ruling, a group of authors sued Microsoft, accusing the company of copyright infringement linked to its Megatron text generator. Given the rulings favoring Meta and Anthropic, authors are facing a challenging uphill battle.
These cases are minor skirmishes in a larger legal struggle surrounding copyrighted media. Just three weeks ago, Disney and NBCUniversal filed a lawsuit against Midjourney, claiming its AI image generator and upcoming video tools unlawfully utilized iconic characters like Darth Vader and the Simpsons. Meanwhile, major record labels—Sony, Universal, and Warner—sued AI music generator companies Suno and Udio. Additionally, ongoing cases from The New York Times target OpenAI and Microsoft.
The Microsoft suit is the latest over the text used to train AI. As these rulings unfold, a pressing question arises: will determinations for one form of media extend to another?
John Strand, an IP and copyright attorney at Wolf Greenfield, stated, “The impact of copyrighted works on the market is increasingly vital in fair use analysis, and the book market has unique considerations compared to film.”
For Strand, the scenario concerning images seems to favor copyright holders, since AI models have been shown to generate near-identical images from their training data.
Even more startling revelations emerged from the Anthropic verdict. The company had allegedly used 7 million pirated books to establish its AI training database. To rectify this, it bought physical copies in bulk, chopped them up, scanned their text, digitized the content, and then discarded the remains, leaving millions of destroyed books with no practical use. According to Ars Technica, there are very few efficient methods for digitizing books, and they tend to be slow; destructive scanning is among the fastest, which suited an AI sector geared toward swift and disruptive approaches.
The destruction of millions of books illustrates the intense demand for content that AI companies require for their products.
Two stories I reported last week have seen significant developments shortly thereafter.
The Trump-branded mobile phone, known as “T1,” has replaced its “Made in the USA” pledge with language like “proudly American” and “brought to life in America,” according to The Verge.
Trump seems to be mirroring Apple’s strategy. While Apple navigates the manufacturing origin issues, it spotlights the American aspect of the iPhone by branding it as “designed in California.” What’s left unsaid is its assembly in China or India, along with components sourced from various countries. Trump and his family appear to have adopted a similarly ambiguous tagline, although their original commitments seem far more glaring.
The descriptor “proudly American design” now featured prominently on Trump’s site appears to be an obvious nod to Apple’s branding.
Adhering to the “Made in the USA” label carries real legal implications. Companies face litigation over how many products are genuinely produced within the country, and major U.S. trade regulators have set standards for what constitutes that slogan. However, tracing a smartphone’s manufacturing history to meet these criteria proves to be quite complex, according to many experts.
While Trump aims to bring manufacturing back to America with his steep tariffs, it seems he has learned the lessons that other mobile companies have grappled with. Manufacturing smartphones solely in the U.S. is fraught with complications and limitations, creating significant challenges for the final product.
Catch up on last week’s newsletter about the gold Trump phone.
…and Online Age Verification
Photo: Matt Cardy/Getty Images
Last week, I discussed the porn platform Pornhub returning to France. This week, the U.S. Supreme Court ruled in favor of the age verification checks mandated in Texas. Pornhub has blocked access for Texas residents for much of the past two years in protest, much as it did in France for three weeks.
Justice Clarence Thomas summarized the court’s rationale:
“HB 1181 simply requires adults to verify their age before accessing adult explicit materials,” Thomas stated in the majority opinion, which passed with a 6-3 ruling. “This law furthers the state’s significant interest in protecting children from sexually explicit content and appropriately allows users to verify their age using established forms of government-issued identification and shared transaction information.”
Justice Elena Kagan, along with two other liberal justices, voiced their dissent.
The ruling validates Texas’s law and those of nearly 20 other states implementing online age checks. The global climate seems to be shifting away from granting broad access to pornographic content under the banner of free speech.
Experts suggest that the flexible definition of obscenity under Texas law necessitates age checks on platforms containing adult-oriented materials.
“Today is disheartening for advocates of an open internet,” remarked GS Hans, a professor at Cornell Law School. “While the court may not categorize this decision as a landmark ruling, it fundamentally alters free speech jurisprudence and could open the door to encroachments on adult access by endorsing limitations on indecency for minors.”
We’ll monitor the situation closely in July, when Pornhub intends to implement age checks in the UK in line with the Online Safety Act.
Read more: A UK survey indicates that 8% of children aged 8 to 14 have encountered online pornography.
Explore More AI News
This Week in AI: WhatsApp Introduces Summary Feature and Nobel-Winning Genome Model
Meta’s WhatsApp now showcases AI-generated summaries of unread messages. Photo: Martin Meissner/AP
This new feature may seem minor, but even slight modifications to the globe’s most used messaging app can create a significant impact. Meta’s WhatsApp now provides AI-generated summaries of unread messages, according to The Verge.
Apple previously experimented with message summaries, but that venture didn’t succeed, and the company retracted the feature. For a company known for strategically controlled launches, dropping the summaries was quite an embarrassment. The difference here lies in Meta’s consistent track record of releasing AI products over the years.
In more AI-related news, I seldom find new technology captivating, but a recent announcement from Google’s DeepMind AI lab appears promising for the healthcare sector. The new AlphaGenome AI aims to offer comprehensive predictions regarding how a single mutation in human DNA can impact the many biological processes governing genes. DeepMind’s researchers previously won the Nobel prize in chemistry for AlphaFold, a program known for predicting protein structures.
This innovation raises compelling questions made pressing by CRISPR, the groundbreaking gene-editing technique: what happens in humans when their genetic sequences are adjusted remains enigmatic, and AlphaGenome holds promise of shedding light on the issue.
The Danish government is taking action to curb the creation and distribution of AI-generated deepfakes by revising copyright laws, ensuring that individuals hold rights over their own bodies, facial features, and voices.
On Thursday, Danish officials announced they would strengthen protections against digital imitation of personal identities, marking what they believe to be the first such law in Europe.
With support from a broad coalition across political parties, the Ministry of Culture is set to propose amendments to the existing law for consultation before the summer break, with the intention of submitting the changes in the fall.
Deepfake technology is described as an exceedingly realistic digital representation of an individual, including their appearance and voice.
Danish minister of culture Jakob Engel-Schmidt expressed his hope that the proposed legislation will send a “clear message” from parliament.
He stated to the Guardian: “We collectively send a clear message that everyone has the right to their body, their voice, and their facial features.”
He continued: “Humans can exploit digital duplication techniques for various malicious purposes. I will not accept that.”
The initiative reportedly enjoys support from 9 out of 10 MPs, reflecting rapid advancements in AI technology which have made it simpler than ever to create convincing fake images, videos, or sounds that mimic others.
If passed, the changes to Danish copyright law would allow citizens to request the removal of content from online platforms that is shared without their consent.
Additionally, the law would regulate “realistic and digitally generated imitations” of artistic performances without consent, with violations potentially leading to compensation for affected individuals.
The government has clarified that the new regulations will not interfere with parody and satire, which will still be allowed.
“Of course, this is new ground we are breaking, and we are prepared to take further action if platforms do not comply,” Engel-Schmidt remarked.
Engel-Schmidt said other European nations are looking to follow Denmark’s example, and he plans to use Denmark’s upcoming EU presidency to share the initiative with his fellow European leaders.
Should tech platforms fail to comply with the new law, they may face “significant fines,” which could escalate to a matter for the European Commission. “This is why I believe high-tech platforms will take this very seriously,” he added.
Mark Zuckerberg’s Meta secured judicial backing this week in a copyright lawsuit brought by a collective of authors, marking a second legal triumph for the American artificial intelligence industry.
Prominent authors, including Sarah Silverman and Ta-Nehisi Coates, claimed that the owners of Facebook utilized their books without authorization to train AI systems, thereby violating copyright laws.
This ruling comes on the heels of a decision affirming that another major AI player, Anthropic, did not infringe upon authors’ copyrights.
In his ruling on the Meta case, US District Judge Vince Chhabria in San Francisco stated that the authors failed to present adequate evidence that the AI developed by tech companies would harm the market to establish an illegal infringement under US copyright law.
However, the judgment offered some encouragement to American creators who contended that training AI models without consent was unlawful.
Chhabria noted that using copyrighted material without permission for AI training is illegal in “many situations,” contrasting with another federal judge in San Francisco who recently concluded in a separate case that Anthropic’s AI training constituted “fair use” of copyrighted works.
The fair use doctrine permits the utilization of copyrighted works under certain conditions without the copyright holder’s permission, which serves as a vital defense for high-tech firms.
“This ruling does not mean that Meta’s use of copyrighted content to train its language models is lawful,” Chhabria remarked. “It merely indicates that these plaintiffs presented the wrong arguments and failed to establish a record supporting the right one.”
Anthropic is also set to face further legal scrutiny this year after a judge determined that it had illegally amassed over 7 million pirated books in a central library, a use of authors’ works not covered by fair use.
A representative for Boies Schiller Flexner, the law firm representing the authors against Meta, expressed disagreement with the judge’s decision to rule for Meta despite the “uncontroverted record” of the company’s “historically unprecedented” copyright infringement.
A spokesperson for Meta stated that the company valued the decision and characterized fair use as a “critical legal framework” for developing “transformative” AI technology.
In 2023, the authors filed a lawsuit against Meta, asserting that the company exploited unauthorized versions of their books, without consent or remuneration, to train its Llama AI models.
Copyright disputes are placing AI firms in opposition to publishers and creative sectors on both sides of the Atlantic. This tension arises because generative AI models, the foundation of powerful tools like the ChatGPT chatbot, must be trained on extensive datasets, much of which is comprised of copyrighted material.
This lawsuit is part of a series of copyright cases filed by authors, media organizations, and other copyright holders against OpenAI, Microsoft, Anthropic, and other companies over AI training.
AI enterprises claim they are fairly using copyrighted materials to develop systems that create new and innovative content, while asserting that imposing copyright fees on them could threaten the burgeoning AI sector.
Copyright holders maintain that AI firms are unlawfully replicating their works and generating rival content that jeopardizes their livelihoods. Chhabria conveyed empathy toward this argument during the May hearing, reiterating it on Wednesday.
The judge remarked that generative AI could inundate the market with endless images, songs, articles, and books, requiring only a fraction of the time and creativity involved in traditional creation.
“Consequently, by training generative AI models with copyrighted works, companies frequently produce outputs that significantly undermine the market for those original works, thereby greatly diminishing the incentives for humans to create in the conventional manner,” stated Chhabria.
Disney and Universal have filed a lawsuit against an artificial intelligence company, claiming copyright violations. The entertainment titans describe Midjourney’s popular AI image generator as a “bottomless pit of plagiarism,” alleging it replicates the studios’ most iconic characters.
The lawsuit, lodged in federal court in Los Angeles, accuses Midjourney of plundering the two Hollywood studios’ libraries and creating numerous unauthorized copies of key characters, including Darth Vader from Star Wars, Elsa from Frozen, and the Minions from Despicable Me. Midjourney has not yet commented on the matter.
This legal action from Disney and Universal marks a new chapter in the ongoing battle over copyright issues related to artificial intelligence, following prior lawsuits focusing on text and music. So far, these two companies are among the largest industry stakeholders to address the implications for images and videos.
“We are optimistic about the potential of AI technology when used responsibly to enhance human creativity; however, it’s crucial to recognize that piracy and copyright infringement carried out by AI companies is unacceptable,” stated a Disney representative.
Kim Harris, general counsel at NBCUniversal, emphasized the need to protect “the hard work of all the artists whose work entertains and inspires us” and the significant investment the studio makes in its content.
The studios assert that the San Francisco-based company, one of the pioneers in AI-driven image generation, must either cease infringing upon copyrighted works or implement technical measures to prevent the creation of AI-generated images of copied characters.
Nonetheless, the studios claim that Midjourney continues to release updates to its AI image service that deliver ever higher-quality infringing images. The AI can recreate animated visuals from user prompts; such models are trained on vast datasets, often scraped from millions of websites.
In a 2022 interview with Forbes, Midjourney CEO David Holz mentioned that he built the company’s database through extensive “internet scraping.”
The lawsuit, initiated by seven entities holding the copyrights of various Disney and Universal film units, includes examples of AI-generated images of Disney characters like Yoda wielding a lightsaber, as well as Universal characters such as Toothless the dragon, Po from Kung Fu Panda, and Shrek.
“By leveraging plaintiffs’ copyrighted materials and distributing images (and soon videos) that unmistakably incorporate beloved characters from Disney and Universal, Midjourney is a quintessential copyright free-rider and a bottomless pit of plagiarism,” the studios claim.
Disney and Universal are seeking a preliminary injunction to prevent Midjourney from continuing to copy their works or providing image and video generation services without protective measures against infringement, as well as unspecified damages.
Founded in 2021 by David Holz, Midjourney operates on a subscription model, boasting a revenue of $300 million from its services last year alone.
This isn’t the first instance of Midjourney facing accusations of leveraging artists’ works to train AI systems. Approximately a year ago, a federal judge in California allowed a lawsuit brought by 10 artists against Midjourney, Stability AI, and others to proceed, finding the artists had plausibly alleged that the companies copied and stored their works on their servers, rendering them potentially liable for unauthorized use. That case is ongoing.
This case is part of a larger trend of lawsuits involving authors, media organizations, and record labels against high-tech firms over the utilization of copyrighted materials for AI training.
When asked whether the company sought consent from artists whose works are copyrighted, Holz remarked, “It’s practically impossible to gather 100 million images and trace their origins.” In a submission to the UK government last year, OpenAI stated, “Training today’s leading AI models without the use of copyrighted materials is unfeasible.”
In late 2023, the New York Times filed a lawsuit against OpenAI, the developer of ChatGPT, along with Microsoft (which holds a 49% stake in the startup), for allegedly misusing and regenerating text from its articles. That suit is still pending. Other media outlets, including The Guardian, have negotiated licensing agreements with AI companies to use their archives. Similarly, authors have sued Meta, claiming it used a vast database of pirated books to train the LLaMA AI model, although many of those claims were dismissed.
In June 2024, major record companies filed lawsuits against two AI companies for copyright infringement. Sony Music Entertainment, Universal Music Group Recordings, and Warner Records accused Suno and Udio of improperly using millions of songs to create a system capable of generating derivative music.
The Minion character originates from films produced by Universal Pictures.
Moviestore/Alamy
Disney and Universal have initiated a lawsuit against the AI image generator Midjourney, alleging widespread copyright infringement that enables users to produce images that “explicitly incorporate and mimic well-known Disney and Universal characters.” This lawsuit could mark a significant shift in the ongoing legal discourse surrounding AI-related copyright issues faced by book publishers, news outlets, and other content creators.
The Midjourney tool, which generates images from users’ textual prompts, boasts around 20 million users on its Discord platform.
In the lawsuit, the two film production giants provide examples where Midjourney can generate images strikingly similar to characters it does not own the rights to, like Universal’s Minions and characters from Disney’s The Lion King. They assert that these results stem from the AI being trained on their copyrighted materials, and contend that Midjourney “disregarded” their attempts to resolve these issues before they resorted to legal action.
The complaint states, “Midjourney is a classic copyright-free rider and an endless source of plagiarism.” Midjourney has not yet issued a response to New Scientist‘s request for comment.
The lawsuit is applauded by Ed Newton-Rex, founder of Fairly Trained, a nonprofit that advocates fairer training practices within AI companies. “This is a monumental day for creators globally,” he comments. “Governments have displayed unsettling tendencies toward legalizing intellectual property theft, potentially yielding to the intense lobbying of Big Tech.”
Newton-Rex alleges that Midjourney engineers previously justified their actions on the grounds that the art had become “ossified.” “Fortunately, this absurd defense is unlikely to hold up in court,” he adds.
Legal experts are candid about Midjourney’s likelihood of success. “It’s Disney; so, if you’ll excuse my bluntness, Midjourney is in a precarious position,” remarks Andres Guadamuz at the University of Sussex, UK.
Guadamuz emphasizes that Disney safeguards its intellectual property rarely but effectively, which makes this intervention notable. The film studios took action several months after other entities, such as news publishers, began pursuing AI companies over the alleged unauthorized use of their creations; many of those disputes were resolved through licensing agreements between the AI firms and copyright holders.
“Media conglomerates have been alert to potential infringement. The models have improved to such an extent that they can effortlessly create any character that comes to mind,” states Guadamuz. He believes Disney was biding its time because, “unlike publishers, they’re not simply seeking licenses to survive.”
The involvement of these two media powerhouses signals a pivotal moment at the intersection of AI and copyright, according to Guadamuz. “The fact that they are targeting Midjourney sends a clear message,” he states. Midjourney specializes exclusively in image generation, making it relatively small compared with the major AI corporations. “This acts as a warning to larger entities, urging them to implement stronger protective measures.”
While many major AI companies incorporate image-generating features in their chatbots, they tend to impose considerably stricter limits on users’ ability to produce images featuring copyrighted characters.
Disney, which generated $91 billion in revenue last year, is unlikely to be chasing Midjourney for the money. “This could act as an invitation to negotiate. Since AI is not going away, Disney may be setting the terms for future business interactions,” notes Guadamuz.
The London-based artificial intelligence firm Stability AI argues that the copyright lawsuit initiated by the global photography agency Getty Images poses an “overt threat” to the generative AI industry.
Stability AI contested Getty’s claims in the London High Court on Monday, which center on issues of copyright and trademark infringement regarding its extensive collection of photographic works.
Stability’s technology enables users to create images based on text prompts; among its directors is James Cameron, the acclaimed director of Avatar and Titanic. Getty, for its part, criticized those who trained Stability’s systems as “tech nerds” who disregarded the ramifications of their technological advancements.
Stability retorted by asserting that Getty is pursuing a “fantasy” legal path, investing around £10 million to challenge a technology it views as an “existential threat” to their operations.
Getty syndicates the work of around 50,000 photographers to clients in more than 200 countries. It alleges that Stability trained its image generation models on an extensive database of copyrighted photographs, and that as a result a program named Stable Diffusion continues to produce images bearing Getty Images watermarks. Getty maintains that Stability was “completely indifferent” to the content of its training data, asserting that the system associates Getty’s trademarks with pornographic imagery and generates “AI garbage.”
Getty’s legal representatives noted that the contention over the unauthorized utilization of thousands of photographs, including well-known images of celebrities, politicians, and news events, “is not a conflict between creativity and technology where a victory for Getty Images spells the end for AI.”
They further stated: “The issue arises when AI companies like Stability wish to use these materials without compensation.”
Lindsay Lane KC, representing Getty Images, commented, “These were a group of tech enthusiasts enthusiastic about AI, yet indifferent to the challenges and dangers it poses.”
In her court filing on Monday, Getty contended that Stability had trained an image generation model using a database that included child sexual abuse material.
Stability is contesting Getty’s claims overall, with its attorney characterizing the allegations regarding child sexual abuse material as “abhorrent.”
A spokesperson for Stability AI stated that the company is dedicated to ensuring its technology is not misused. It emphasized the implementation of strong safeguards “to enhance safety standards and protect against malicious actors.”
This situation arises in the context of a broader movement among artists, writers, and musicians—including figures like Elton John and Dua Lipa—who are advocating for copyright protection against alleged infringement by AI-generated content that allows users to produce new images, music, and text.
The UK Parliament is embroiled in a related dispute, with the government proposing that copyright holders would have to opt out if they do not want their material used for training algorithms and generating AI content.
“Of course, Getty Images acknowledges that the entire AI sector can be a formidable force, but that does not justify permitting the AI models they are developing to blatantly infringe on their intellectual property rights,” Lane stated.
The trial is expected to span several weeks and will address, in part, the use of images by renowned photographers. This includes a photograph of former Liverpool soccer manager Jürgen Klopp, captured by award-winning British sports photographer Andrew Livesey, a photo of the Chicago Cubs baseball team by American sports photographer Gregory Shams, and images of actor and musician Donald Glover by Alberto Rodriguez, as well as photographs of actor Eric Dane and film director Christopher Nolan.
The case brings forth 78,000 pages of evidence, with AI experts summoned to testify from the University of California, Berkeley, and the University of Freiburg in Germany.
Defiant peers have presented the government with a significant challenge: give artists copyright protections against artificial intelligence companies, or risk losing essential legislation.
The government has encountered its fifth defeat in the House of Lords over a controversial initiative that would permit AI companies to train their models using copyrighted materials.
Peers voted 221-116 on Wednesday to insist on amendments that would enhance transparency regarding the materials used by AI companies to train their models.
At the awards event following the vote, Elton John emphasized that copyright protection is an “existential issue” for artists and called on the government to “do the right thing.”
He remarked: “We will not let the government forget their promise to support the creative industry. We will not retreat, and we will not go quietly. This is just the beginning.”
Wednesday night’s vote highlights the ongoing conflict between the Commons and the Lords over a data bill utilized by campaigners to challenge the government’s proposed copyright reforms.
Leading the opposition to the Lords’ changes is crossbench peer and film director Beeban Kidron, whose amendments consistently receive support from the upper chamber.
The data bill faces the likelihood of being shelved unless the Commons agrees to Kidron’s amendments or presents alternative solutions.
Maggie Jones, the minister for digital economy and online safety, urged her colleagues to vote against the Kidron amendment after the government proposed last-minute concessions to avoid another setback.
Before the vote, Jones stated that her colleagues “must decide whether to jeopardize the entire bill,” and claimed that voting for Kidron’s amendment to disrupt the legislation would be “unprecedented,” given that the bill does not undermine copyright law and addresses important issues like combating sexually explicit deepfake images.
Kidron told peers: “This is the last chance to urge the government to implement meaningful solutions,” pressing the minister to take concrete steps to ensure AI companies adhere to copyright regulations.
“It is unfair and irrational for the creative industry to suffer at the hands of those who take their jobs and assets. It’s not neutral.”
“We have repeatedly asked both houses: What is the government doing to protect creative jobs from being stolen? There has been no response.”
Several peers rejected the suggestion that the Lords’ actions were unprecedented, arguing that the government itself was breaking precedent by refusing to compromise. Tim Clement-Jones, the Liberal Democrat spokesman for the digital economy, voiced strong support for Kidron’s amendments.
Beeban Kidron asked why the government is neglecting the interests of the UK while handing over the country’s wealth and labor. Photo: Curlcoat/Getty
The Lords’ amendments place the data bill in a state of “double insistence,” meaning the two Houses cannot agree on the legislation. In that situation the bill will be dropped unless ministers accept the rebel revisions or offer alternative changes through parliamentary procedure. Although it is rare for a bill to fail this way, it has happened before, notably in the 1997-98 session with the European Parliamentary Elections Bill.
By parliamentary convention the Commons, as the elected House, holds the stronger position, and in the rare cases where the Lords refuse to concede, ministers can invoke the Parliament Acts to enact the bill in the following session, which would significantly delay the legislation.
As a concession to peers on Tuesday night, the government pledged to publish additional technical reports on the future of AI and copyright regulation within nine months, rather than the previously proposed twelve.
“Many peers have said they do not feel heard during ping pong,” Jones noted in her letter, referring to the process by which a bill passes back and forth between the two Houses.
Jones pointed out that by updating the Data Protection Act, the data bill is projected to deliver £10 billion in economic benefits, enhance online safety, and strengthen powers to require social media companies to retain data following a child’s death.
Kidron asserted: “It would be wise for the government to accept the amendment or propose something meaningful in its place. They have failed to listen to the Lords, to the creative sector, and even to their own supporters.”
Under the proposed government regulations, AI companies would be authorized to train their models using copyrighted works unless the owners specifically opt out. This plan has garnered heavy criticism from creators and publishers, including renowned artists such as Paul McCartney and Tom Stoppard.
Technology secretary Peter Kyle has expressed regret over the decision to present the opt-out system as a “preferred option” in the consultation on changes to copyright law, though there appears to be resistance within Downing Street to making further concessions.
Sir Elton John labeled the UK government an “absolute loser” over its proposal that would enable tech firms to utilize copyrighted material without authorization.
The renowned singer-songwriter described the alteration of copyright laws in favor of artificial intelligence companies as a “crime.”
In an interview on BBC One’s Sunday with Laura Kuenssberg, John said the government “has robbed the youth of their legacy and income,” adding, “I consider it a criminal act. The government is just an absolute loser, and I’m extremely upset about it.”
John called technology secretary Peter Kyle “a little idiot,” saying he would take legal action if the government does not revise its copyright plans. Kyle has recently faced criticism for being too close to Big Tech, following reports of frequent meetings with companies including Google, Amazon, Apple, and Meta since Labour’s election victory last July.
John spoke ahead of a vote on a proposal from crossbench peer Beeban Kidron that would require AI companies to disclose their use of copyrighted material.
A similar amendment passed last week is likely to be stripped out by the government in the Commons, a parliamentary standoff that could jeopardize the data bill.
“I feel like a criminal in that I am profoundly betrayed. The House of Lords’ vote was two to one in our favor. Yet the government appears to think, ‘Well, old man… I can manage it as I wish,’” John stated.
The government is currently reviewing proposals that would permit AI companies to train their models (a technology that underpins products like chatbots) using copyrighted work without obtaining permission. A source close to Kyle indicated that this option is no longer favored in consultations, but it remains under consideration.
Alternative options include maintaining the status quo, requiring AI companies to acquire licenses for using copyrighted content, or allowing AI companies to use copyrighted works without copyright holders having any say.
A government spokesman remarked: “We will not entertain copyright modifications unless we are fully assured they benefit creators.” The spokesman added that the government’s recent commitment to conducting an economic impact assessment of the proposal will examine “a broad array of issues and options across all aspects of the discussion.”
Hello, and welcome to TechScape! Radio and television presenters sharpen their writing by considering how it will be delivered aloud. I’m your host, Blake Montgomery. In today’s tech news: debate over labor automation in the US healthcare sector, and an escalating drone conflict between India and Pakistan, both armed with nuclear weapons. But first, the evolving battle over AI and copyright in the UK and the US.
“A Brazen, Unprecedented Power Grab”
The UK is embroiled in an intense debate over compensating artists whose copyrighted works are used to develop generative AI. The House of Lords convened on Monday to vote on whether tech companies must disclose when they use copyrighted materials without permission.
Insights from my colleagues Dan Milmo and Rafael Boyd:
The UK government has been dealt a blow in the House of Lords over its attempt to let AI firms use copyrighted works without consent.
Despite government objections, an amendment to the data bill urging AI companies to disclose which copyrighted content is being utilized received support from peers.
While the proposal is still under consultation, critics are using the data bill as a vehicle to voice their disapproval.
The government’s preferred proposal would permit AI companies to use copyrighted works without permission unless copyright holders explicitly opt out, a mechanism critics denounce as impractical.
Read the complete article on Monday’s vote here.
Conversely, in the US, the debate has taken a more chaotic turn. Over the weekend, Donald Trump dismissed the head of the US Copyright Office, Shira Perlmutter, CBS News reported, after her office published a report questioning the growing demands of AI firms to bypass existing copyright law.
New York Democratic congressman Joe Morelle pointed to Trump’s ally Elon Musk as a driving force behind the dismissal, saying Perlmutter had declined to rubber-stamp Musk’s efforts to exploit copyrighted works for training AI models.
The abrupt termination of the copyright chief brings to mind the tale of the Gordian knot. Legend has it that Alexander the Great encountered a complex knot tying a cart to a pole. Numerous attempts to untie it had failed, but Alexander solved the puzzle with a single sword stroke. The story is usually told to illustrate how unconventional thinking leads to triumph, but Alexander destroyed the knot rather than untying it, leaving the original problem of securing the cart unsolved. Perhaps that is the truer lesson, but that’s a topic for another time.
While Trump may have cut through the thorny legal questions the Copyright Office raised, the vacuum at the top means influential players will likely bend copyright rules to their advantage, which may be the president’s intention. Well-capitalized AI firms appear poised to dominate copyright litigation brought by artists seeking fair compensation for their creativity. The firms’ alliance with Trump signals a shift toward a more favorable regulatory climate, as the dismissal of the copyright chief illustrates, and numerous lawsuits allege that AI companies have quietly used copyrighted materials without permission.
Trump Offers Blockchain Access
Donald Trump at the White House in Washington, DC on Monday. Photo: Nathan Howard/Reuters
My colleague Nick Robins-Early covers the contest in which Trump promised direct access to investors in his cryptocurrency.
On Monday, the top 220 investors in a Donald Trump-backed cryptocurrency were granted exclusive dinner invitations with the president as a reward for their financial contributions, the culmination of months of promotion and a source of concern that he is leveraging political power to benefit his family’s business while exposing himself to foreign interests.
The cryptocurrency, dubbed $TRUMP, launched in mid-January and has reached a market cap exceeding $2 billion on heavy investor interest. Most of the tokens are held by companies associated with Trump’s family, Reuters reported.
“Congratulations! If you’re among the top 220, expect communication within the next 24 hours. Please check your inbox (including spam folders) for your invitation to dine with President Trump,” his website stated on Monday. “We look forward to seeing you at the gala dinner in Washington, DC on May 22nd.”
Democrats, ethics watchdogs, and the SEC have raised concerns about Trump’s crypto ventures, including allegations of corruption. The dinner contest raises its own ethical issues, effectively auctioning direct access to the president to the highest bidders.
Drones Surge along the India-Pakistan Border
Residents inspect damaged homes in Pakistan-controlled Neelam valley in Kashmir on Monday. Photo: Muzammil Ahmed/AFP/Getty Images
Though India and Pakistan have achieved a fragile ceasefire, the recent four-day conflict between these rivals exemplifies an escalating trend.
The New York Times reports that India has accused Pakistan of deploying Turkish-made drones in its assaults, alleging that Pakistan mobilized 300-400 drones to attack 36 sites on the night of May 8th. Pakistan, for its part, said it shot down approximately 70 drones launched from India.
The term “drone” covers two distinct things: small quadcopters operated remotely and larger semi-autonomous aircraft managed from military command centers. The single English word obscures the difference. For countries like India, Pakistan, and Ukraine, the smaller unmanned aircraft have become significant weapons.
The Ukraine-Russia war shows how rapidly drone use has expanded. Explosive first-person-view quadcopters have wreaked havoc on the battlefield, and drones have featured in landmark assaults, including the attack on the Kremlin in May 2023.
Can Automation Solve the US Healthcare Worker Shortage?
Nurses operating a new automated dose assembly machine in Columbus, Ohio. Photo: Doral Chenoweth/The Columbus Dispatch/USA Today Network
One of the major questions of our era is whether machines will largely replace human labor. Recently, the Guardian covered Zing, a robot designed to dispense methadone, a medication for opioid addiction, demand for which has surged in the US. The story raises a critical question: where is the line between automation that genuinely assists workers and a profit-driven preference for robotic over human labor?
Click here for all stories on robotic medication delivery.
Walgreens has announced an expansion of its micro-fulfillment centers, hubs that use robots to dispense prescriptions and package medications for chronic illnesses. As reported by CNBC, these automated centers process around 16 million prescriptions monthly, accounting for 40% of Walgreens’ prescriptions. The company aims to increase the number of stores served by the centers to 5,000 by year-end, up from 4,800 in February. Walgreens says the shift to automation, begun in 2021, has already saved it $500 million over four years.
Pharmacy technicians are grappling with issues similar to those faced by nurses distributing methadone (including low wages, high pressure, and turnover), yet on a much larger scale. Walgreens operates approximately 12,500 stores across the US, Europe, and Latin America, with a valuation near $9.7 billion and a workforce of 312,000.
In 2023, Walgreens pharmacy staff staged nationwide strikes to protest working conditions; the central issues were chronic understaffing and burnout among those who remained. They branded the protest “Pharmageddon.”
Although Walgreens may cut pharmacy job openings as it automates and outsources functions to micro-fulfillment centers, many of those positions were likely never filled in the first place, creating hazardous working conditions. Automation could help address the workforce shortage, mirroring potential developments in methadone clinics nationwide.
Walgreens says automation is easing the strain on workers, giving staff more time for personal interaction with patients, and reports a 40% rise in vaccinations facilitated by automated prescription fulfillment.
Learn more about labor automation in another sector here.
Numerous prominent figures and organizations from the UK’s creative sector, such as Coldplay, Paul McCartney, Dua Lipa, Ian McKellen, and the Royal Shakespeare Company, have called on the Prime Minister to safeguard artists’ copyright rather than cater to Big Tech’s interests.
In an open letter addressed to Keir Starmer, many notable artists express that their creative livelihoods are at risk. This concern arises from ongoing discussions regarding a government initiative that would permit artificial intelligence companies to utilize copyrighted works without consent.
The letter characterizes copyright as the “lifeline” of their profession, cautioning that the proposed legislative change could jeopardize the UK’s status as a creative powerhouse.
“Catering to a select few dominant foreign tech firms risks undermining our growth potential, as it threatens our future income, our position as a creative leader, and diminishes the value and legal standards we hold dear,” the letter asserts.
The letter urges the government to accept amendments to the data bill proposed by crossbench peer and prominent advocate Beeban Kidron. Kidron, who spearheaded the artists’ letter, is pushing for changes that would require AI firms to disclose the copyrighted works they incorporate into their models.
A united call to lawmakers across the political spectrum in both houses is made to push for reform: “We urge you to vote in favor of the UK’s creative sector. Supporting our creators is crucial for future generations. Our creations are not for your appropriation.”
With representation spanning music, theater, film, literature, art, and media, the more than 400 signatories include Elton John, Kazuo Ishiguro, Annie Lennox, Rachel Whiteread, Jeanette Winterson, the National Theatre, and the News Media Association.
The Kidron amendment is due to be voted on in the House of Lords on Monday, but the government has already declared its opposition, asserting that the current consultation process is the proper forum for discussing changes to copyright law aimed at protecting creators’ rights.
Under the government’s proposals, AI companies would be permitted to use copyrighted materials without authorization unless copyright holders actively “opt out,” signaling their refusal to allow their work to be used without compensation.
Giles Martin, a music producer and son of Beatles producer George Martin, mentioned to the Guardian that the opt-out proposal may be impractical for emerging artists.
“When Paul McCartney wrote ‘Yesterday’, his first thought was about ‘how to record this,’ not ‘how to prevent people from stealing it,'” Martin remarked.
Kidron pointed out that the letter’s signatories are advocating to secure a positive future for the upcoming generation of creators and innovators.
Supporters of the Kidron Amendment argue that this change will ensure that creatives receive fair compensation for the use of their work in training AI models through licensing agreements.
Generative AI models, the technology powering tools like ChatGPT and the music generator Suno, require training on extensive data to produce their outputs. The primary sources of this data are online platforms, including Wikipedia, YouTube, newspaper articles, and digital book archives.
The government has introduced an amendment to the data bill that will commit to conducting economic impact assessments regarding the proposal. A source close to technology secretary Peter Kyle indicated to the Guardian that the opt-out system is no longer his preferred approach.
The consultation is weighing four options. Besides the “opt-out” scenario, the alternatives are maintaining the status quo, requiring AI companies to obtain licenses for using copyrighted works, and allowing AI firms to use such works with no opt-out for creators at all.
A spokesperson for the government stated: “Uncertainty surrounding the copyright framework is hindering the growth of the AI and creative sectors. This cannot continue, but it’s evident that changes will not be considered unless they thoroughly benefit creators.”
Ministers have proposed concessions on the copyright changes to address the concerns of artists and creators ahead of a crucial Commons vote next week, according to the Guardian.
The government is dedicated to conducting economic impact assessments for the proposed copyright changes and releasing reports on matters like data accessibility for AI developers.
These concessions aim to ease worries among MPs and the creative sector about the government’s planned reforms to copyright regulation.
Prominent artists such as Paul McCartney and Tom Stoppard have rallied behind a high-profile campaign opposing the changes. Elton John has warned that the reforms would undermine the traditional copyright laws that safeguard artists’ livelihoods.
Ministers intend to permit AI companies to use copyrighted works for model training without permission unless the copyright holder opts out. Creatives argue this favors AI firms and say they want companies held to existing copyright law.
The government’s pledge will be reflected in amendments to the data bill, which campaigners have used as a vehicle to oppose the proposed changes and which is scheduled to be debated in the Commons next Wednesday.
The initiative has already drawn criticism. Crossbench peer and campaigner Beeban Kidron said the ministers’ amendments would not “meet the moment,” and the Liberal Democrats indicated they would propose their own revisions to compel AI companies to comply with current copyright law.
British composer Ed Newton-Rex, a prominent opponent of the government’s proposal, argued there is “extensive evidence” that the changes “are detrimental for creators,” adding that no impact assessment was needed to establish this.
Ahead of next week’s vote, Science and Technology Secretary Peter Kyle sought to establish rapport within the creative community.
During a meeting with music industry stakeholders this week, Kyle acknowledged that his focus on engaging with the tech sector has frustrated creatives. He faced backlash after holding over 20 meetings with tech representatives but none with those from the creative sector.
Kyle stirred further criticism by suggesting at a conference that AI companies might relocate to countries such as Saudi Arabia unless the UK revamps its copyright framework, a claim that was not raised at a Downing Street meeting with MPs this week.
Government insiders assert that AI firms are already based abroad and emphasize that if the UK does not reform its laws, creatives may lack avenues to challenge the exploitation of materials by overseas companies.
According to government sources, ministers are not wedded to an opt-out system and are taking “a much broader and more open-minded perspective.”
However, Labour lawmakers contend that ministers have “not proven any substantial job growth in return” and are yielding to American interests, criticizing the plan as, at best, outsourcing and, at worst, outright exploitation.
Kidron, who has successfully amended the Lords’ data bill while opposing the government’s reforms, remarked, “The moment is not right for pushing the issue into the long grass with reports and reviews.”
“I ask the government why they neglect to protect UK property rights, fail to recognize the growth potential of UK creative industries, and ignore British AI companies that express concerns over favoritism towards firms based in China,” she stated.
James Frith, a Labour member of the Culture, Media and Sport select committee who led discussions on the matter this month, said: “The mission of the creative sector cannot equate to submission to the tech industry.”
Kidron’s amendments, which would make AI companies accountable under UK copyright law regardless of where they are based, were removed in the Commons, but the Liberal Democrats plan to reintroduce them next week.
The Liberal Democrats’ proposal includes a requirement for AI model developers (the technology that supports AI systems like chatbots) to adhere to UK copyright laws and clarify the copyrighted materials incorporated during development.
The Liberal Democrat amendment also demands transparency regarding the web crawlers used by AI companies, referring to the technology that gathers data from the Internet for AI models.
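To make the mechanics concrete, here is a minimal sketch, in Python using only the standard library, of what a “well-behaved” crawler of the kind the amendment targets looks like: it announces itself with a User-Agent string and checks a site’s robots.txt before downloading anything. The crawler name and URL are hypothetical, not any company’s actual agent; the point is that transparency rules would, in effect, require AI firms to disclose which such agents they operate.

```python
# Minimal sketch of a "well-behaved" crawler (illustrative only).
from urllib import request, robotparser
from urllib.parse import urlsplit

USER_AGENT = "ExampleAIBot/1.0"  # hypothetical crawler name, for illustration

def fetch_if_allowed(url):
    """Fetch a page only if the site's robots.txt permits this user agent."""
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # download and parse the site's robots.txt
    if not rp.can_fetch(USER_AGENT, url):
        return None  # the site has opted out; a transparent crawler stops here
    req = request.Request(url, headers={"User-Agent": USER_AGENT})
    with request.urlopen(req) as resp:
        return resp.read()

page = fetch_if_allowed("https://example.com/article")
print("fetched" if page is not None else "blocked by robots.txt")
```

The disclosure the amendment demands matters because this whole arrangement is voluntary: a crawler that omits the robots.txt check, or identifies itself under an unpublished name, leaves rights holders no way to know their work was taken.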
Victoria Collins, the Liberal Democrats’ technology spokesperson, stated:
“Next week in the Commons, we will work to prevent copyright law from being diluted for AI and urge lawmakers across Parliament to stand with us in support of UK creators.”
Twelve US copyright lawsuits against OpenAI and Microsoft, brought by authors and news outlets, have been consolidated in New York.
According to a transfer order from the US Judicial Panel on Multidistrict Litigation, centralization can help coordinate discovery, streamline pretrial litigation, and prevent inconsistent rulings.
Prominent authors including Ta-Nehisi Coates, Michael Chabon, Junot Díaz, and comedian Sarah Silverman filed their cases in California, but the cases will now move to New York to join suits from news outlets such as The New York Times. Other authors involved in the lawsuits include John Grisham, George Saunders, Jonathan Franzen, and Jodi Picoult.
Although most plaintiffs opposed consolidation, the transfer order covers common factual questions arising from allegations that OpenAI and Microsoft used copyrighted works without consent to train the large language models (LLMs) behind AI products such as OpenAI’s ChatGPT and Microsoft’s Copilot.
OpenAI had proposed consolidating the cases in Northern California, but the panel moved them to the Southern District of New York for the convenience of parties and witnesses and to ensure the fair and efficient conduct of the litigation.
The tech companies argue that using copyrighted works to train AI falls under the doctrine of “fair use,” but many plaintiffs, including authors and news outlets, disagree.
An OpenAI spokesperson welcomed the development, saying the company trains its models on publicly available data to support innovation. A lawyer representing the Daily News, meanwhile, said they look forward to proving in court that Microsoft and OpenAI infringed their copyrights.
Some of the authors suing OpenAI have also filed suit against Meta for copyright infringement in AI model training. Court filings in January alleged that Meta CEO Mark Zuckerberg approved the use of copyrighted materials in AI training.
Amazon recently announced a new Kindle feature called “Recaps” that uses AI to generate summaries of books for readers. While the company sees it as a convenience for readers, some users have raised concerns about the accuracy of AI-generated summaries.
The UK government, meanwhile, is attempting to address concerns from peers and Labour MPs about its copyright proposals, and ministers are being urged to assess the economic impact of their AI plans.
A group of more than 30 British performing arts leaders, including executives from the National Theatre, Opera North, and the Royal Albert Hall, have expressed concern over the government’s proposal to allow AI companies to use artists’ work without permission.
In a joint statement, they emphasized that performing arts organizations rely on a delicate balance of freelancers who depend on copyright to sustain their livelihoods. They urged the government to uphold the “moral and economic rights” of the creative community encompassing music, dance, drama, and opera.
Signatories to the statement include leaders of institutions such as Sadler’s Wells, the Royal Shakespeare Company, the City of Birmingham Symphony Orchestra, and Leeds Playhouse.
They expressed concern over the government’s plan to diminish creative copyright by granting exemptions to AI companies. The statement highlighted the reliance of highly skilled creative workers on copyright and the potential negative impact on their livelihoods.
While embracing technological advancements, they warned that the government’s plans could hinder their participation in AI development. They called for automatic rights for creative professionals and criticized proposals that require copyright holders to opt out.
Additionally, they demanded transparency from AI companies about the copyrighted material used in their models and how it was obtained, noting that the government’s copyright consultation does propose some transparency requirements.
The statement emphasized the importance of music, drama, dance, and opera to human joy and highlighted the backlash against the government’s proposals from prominent figures in the creative industry.
The controversy revolves around AI models that power tools like the chatbot ChatGPT, which are trained on vast amounts of data from the open web. A government spokesperson defended the proposed approach as balancing the interests of AI developers and rights holders.
OpenAI, the artificial intelligence company behind ChatGPT, has introduced its video generation tool in the UK, highlighting the growing tension between the tech sector and the creative industry over copyright.
Film director Beeban Kidron spoke out about the release of Sora in the UK, noting its implications for the ongoing copyright debate.
OpenAI, based in San Francisco, has made Sora accessible to UK users who subscribe to ChatGPT. The tool startled filmmakers when it was previewed last year: TV mogul Tyler Perry halted a planned studio expansion out of concern that it could replace physical sets and locations. Sora first launched in the US in December.
Users can prompt Sora to generate videos with simple requests, such as a scene of people walking through “beautiful snowy Tokyo City.”
OpenAI has now introduced Sora in the UK and mainland Europe, where it was also released on Friday, and artists are already using the tool. One user, Josephine Miller, a 25-year-old British digital artist, created a video featuring a model adorned in bioluminescent fauna, praising the tool for opening up opportunities for young creatives.
'Biolume': Josephine Miller uses OpenAI's Sora to create stunning footage – video
Despite the launch of Sora, Kidron emphasized the significance of the ongoing UK debate over copyright and AI, particularly the government proposals that would permit AI companies to train their models on copyrighted content.
Kidron raised concerns about whether copyrighted material was used ethically to train Sora, pointing to potential violations of platforms’ terms and conditions if unauthorized content was scraped. She stressed the importance of upholding copyright law in the development of AI technologies.
Recent statements from YouTube indicated that using its copyrighted material without proper licensing to train AI models like Sora could lead to legal repercussions. Questions remain about the origin and legality of the datasets used to train these tools.
The Guardian reported that policymakers are exploring options for offering copyright concessions to certain creative sectors, further highlighting the complex interplay between AI, technology, and copyright laws.
Sora allows users to craft videos ranging from 5 to 20 seconds, with an option to create longer clips, and to choose aesthetic styles such as “film noir” and “balloon world.”
Two cross-party committees of MPs are urging the government to prioritize fair rewards for creators as it seeks to make it easier to train artificial intelligence models.
Lawmakers are advocating for more transparency in the data used to train generative AI models and urging the government not to implement plans that require creators to opt out of using such data.
The government’s proposed solution to the tension between AI and copyright law is a “text and data mining” exception allowing AI companies to train models on copyrighted work, with creators able to opt out under a “rights reservation” system.
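How would a creator actually “reserve” their rights? The consultation does not fix a mechanism, but the most widely deployed opt-out convention today is the robots.txt file. As an illustration only (the file below is hypothetical, though the user-agent tokens are ones the respective companies have published), a site owner wishing to block known AI training crawlers while still allowing ordinary indexing might serve:

```
# Illustrative robots.txt: reserving rights against AI training crawls
User-agent: GPTBot             # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended    # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot              # Common Crawl, a common source of training data
Disallow: /

User-agent: *                  # everyone else, including search engines
Allow: /
```

The impracticality argument critics make is visible even in this sketch: the burden falls on every rights holder to know every crawler’s token, and any crawler that does not announce itself is never caught by the list.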
Caroline Dinenage, chair of the Culture, Media and Sport Committee, expressed concern over the creative industry’s response to the proposal, highlighting the threat that unauthorized use of their work poses to artists’ hard-earned success.
She emphasized the importance of fair treatment for creators and the need for transparency in data used to train AI models to ensure proper rewards for their work.
The Culture, Media and Sport Committee and the Science, Innovation and Technology Committee responded to the government’s consultation on AI and copyright after a joint evidence session with representatives from AI startups and the creative industries.
Their letter to ministers calls for greater government transparency about training data, protection for copyright holders who opt out, and measures to empower consumers to make informed choices about AI models.
Failure to address these issues could disproportionately impact smaller creators and journalists operating under financial constraints, according to the letter.
Concerns among celebrities and the creative industry about the government’s AI proposals have spilled into protest, with musicians releasing a silent album in objection.
The letter also highlighted the need for transparency in training data for AI models, citing examples from the EU and California which have introduced requirements for detailed technical records on training data.
The government is considering revenue-sharing models for AI developers to address copyright concerns and is urged to conduct full impact assessments on proposed options.
The letter cautioned against AI developers moving to jurisdictions with more lenient rules and emphasized the need for compliance, enforcement, and remedies for copyright issues.
The architects of EU copyright law assert that legislation is needed to safeguard writers, musicians, and creatives left vulnerable by an “irresponsible” legal gap in the EU’s artificial intelligence legislation.
The intervention came as 15 cultural organizations wrote to the European Commission about a draft rule under the AI Act, cautioning that copyright protections were being compromised and a concerning legal loophole exploited.
Axel Voss, a member of the European Parliament, emphasized that the 2019 copyright directive was not designed to address generative AI models, raising concerns about the unintended consequences of the law.
The introduction of ChatGPT, an AI chatbot capable of generating content from essays to jokes, has underscored the urgent need for copyright protections in light of rapid advances in AI and their impact on creative works.
Issues arising from the EU AI legislation negotiations have highlighted the challenges of securing strong copyright safeguards to protect creative content, with concerns surrounding the legal gap that favors Big Tech over European creatives.
The debate around AI and copyright law has intensified as generative AI models like ChatGPT and DALL-E become more widely used, leading to legal disputes over copyright infringement and the ethical implications of using AI to produce creative content.
The lack of enforceable rights for authors and creators in the AI law framework has raised alarms among cultural organizations and industry stakeholders, prompting calls for greater transparency and accountability in the use of AI technologies.
As the European Commission considers the future of AI regulation and its implications for copyright protection, the need for robust measures to safeguard the rights of creatives and uphold the integrity of their work remains a top priority.
Microsoft has responded to the copyright infringement lawsuit filed by The New York Times over the use of its content to train generative artificial intelligence, calling the claims a false narrative of “doomsday futurology” and criticizing the lawsuit as short-sighted, comparing it to Hollywood’s resistance to the VCR.
In a motion to dismiss parts of the lawsuit, Microsoft addressed the allegation that The New York Times’ content was given “particular weight” in training, pointed to its significant investment in OpenAI, and ridiculed the newspaper’s claims.
The lawsuit, which could have far-reaching implications for artificial intelligence and news content production, accuses Microsoft, as the largest investor in OpenAI, of using copyrighted content from The New York Times to develop AI products that threaten the newspaper’s ability to provide its services.
Microsoft argued that the lawsuit is reminiscent of Hollywood’s opposition to VCRs in the past and emphasized that the content used to train the language models does not replace the market for the original work but rather educates the models.
OpenAI, a co-defendant in the lawsuit, has asked the court to dismiss certain claims against it, asserting that products such as ChatGPT are not intended to replace subscriptions to The New York Times and are not used that way in the real world.
Following Microsoft’s legal response, The New York Times pushed back against the comparison to 1980s home-taping technology, stating that Microsoft collaborated with OpenAI to copy copyrighted works without permission.
The dispute is part of a larger legal battle over copyright and AI, and over the technology’s capacity to generate misleading information; recent incidents, such as Google’s AI producing historically inaccurate images, have underscored the stakes.
OpenAI has faced criticism for its training methods and refusal to disclose training data, including the use of copyrighted works. The company argues that limiting training data to public domain content would hinder the development of AI systems that meet current needs.
OpenAI CEO Sam Altman expressed surprise at the Times lawsuit, stating that the AI models do not rely on specific publisher data for training and that the Times’ content represented only a small portion of the overall text corpus used.
Lawsuits have been brought against OpenAI and Microsoft by news publishers alleging that their generative artificial intelligence products violate copyright law by illegally using journalists’ copyrighted works. The Intercept, Raw Story, and AlterNet filed suit in federal court in Manhattan, seeking compensation for the alleged infringement.
Media outlets claim that OpenAI and Microsoft plagiarized copyrighted articles to develop ChatGPT, a prominent generative AI tool. They argue that ChatGPT ignores copyright, lacks proper attribution, and fails to alert users when using journalists’ copyrighted work to generate responses.
Raw Story and AlterNet CEO John Byrne stated, “Raw Story believes that news organizations must challenge OpenAI for breaking copyright laws and profiting from journalists’ hard work.” They emphasized the importance of diverse news outlets and the negative impact of unchecked violations on the industry.
The Intercept’s lawsuit names OpenAI and Microsoft as defendants, while the joint lawsuit by Raw Story and AlterNet focuses solely on OpenAI. The complaints are similar, with all three media outlets represented by the law firm Loevy & Loevy.
Byrne clarified that the lawsuits from Raw Story and AlterNet do not involve Microsoft directly but stem from a partnership with MSN. Both OpenAI and Microsoft have yet to comment on the allegations.
The lawsuits accuse the defendants of using copyrighted material to train ChatGPT without proper attribution, violating the Digital Millennium Copyright Act. The legal action is part of a series of lawsuits against OpenAI for alleged copyright infringement.
Concerns in the media industry about generative AI competing with traditional publishers have led to a wave of legal battles. The fear is that AI-generated content will erode advertising revenue and undermine the quality of online news.
While some news organizations have sued OpenAI, others like Axel Springer have opted to collaborate by providing access to copyrighted material in exchange for financial rewards. The lawsuits seek damages and profits, with the New York Times lawsuit aiming for significant monetary compensation.