Meta Prevails in AI Copyright Lawsuits as US Ruling Favors Company Over Authors

Mark Zuckerberg’s Meta has secured judicial backing this week in a copyright lawsuit brought by a group of authors, marking a second legal victory for the American artificial intelligence industry.

Prominent authors, including Sarah Silverman and Ta-Nehisi Coates, claimed that the owner of Facebook used their books without authorization to train its AI systems, thereby violating copyright law.

This ruling comes on the heels of a decision affirming that another major AI player, Anthropic, did not infringe upon the authors’ copyrights.

In his ruling on the Meta case, US District Judge Vince Chhabria in San Francisco stated that the authors had failed to present adequate evidence that the company’s AI would harm the market for their works, which is required to establish unlawful infringement under US copyright law.

However, the judgment offered some encouragement to American creators who contended that training AI models without consent was unlawful.

Chhabria noted that using copyrighted material without permission for AI training is illegal in “many situations,” contrasting with another federal judge in San Francisco who recently concluded in a separate case that Anthropic’s AI training constituted “fair use” of copyrighted works.

The fair use doctrine permits the use of copyrighted works under certain conditions without the copyright holder’s permission, and it serves as a vital defense for tech firms.

“This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Chhabria wrote. “It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”

Anthropic is also set to face further legal scrutiny this year after the judge in its case determined that its copying of more than 7 million pirated books into a central library infringed the authors’ copyrights and was not fair use.

A representative for Boies Schiller Flexner, the law firm representing the authors against Meta, expressed disagreement with the judge’s decision to rule for Meta despite the “undisputed record” of the company’s “historically unprecedented” copyright infringement.

A spokesperson for Meta stated that the company valued the decision and characterized fair use as a “critical legal framework” for developing “transformative” AI technology.

In 2023, the authors filed a lawsuit against Meta, asserting that the company used pirated versions of their books to train its Llama AI models without consent or compensation.

Copyright disputes are placing AI firms in opposition to publishers and creative sectors on both sides of the Atlantic. The tension arises because generative AI models, which form the foundation of powerful tools such as the chatbot ChatGPT, require extensive datasets for training, much of which consists of copyrighted material.


This lawsuit is part of a series of copyright cases filed by authors, media organizations, and other copyright holders against OpenAI, Microsoft, and companies like Anthropic over AI training.

AI enterprises claim they are fairly using copyrighted materials to develop systems that create new and innovative content, while asserting that imposing copyright fees on them could threaten the burgeoning AI sector.

Copyright holders maintain that AI firms are unlawfully copying their works and generating competing content that threatens their livelihoods. Chhabria expressed sympathy for this argument during a hearing in May and reiterated it on Wednesday.

The judge remarked that generative AI could inundate the market with endless images, songs, articles, and books, requiring only a fraction of the time and creativity involved in traditional creation.

“Consequently, by training generative AI models with copyrighted works, companies frequently produce outputs that significantly undermine the market for those original works, thereby greatly diminishing the incentives for humans to create in the conventional manner,” stated Chhabria.

Source: www.theguardian.com

US Authorities Reportedly Considering Breaking Up Google After Ruling of Illegal Monopoly

The Justice Department is weighing options including a breakup of Alphabet Inc.’s Google, which has a reported market capitalization of approximately $2tn, after a court ruled that the tech giant illegally monopolized the online search market, according to reports in the New York Times and Bloomberg News.

According to reports, one of the potential remedies frequently discussed by Justice Department lawyers is the sale of the Android operating system.

Authorities are also reportedly exploring options such as forcing the sale of Google’s search advertising program, AdWords, and its Chrome web browser.

A spokesman for the Justice Department stated that they are assessing the court’s decision and will determine the appropriate next steps in compliance with the court’s directives and applicable antitrust laws.

No decision has yet been made, the spokesman said, and Google declined to comment. Google intends to appeal the ruling and also faces a separate antitrust trial, brought by the Department of Justice, next month.

Other potential measures being considered by the Justice Department include mandating Google to share data with competitors and implementing safeguards to prevent unfair advantages with its AI products, according to sources familiar with the matter.

The trial revealed that Google paid more than $26bn in 2021 to secure agreements with companies such as Apple to keep its search engine as the default option in browsers like Safari, payments the judge ruled were anti-competitive and central to its illegal monopoly.

Following the judge’s ruling, rival search engine DuckDuckGo proposed banning exclusive agreements of this nature.

The ruling, issued last week, found that Google violated antitrust laws by spending billions of dollars to build an illegal monopoly and cement its position as the world’s default search engine. It marks a significant win for federal regulators challenging the market dominance of the tech giants.


In the last four years, federal antitrust regulators have sued Meta Platforms Inc., Amazon.com Inc., and Apple Inc. for allegedly maintaining monopolies unlawfully.

In 2004, Microsoft reached a settlement with the Department of Justice over claims that it compelled Windows users to use its Internet Explorer web browser.

Source: www.theguardian.com

What are the implications of a US judge’s ruling that Google has engaged in illegal monopolistic behavior?

Google was found to have created an illegal monopoly in online search and advertising by a federal court in a landmark antitrust lawsuit brought against it by the Department of Justice. This ruling will significantly impact Google’s operations and how people engage with the internet’s most popular websites.

The court specifically concluded that Google violated antitrust laws through exclusive agreements with device manufacturers like Apple and Samsung, paying them billions to ensure that Google products were the default search engine on their devices. These agreements allowed Google to establish a search monopoly and stifle competition unfairly.

The implications of this ruling will depend on what actions are taken next. It could lead to substantial changes in how Google conducts its business or potentially be weakened through the appeals process. The outcome will also have broader implications for how regulators address big tech companies and alleged monopolies.


Here’s what to expect following this decision.

The U.S. v. Google ruling did not specify remedies for Google’s monopoly on internet search, and the Justice Department did not seek penalties in its lawsuit. A separate trial will determine the remedies the government may impose on Google, which could range from contractual adjustments to a potential breakup of the company.

Judge Mehta could bar Google from entering exclusive search agreements, meaning it could remain the default search engine where device manufacturers choose it, but without the costly payments. Apple and Samsung have yet to comment on the ruling. Mozilla, which relies heavily on Google’s payments, could face a significant financial impact.

Judge Mehta may also consider options like the browser choice screens seen in Europe to enhance competition. A harsher ruling could mandate separating Google’s search service from the rest of its operations and impose fines for the antitrust violations.

Google intends to appeal the decision

Google rejected the court’s ruling and plans to appeal, setting up a legal battle with the Justice Department that could delay any repercussions for the company. Throughout the trial, Google maintained that its dominance stemmed from the superiority of its product.


Past legal precedent suggests that a large technology company like Google may challenge an antitrust ruling successfully. Microsoft, in a similar case, managed to overturn key aspects of an antitrust decision against it through appeals.

Google has not disclosed its appeal timeline or response strategy following the ruling.

New Antitrust Lawsuit Looms

In addition to the current case, Google faces a forthcoming antitrust lawsuit concerning its digital advertising practices, alleging monopolistic behavior and stifling competition in that area.

This second lawsuit targets Google’s dominant position in the digital advertising industry, threatening a substantial revenue stream for the company. Google denies the allegations and views the legal action as an attempt to gain unfair advantages.

The lawsuit is set for trial in September 2024.

Source: www.theguardian.com

UK Case Ruling Prohibits Sex Offenders from Utilizing AI Tools

A convicted sex offender who created over 1,000 indecent images of children has been forbidden from using any “AI creation tools” for the next five years, marking a significant case in this realm.

Anthony Dover, 48, was ordered by a British court not to use artificial intelligence tools without prior police authorization, as part of a sexual harm prevention order imposed in February.

The prohibition covers tools such as text-to-image generators, which produce realistic-looking photos from written prompts, as well as “nudifying” websites used to generate explicit deepfake content.

Dover, who received a community order and a £200 fine, was specifically directed not to use Stable Diffusion, software known to have been exploited by pedophiles to create hyper-realistic child sexual abuse material.

This case is part of a series of prosecutions where AI-generated images have come to the forefront, prompting warnings from charities regarding the proliferation of such images of sexual abuse.

Last week, the government announced the creation of a new crime that makes it illegal to produce sexually explicit deepfakes of individuals over 18 without their consent, with severe penalties for offenders.

Making synthetic child sexual abuse material has been illegal since the 1990s, under laws that have led to recent prosecutions over lifelike images produced with tools such as Photoshop.

Those laws are increasingly being used to combat the dangers posed by sophisticated synthetic content, as evidenced by recent court cases involving the distribution of such images.

The Internet Watch Foundation (IWF) emphasized the urgent need to address the production of AI-generated child sexual abuse images, warning about the rise of such content and its chilling realism.

Law enforcement agencies and charities are working to tackle this growing trend of AI-generated images, with concerns rising about the production of deepfake content and the impact on victims.


Efforts are underway to address the growing concern over AI-generated images and deepfake content, with calls for technology companies to prevent the creation and distribution of such harmful material.

The decision to restrict adult sex offenders from using AI tools may pave the way for increased surveillance of those convicted of indecent image offenses, highlighting the need for proactive measures to safeguard against future violations.

While restrictions on internet use for sex offenders have existed, limitations on AI tools have not been common, underscoring the gravity of this case and its implications for future legal actions.

The company behind Stable Diffusion, Stability AI, has taken steps to prevent abuse of their software, emphasizing the importance of responsible technology use and compliance with legal guidelines.

Source: www.theguardian.com

Elon Musk files lawsuit against OpenAI, seeks court ruling on artificial general intelligence

Elon Musk is concerned about the pace of AI development


Elon Musk has asked a court to rule on whether GPT-4 is artificial general intelligence (AGI) as part of his lawsuit against OpenAI. Developing AGI, an AI that can perform a wide variety of tasks as well as humans can, is one of the field’s central goals, but experts say the idea of asking judges to decide whether GPT-4 qualifies is “unrealistic.”

Musk was one of the founders of OpenAI in 2015, but left the company in February 2018 and has criticized its shift from a nonprofit model to a “capped-profit” one. Despite this, he continued to support OpenAI financially, with the legal complaint alleging that he donated more than $44 million to the company between 2016 and 2020.

Since OpenAI’s flagship chatbot ChatGPT launched in November 2022 and the company partnered with Microsoft, Musk has warned that AI development is moving too fast, and the release of GPT-4, the latest AI model to power ChatGPT, only hardened that view. In July 2023, he founded xAI, a competitor to OpenAI.

In the lawsuit, filed in a California court on 1 March, Musk’s lawyers ask for “a judicial determination that GPT-4 constitutes artificial general intelligence and is thereby outside the scope of OpenAI’s license to Microsoft,” because OpenAI has committed to licensing only “pre-AGI” technology to Microsoft. Musk makes a number of other demands, including financial compensation for his role in helping found OpenAI.

However, Musk is unlikely to prevail, not only on the merits of the litigation, but because of the difficulty of determining when AGI has been achieved. “AGI doesn’t have an accepted definition, it’s kind of a coined term, so I think it’s unrealistic in a general sense,” says Mike Cook at King’s College London.

“Whether OpenAI has achieved AGI is hotly debated, at least among those who base their views on scientific fact,” says Eerke Boiten at De Montfort University, Leicester, UK. “It seems unusual to me that a court could establish scientific truth.”

Such a judgment is not legally impossible, however. “We’ve seen all sorts of ridiculous definitions come out of US court decisions,” says Catherine Flick at Staffordshire University, UK, though she doubts any such ruling would persuade anyone but the most ardent AGI supporters.

It’s unclear what Musk hopes to achieve with the lawsuit. New Scientist has reached out to both him and OpenAI for comment, but has not yet received a response from either.

Regardless of the rationale behind it, the lawsuit puts OpenAI in an unenviable position: its CEO, Sam Altman, has long promoted the company’s pursuit of AGI while issuing stark warnings that such powerful technology needs to be regulated.

“It’s in OpenAI’s interest to constantly hint that their tools are improving and getting closer to this, because it keeps the attention and the headlines flowing,” says Cook. But now the company may need to make the opposite argument.

Even if the court were to rely on expert viewpoints, any judge would have a hard time ruling on the hotly debated topic, at best uncovering sharply differing views. “Most of the scientific community would now say that AGI has not been achieved, insofar as the concept is considered sufficiently meaningful or sufficiently precise,” says Boiten.


Source: www.newscientist.com