Disney claims the AI image generator Midjourney was trained using films like ‘The Lion King’
Maximum Film/Alamy
Since the launch of ChatGPT, OpenAI’s generative AI chatbot, three years ago, we’ve witnessed dramatic shifts across many aspects of our lives. One thing that hasn’t changed, however, is copyright law, which we are still trying to apply according to pre-AI standards.
It’s widely recognized that leading AI firms have developed models by harvesting data from the internet, including copyrighted content, often without securing prior approval. This year, prominent copyright holders have retaliated, filing various lawsuits against AI companies for alleged copyright violations.
The most notable lawsuit was initiated in June by Disney and Universal, claiming that the AI image generation platform Midjourney was trained using their copyrighted materials and enabled users to produce images that “clearly included and replicated Disney and Universal’s iconic characters.”
The proceedings are still under way. In its August response, Midjourney asserted that “the limited monopoly granted by copyright must yield to fair use,” arguing that training AI models on copyrighted works is transformative and therefore permissible.
Midjourney’s argument highlights that the copyright debate is more complex than it might seem at first glance. “Many believed copyright would serve as the ultimate barrier against AI, but that’s not entirely true,” remarks Andres Guadamuz at the University of Sussex, UK, who is surprised at how little impact copyright has had on the progress of AI enterprises.
This is occurring even as some governments weigh in. In October, the Japanese government made an official appeal to OpenAI, urging the company behind the Sora 2 AI video generator to honor Japanese intellectual property, including manga and beloved video games such as Nintendo’s.
Sora 2 is embroiled in further controversy due to its capability to generate realistic footage of real individuals. OpenAI recently tightened restrictions on depictions of Martin Luther King Jr. after family representatives raised concerns about videos of his iconic “I Have a Dream” speech that were overdubbed with crude noises.
“While free speech is crucial when portraying historical figures, OpenAI believes that public figures and their families should ultimately control how their likenesses are represented,” the company stated. The restriction only goes so far, however: celebrities and public figures must still opt out of having their likenesses used in Sora 2. Some argue this remains too permissive. “No one should have to tell OpenAI if they wish to avoid being deepfaked,” says Ed Newton-Rex, a former AI executive and founder of the campaign group Fairly Trained.
In certain instances, AI companies face real consequences for their practices, as one of the largest copyright cases of the past year highlights. Three authors accused Anthropic, the firm behind the Claude chatbot, of deliberately downloading over 7 million pirated books to train its AI models.
A judge reviewed the case and concluded that even if the firm had used this material for training, that use could be sufficiently “transformative” that it wouldn’t inherently infringe copyright. The piracy allegations, however, were serious enough to go to trial, and in September Anthropic settled the lawsuit for at least $1.5 billion.
“Significantly, AI companies appear to be strategizing their responses and may end up paying out a mix of settlements and licensing deals,” says Guadamuz. “Only a small number of companies are likely to collapse due to copyright infringement lawsuits,” he adds. “AI is here to stay, even if some established players fail due to litigation and market fluctuations.”
The online gaming platform Roblox is set to restrict interactions between children and adults, as well as older teenagers, starting next month. This decision comes in light of a new lawsuit that alleges the platform has been exploited by predators to groom children as young as seven.
Roblox, known for popular games like “Grow a Garden” and “Steal a Brainrot,” boasts 150 million daily players. However, it now faces legal action claiming that its system design facilitates the predation of minors.
Beginning next month, a facial age estimation feature will be implemented, allowing children to communicate with strangers only if they are within a certain age range.
Roblox claims it will be the first gaming or communication platform to enforce age verification for chats. Similar measures were enacted in the UK this summer for adult sites, ensuring that under-18s cannot access explicit content.
The company likened its new approach to the age structures found in schools, differentiating elementary, middle, and high school levels. The initiative will be launched first in Australia, New Zealand, and the Netherlands, where children will be prohibited from having private conversations with unknown adults starting next month, with a global rollout planned for early January.
Users will be classified into categories: under 9, 9-12, 13-15, 16-17, 18-20, or 21 and older. Children will only be allowed to chat with peers in their age group or a similar age range. For instance, a child whose age is estimated at 12 can only interact with users under 16. Roblox stated that any images or videos used during the age verification process will not be stored.
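Roblox has not published its exact matching logic, but the tiers described above imply a simple rule. Below is a minimal illustrative sketch in Python, assuming (as the 12-year-old example suggests) that “a similar age range” means a user’s own band or an immediately adjacent one; the band boundaries and function names here are our own illustration, not Roblox’s.

```python
# Illustrative sketch only: Roblox has not published the exact rule.
# Assumption: chat is allowed within one's own age band or an adjacent band,
# which matches the reported example (a 12-year-old, in the 9-12 band,
# may chat with anyone under 16 but not with a 16-year-old).

AGE_BANDS = ["under 9", "9-12", "13-15", "16-17", "18-20", "21+"]

def band_index(age: int) -> int:
    """Map an estimated age to the index of its band in AGE_BANDS."""
    thresholds = [9, 13, 16, 18, 21]  # lower bounds of bands 1..5
    for i, lower in enumerate(thresholds):
        if age < lower:
            return i
    return len(thresholds)  # 21 and older

def can_chat(age_a: int, age_b: int) -> bool:
    """Hypothetical rule: chat allowed when bands are at most one apart."""
    return abs(band_index(age_a) - band_index(age_b)) <= 1

# The article's example: an estimated 12-year-old.
assert can_chat(12, 15)       # 9-12 and 13-15 are adjacent bands
assert not can_chat(12, 16)   # 16-17 is two bands away
```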
“We view this as a means to enhance user confidence in their conversations within the game,” stated Matt Kaufman, Roblox’s chief safety officer. “We see it as a genuine chance to foster trust in our platform and among our community.”
This lawsuit emerges alongside growing concerns from family attorneys regarding the “systematic predation of minors” on Roblox. Florida attorney Matt Dolman mentioned that he has filed 28 lawsuits against Roblox, which has rapidly expanded during the pandemic, asserting that “the primary allegations pertain to the systematic exploitation of minors.”
One of the more recent lawsuits, filed in U.S. District Court in Nevada, involves the family of a 13-year-old girl who claims that Roblox conducted its operations “recklessly and deceptively,” facilitating her sexual exploitation.
The alleged incident involved a ‘dangerous child predator’ who posed as a child, developed an emotional connection, and manipulated the girl into providing her phone number and engaging in graphic exchanges. The manipulator then coerced her into sending explicit photos and videos.
The lawsuit claims that had Roblox implemented user screening measures prior to allowing access, the girl “would not have encountered the numerous predators that litter the platform,” and if age and identity checks had been conducted, the abuse could have been prevented.
Other recent cases in the Northern District of California include a 7-year-old girl from Philadelphia and a 12-year-old girl from Texas, both of whom were reportedly groomed and sent explicit materials by predators on Roblox.
“We are profoundly concerned about any situation that places our users at risk,” a Roblox spokesperson remarked. “The safety of our community is our highest priority.”
“This is why our policies are intentionally more stringent than those on many other platforms,” they added. “We have filters aimed at protecting younger users, prohibit image sharing, and restrict sharing personal information.
“While no system is flawless, we are continually striving to enhance our safety features and platform restrictions, having launched 145 new initiatives this year to assure parents that we prioritize their children’s safety online.”
“One platform’s safety standards alone aren’t sufficient; we genuinely hope others in the industry will adopt some of the practices we’re implementing to ensure robust protections for children and teens across the board,” Kaufman commented.
Beeban Kidron, founder of the UK-based 5Rights Foundation, which advocates for children’s digital rights, stated: “It’s imperative for game companies to prioritize their responsibility toward children within their services.
“Roblox’s announcement asserts that their forthcoming measures will represent best practices in this sector, but it is a bold statement from a company that has historically been slow to tackle predatory behavior and granted unverified adults and older children easy access to millions of young users. We sincerely hope they are correct.”
Epic Games, the creator of Fortnite, has come to a “comprehensive settlement” with Google, which may mark the end of a legal dispute lasting five years regarding Google’s Play Store for Android applications, as stated in joint legal filings by both parties.
Tim Sweeney, CEO of Epic, hailed the settlement as a “fantastic offer” in a post on social media.
In documents submitted on Tuesday to the federal court in San Francisco, both Google and Epic Games noted that the settlement “enables the parties to set aside their differences while fostering a more dynamic and competitive Android environment for users and developers.”
Epic secured a significant legal victory over Google earlier this summer when a federal appeals court upheld a jury’s verdict declaring the Android app store an illegal monopoly. The unanimous decision opens the door for federal judges to potentially mandate substantial restructuring to enhance consumer choices.
While the specific settlement terms remain confidential and require approval from U.S. District Judge James Donato, both companies provided an overview of the agreement in their joint filing. A public hearing is set for Thursday.
The settlement appears to align closely with the October 2024 ruling by Donato, which directed Google to dismantle barriers that protect the Android app store from competition. It also includes a provision requiring the company’s app stores to support the distribution of competing third-party app stores, allowing users to download apps freely.
Google had aimed to reverse these decisions through appeal, but the ruling from the 9th Circuit Court of Appeals in July posed a significant challenge to the tech giant, which is now facing three separate antitrust cases that could impact various aspects of its internet operations.
In 2020, Epic Games launched a lawsuit against both Google’s Play Store and Apple’s iPhone App Store, seeking to bypass proprietary payment processing systems that impose fees ranging from 15% to 30% on in-app transactions. The proposed settlement put forth on Tuesday aims to decrease those fees to a range between 9% and 20%, depending on the specific agreement.
OpenAI declared on Tuesday that it has officially transformed its core business into a for-profit entity, concluding a lengthy and challenging legal dispute.
Delaware Attorney General Kathy Jennings, an essential regulatory figure, announced her approval of a plan for the startup, initially established as a nonprofit in 2015, to transition into a public benefit corporation. This type of for-profit organization highlights a commitment to societal betterment.
The company also revealed that it has restructured its ownership and inked a new agreement with its long-time supporter, Microsoft. The arrangement will provide the tech giant with about a 27% stake in OpenAI’s new commercial venture, altering some specifics of their close partnership. According to the deal, OpenAI is valued at $500 billion, making Microsoft’s stake worth over $100 billion.
This restructuring allows the creators of ChatGPT to raise funds more easily and profit from AI technology while remaining under the nominal oversight of the original nonprofit.
Jennings stated in a release that she does not oppose the proposal, marking the end of over a year of discussions and announcements regarding OpenAI’s governance and the influence commercial investors and the nonprofit board will exert over the organization’s technology. The attorneys general of Delaware, where OpenAI is incorporated, and California, where it is headquartered, had both indicated they were investigating the proposed changes.
OpenAI confirmed it completed the reorganization “after almost a year of productive discussions” with authorities in both states.
“OpenAI has finalized a recapitalization and streamlined its corporate structure,” Bret Taylor, chair of the OpenAI board, stated in a blog post on Tuesday.
Elon Musk, an OpenAI co-founder and former ally of chief executive Sam Altman, had contested the transition through a lawsuit, which he dropped and later refiled, and made an unsolicited bid of nearly $100 billion to take control of the startup.
“The nonprofit will continue to oversee the for-profit corporation and now has direct access to essential resources before AGI arrives,” Taylor noted.
AGI, or artificial general intelligence, is defined by OpenAI as “a highly autonomous system that outperforms humans at most economically valuable work.” OpenAI was founded as a nonprofit in 2015 with the goal of safely creating AGI for the benefit of humanity.
Previously, OpenAI’s own board would determine when AGI had been achieved, a declaration that would effectively end its partnership with Microsoft. Under the new agreement, once OpenAI declares AGI, the declaration will be verified by an independent panel of experts, and Microsoft’s rights to OpenAI’s proprietary research will persist until the panel confirms AGI or until 2030, whichever comes first. Microsoft also retains commercial rights to certain “post-AGI” products from OpenAI.
Microsoft also released a related statement on Tuesday regarding the revised partnership, but opted not to provide additional comments.
The nonprofit will be rebranded as the OpenAI Foundation, and Taylor mentioned it will allocate $25 billion in grants for health and disease treatment and to safeguard against AI-related cybersecurity threats. He did not specify the timeline for disbursing these funds.
Robert Weissman, co-president of the nonprofit organization Public Citizen, remarked that this setup does not ensure the nonprofit’s autonomy, comparing it to corporate foundations that cater to the interests of their for-profit counterparts.
Weissman stated that while the nonprofit’s board may formally retain oversight, “control is illusory because there is no evidence that the nonprofit has enforced its values on the for-profit.”
That evening, I was scrolling through dating apps when a profile caught my eye: “Henry VIII, 34 years old, King of England, non-monogamous.” Before I knew it, I found myself in a candlelit bar sharing a martini with the most notorious dater of the 16th century.
But the night wasn’t finished yet. Next, we took turns DJing alongside Princess Diana. “The crowd is primed for the drop!” she shouted over the music as she placed her headphones on. As I chilled in the cold waiting for Black Friday deals, Karl Marx philosophized about why 60% off is so irresistible.
In Sora 2, if you can imagine it—even if you think you shouldn’t—you can likely see it. Launched in October as an invite-only app in the US and Canada, OpenAI’s video app hit 1 million downloads within just five days, surpassing the initial success of ChatGPT.
An AI-generated deepfake video featuring likenesses of Henry VIII and Kobe Bryant
While Sora isn’t the only AI tool that produces videos from text, its popularity stems from two major factors. First, it makes it simple for users to star in their own deepfake videos. After entering a prompt, a 10-second clip is generated in minutes, which can be shared on Sora’s TikTok-style feed or exported elsewhere. Unlike the low-quality, mass-produced “AI slop” that clogs the internet, these videos exhibit unexpectedly high production quality.
The second reason for Sora’s popularity is its ability to generate portraits of celebrities, athletes, and politicians—provided they are deceased. Living individuals must give consent for their likenesses to be used, but “historical figures” seem to be defined as famous people who are no longer alive.
This is how most users have engaged with the app since its launch. The main feed is a bizarre parade of absurdity featuring historical figures. From Adolf Hitler in a shampoo commercial to Queen Elizabeth II stumbling off a pub table while cursing, the content is surreal. Abraham Lincoln beams as a talk-show host tells him, “You are not the father.” The Reverend Martin Luther King Jr. declares his dream that all drinks be free before abruptly grabbing a cold drink and cursing.
However, not everyone is amused.
“It’s profoundly disrespectful to see the image of my father, who devoted his life to truth, used in such an insensitive manner,” Malcolm X’s daughter Ilyasah Shabazz told the Washington Post. She was just two when her father was assassinated. Now, Sora’s clips show the civil rights leader engaged in crude humor.
Zelda Williams, the daughter of actor Robin Williams, urged people to “stop” sending AI videos of her father through an Instagram post. “It’s silly and a waste of energy. Trust me, that’s not what he would have wanted,” she noted. Before his passing in 2014, he took legal steps to prevent his likeness from being used in advertising or digitally inserted into films until 2039. “Seeing my father’s legacy turned into something grotesque by TikTok artists is infuriating,” she added.
Videos featuring the likeness of the late comedian George Carlin were described by his daughter, Kelly Carlin, as “overwhelming and depressing” in a post on Bluesky.
Recent fatalities are also being represented. The app is filled with clips depicting Stephen Hawking enduring a “#powerslap” that knocks his wheelchair over, Kobe Bryant dunking over an elderly woman while yelling about something stuck inside him, and Amy Winehouse wandering the streets of Manhattan with mascara streaming down her face.
Those who have passed in the last two years (Ozzy Osbourne, Matthew Perry, Liam Payne) seem to be missing, indicating they may fall into a different category.
Each time these “puppetmasters” revive the dead, they risk reshaping the narrative of history, according to deepfakes expert Henry Ajder. “People are worried that a world filled with this type of content could distort how these individuals are remembered,” he explains.
Sora’s algorithm favors content that shocks. One of the trending videos features Dr. King making monkey noises during his iconic “I Have a Dream” speech. Another depicts Kobe Bryant reenacting the tragic helicopter crash that claimed both his and his daughter’s lives.
While actors and comedians sometimes portray characters after death, legal protections are stricter. Film studios bear the responsibility for their content. OpenAI does not assume the same liability for what appears on Sora. In certain states, consent from the estate administrator is required to feature an individual for commercial usage.
“We couldn’t resurrect Christopher Lee for a horror movie, so why can OpenAI resurrect him for countless short films?” asks James Grimmelmann, an internet law expert at Cornell University and Cornell Tech.
OpenAI’s decision to place deceased personas into the public sphere raises distressing questions about the rights of the departed in the era of generative AI.
Legal Issues
It may feel unsettling to have the likeness of a prominent figure persistently haunting Sora, but is it legal? Perspectives vary.
Major legal questions regarding the internet remain unanswered. Are AI firms protected under Section 230 and thus not liable for third-party content on their platforms? If OpenAI qualifies for Section 230 immunity, users cannot sue the company for content they create on Sora.
“However, without federal legislation on this front, uncertainties will linger until the Supreme Court takes up the issue, which might stretch over the next two to four years,” notes Ashkhen Kazaryan, a specialist in First Amendment and technology policy.
OpenAI CEO Sam Altman speaks at Snowflake Summit 2025 on June 2 in San Francisco, California. He is one of the living individuals who permitted Sora to utilize his likeness. Photo: Justin Sullivan/Getty Images
In the interim, OpenAI must circumvent legal challenges by obtaining consent from living individuals. US defamation laws protect living people from defamatory statements that could damage their reputation. Many states have right-of-publicity laws that prevent using someone’s voice, persona, or likeness for “commercial” or “misleading” reasons without their approval.
Allowing the deceased to be depicted this way is the company’s way of “testing the waters,” Kazaryan suggests.
Though the deceased lack defamation protections, posthumous publicity rights exist in states like New York, California, and Tennessee. Navigating these laws in the context of AI remains a “gray area,” as there is no established case law, according to Grimmelmann.
For a legal claim to succeed, estates will need to prove OpenAI’s responsibility, potentially by arguing that the platform encourages the creation of content involving deceased individuals.
Grimmelmann points out that Sora’s homepage features videos that actively promote this style of content. If the app utilizes large datasets of historical material, plaintiffs could argue it predisposes users to recreate such figures.
Conversely, OpenAI might argue that Sora is primarily for entertainment. Each video is marked with a watermark to prevent it from being misleading or classified as commercial content.
Generative AI researcher Bo Bergstedt emphasizes that most users are merely experimenting, not looking to profit.
“People engage with it as a form of entertainment, finding ridiculous content to collect likes,” he states. Even if this may distress families, it might abide by advertising regulations.
However, if a Sora user creates well-received clips featuring historical figures, builds a following, and begins monetizing, they could face legal repercussions. Alexios Mantzarlis, director of Cornell Tech’s Security, Trust, and Safety Initiative, warns that the “financial implications of AI” may include indirect profit from these platforms. Sora’s rising “AI influencers” could face lawsuits from estates if they gain financially from the deceased.
“Whack-a-Mole” Approach
In response to the growing criticism, OpenAI recently announced that representatives of “recently deceased” celebrities can request their likenesses be removed from Sora’s videos.
“While there’s a significant interest in free expression depicting historical figures, we believe public figures and their families should control how their likenesses are represented,” a spokesperson for OpenAI stated.
The parameters for “recent” have yet to be clarified, and OpenAI hasn’t provided details on how these requests will be managed. The Guardian received no immediate comment from the company.
The copyright free-for-all ran into trouble after controversial content, such as “Nazi SpongeBob SquarePants,” circulated online and the Motion Picture Association accused OpenAI of copyright infringement. A week after launch, the company switched to an opt-in model for rights holders.
Grimmelmann hopes for a similar adaptation in how depictions of the deceased are handled. “Expecting individuals to opt out may not be feasible; it’s a harsh expectation. If I think that way, so will others, including judges,” he remarks.
Bergstedt likens this to a “whack-a-mole” methodology for safeguards, likely to persist until federal courts establish AI liability standards.
According to Ajder, the Sora debate hints at a broader question we will all confront: who controls our likenesses in the age of generative AI?
“It’s a troubling scenario if people accept they can be used and exploited in AI-generated hyper-realistic content.”
Over 20 states have filed a lawsuit against the Environmental Protection Agency (EPA), contesting the agency’s decision to terminate a $7 billion initiative designed to enhance access to solar power for low-income households.
The initiative, known as “Solar For All,” was launched in 2022 as part of the Inflation Reduction Act, which allocated subsidies for building rooftop and community solar projects. This action was part of the Biden administration’s commitment to decreasing carbon emissions and aimed to make solar energy available to around 1 million additional American households.
However, in August the EPA announced the program’s cancellation and withdrew approximately 90% of the grant funds from recipients’ accounts, according to the legal complaint.
The EPA has been working to rescind clean energy funding approved under the Biden administration, and this new lawsuit will test whether the agency overstepped its bounds. The states involved in the legal challenge had expected the funding to boost solar power availability, lower greenhouse gas emissions from energy production, and decrease energy costs.
“Congress established a solar energy program to make electricity more affordable, but the administration is ignoring the law, focusing instead on conspiracy theories about climate change,” Washington Attorney General Nick Brown stated in a news release. The EPA’s action “places about $156 million in jeopardy” for Washington state, as mentioned in the release.
Earlier this month, a coalition of nonprofit organizations and solar installers lodged a similar complaint against the program’s cancellation.
When asked about the recent lawsuit, the White House referred NBC News to the EPA, which typically remains silent on ongoing litigation.
The states involved in the lawsuits are all governed by Democratic officials. Notably, Washington, Arizona, and Minnesota are leading this legal action, which was filed in the Western District of Washington.
The lawsuit contends that the EPA “illegally and unilaterally terminated” the program, breaching the Administrative Procedure Act that regulates federal agencies’ operations. It also claims that the EPA overstepped its “constitutional authority” by attempting to revoke programs and funds approved by Congress.
This latest suit is part of a dual strategy employed by states to counteract the Trump administration’s cuts to clean energy initiatives established under President Joe Biden.
On Wednesday, another group, including states and state energy agencies, filed a separate complaint in the U.S. Court of Federal Claims regarding the cancellation of individual subsidy agreements.
The lawsuit argues that the EPA’s retraction of funds violated distinct subsidy contracts with states and state energy authorities.
It further claims the EPA relied on a “false and malicious interpretation” of the One Big Beautiful Bill, which was enacted during the Trump administration, to support its actions.
While acknowledging that the law granted the administration certain powers to retract Inflation Reduction Act funds, the complaint asserts that this authority extended only to funds not yet distributed to grant recipients.
A third lawsuit was filed this month in Rhode Island District Court. Solar companies, homeowners, nonprofits, and labor unions are making similar claims. It contends that the EPA’s actions could deny nearly 1 million people access to affordable solar energy and jeopardize “hundreds of thousands of good-paying, high-quality jobs.”
Amazon went on trial Monday in a US government lawsuit accusing it of employing deceptive methods to enroll millions in its Prime subscription service while making cancellation nearly impossible.
A complaint from the Federal Trade Commission (FTC), filed in June 2023, alleges that Amazon deliberately used “dark pattern” designs to mislead consumers into subscribing to the $139-a-year Prime service during checkout.
According to the complaint, “Amazon has knowingly duped millions of consumers into unknowingly enrolling” in its Prime service for years.
The case pivots on two primary claims: that Amazon enrolled customers without their clear consent through a confusing checkout process, and that it built a convoluted cancellation system internally dubbed “Iliad.”
Judge John Chun is presiding over the case in federal court in Seattle. He is also overseeing a separate FTC case accusing Amazon of operating an illegal monopoly.
This lawsuit is part of a broader initiative, with multiple lawsuits against major tech companies in a bipartisan bid to rein in the influence of US tech giants after years of governmental inaction.
Allegedly, Amazon was aware of the extensive non-consensual Prime registrations but resisted modifications that would lessen these sign-ups due to their adverse effect on company revenue.
The FTC claims that Amazon’s checkout process pushed customers through a confusing interface whose prominently designed buttons effectively hid the option to decline Prime while signing up. Crucial information about Prime’s pricing and automatic renewal was often concealed or presented in fine print, and the resulting subscriptions formed a core part of Amazon’s business model.
Additionally, the lawsuit scrutinizes Amazon’s cancellation procedure, which the FTC describes as a complicated “maze” involving four pages and six clicks.
The FTC seeks financial penalties, monetary relief, and permanent injunctions to mandate changes in Amazon’s practices.
In its defense, Amazon argues that the FTC is overreaching its legal boundaries and asserts that it has made improvements to its registration and cancellation processes, dismissing the allegations as outdated.
The trial is anticipated to last around four weeks, relying heavily on internal Amazon communications and documents, as well as testimonies from company executives and expert witnesses.
Should the FTC prevail, Amazon could face significant financial repercussions and may be required to reform its subscription practices under court supervision.
Appearance: Bespectacled, impeccably dressed, and weary of Facebook.
Mark Zuckerberg is having trouble with Facebook? Yes, that’s what I said.
Isn’t Mark Zuckerberg the head of Facebook? No, Mark Zuckerberg is a bankruptcy attorney from Indianapolis.
Oh, have we slipped into an alternate reality again? Keep up. There can be more than one person in the world with the same name.
Got it. Mark Zuckerberg (the Indianapolis bankruptcy attorney) is fed up with Facebook (the barely usable social media platform founded by the other Mark Zuckerberg). There, that wasn’t too hard.
But why? Imagine having a Facebook account while sharing the name Mark Zuckerberg. Your existence would be inundated with messages, requests, and harassment.
That makes sense. Attorney Zuckerberg spent thousands of dollars on Facebook ads to market his law practice, only for Meta to repeatedly disable his account on suspicion that he was impersonating a famous figure. So now he is suing Meta.
I feel for people who share names with celebrities. Same here. Consider John Lewis, a humble Virginian who has lost weeks of his life explaining to strangers that he isn’t the large British department store chain, all because he holds the @Johnlewis handle on X.
What a disaster. Then there’s the late children’s author Jeremy Strong, who battled with his name for years after the TV series Succession became popular, spending the latter part of his career apologizing for not being the actor who portrayed Kendall Roy.
Well, that’s unfortunate for him. It’s equally unfortunate for Attorney Zuckerberg, who before filing the lawsuit had been documenting everything that has happened to him since the other Mark Zuckerberg became famous.
Oh, really? What has that been like? He has been mistakenly caught up in litigation from Washington state, companies hesitate to take his business for fear they’re part of a prank, and he recalls seeing disappointment on the face of the limousine driver who picked him up. When he tried 23andMe, he was bombarded by people who a) claimed to be related to him and b) wanted money.
What a nightmare. Anyway, Meta has restored Mark Zuckerberg’s account and expressed regret for the mix-up, but the legal battle continues.
I wish him all the best. There’s a precedent here, too. In 2019, the fashion designer Katie Perry sued the singer Katy Perry for trademark infringement. Ominously for Indianapolis Zuckerberg, the singer ultimately won on appeal.
Bad news for Katie Perry. And perhaps for Mark Zuckerberg.
Say: “It’s tough having a name that belongs to a famous person.”
Don’t say: “My newborn son, Donald Trump, will soon find this out.”
A Victorian solicitor has become the first lawyer in Australia to receive professional sanctions for the use of artificial intelligence in court, losing his right to practise as a principal lawyer after submitting AI-generated citations he had not verified.
According to a report by Guardian Australia, during a hearing on July 19, 2024, the unnamed lawyer, who was representing a husband in a marital dispute, provided the court with a list of prior cases that Judge Amanda Humphreys had requested regarding enforcement applications in the case.
Upon returning to her chambers, Humphreys stated in her ruling that neither she nor her associates could locate any of the cases on the list. When the matter returned to court, the lawyer disclosed that the list had been generated using AI-powered legal software.
He confessed to not verifying the accuracy of the information before submitting it to the court.
The lawyer offered an “unconditional apology” to the court and asked not to be referred for investigation, saying the lessons of the episode were ones he had taken to heart.
He acknowledged his lack of understanding of how the software operated and recognized the necessity to verify the accuracy of AI-assisted research. He agreed to cover the costs incurred by the opposing lawyer due to the canceled hearing.
Humphreys accepted that the apology was genuine and that the conduct was unlikely to be repeated. However, given the increasing use of AI tools in the legal profession, she considered a referral necessary, citing the role of the Victorian Legal Services Board and Commissioner in examining professional conduct.
The lawyer was subsequently referred to the regulator for investigation, marking one of the first reported cases in Australia of a lawyer using AI in court to produce fabricated citations.
The Victorian Legal Services Board confirmed on Tuesday that the lawyer’s practising certificate was varied on August 19 as a result of the investigation. He no longer has the right to practise as a principal lawyer, cannot handle trust money, and may work only as an employee solicitor.
The lawyer is required to undergo two years of supervised legal practice, with quarterly reports to the board from both him and his supervisor during this period.
A spokesman remarked, “The board’s regulatory actions on this matter reflect our commitment to ensuring that legal professionals using AI in their practices do so responsibly and in alignment with their obligations.”
Since this incident, more than 20 further cases have been reported in Australian courts in which lawyers or self-represented litigants used artificial intelligence to prepare court documents containing false citations.
A lawyer in Western Australia is also under scrutiny from that state’s regulatory body over practice standards.
In at least one Australian case, a court document was claimed to have been prepared using ChatGPT even though it had been generated before ChatGPT became publicly available.
Courts and legal associations acknowledge that AI has a role in legal proceedings but continue to caution that its use does not displace lawyers’ professional judgment.
Juliana Warner, president of the Law Council of Australia, told Guardian Australia last month: “If lawyers are using these tools, it must be done with utmost care, always keeping in mind their professional and ethical obligations to the court and their clients.”
Warner further noted that while cases involving AI-generated false citations raise “serious concerns,” a blanket ban on the use of generative AI in legal proceedings “is neither practical nor proportionate and risks hindering both innovation and access to justice.”
Elon Musk has threatened legal action against Apple on behalf of his AI startup xAI, alleging that the iPhone manufacturer is favoring OpenAI in its App Store rankings in breach of antitrust law. The claim drew a sharp response from OpenAI CEO Sam Altman and ignited a feud between the two former business partners on X.
“Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach No. 1 in the App Store, which is an unequivocal antitrust violation. xAI will take immediate legal action,” Musk declared in a post on X.
He made similar claims in another post the same day.
Currently, OpenAI’s ChatGPT occupies the top spot in the “Top Free Apps” category of the US App Store, while xAI’s Grok sits in fifth place. Apple has partnered with OpenAI to integrate ChatGPT across the iPhone, iPad, and Mac. Neither Apple nor xAI provided any comment.
Altman replied to Musk on X, saying, “This is an unexpected claim considering we’ve heard Elon is attempting to manipulate X for his own benefit and to undermine his competitors, including those he dislikes.” Reports indicate that Musk has tweaked X’s algorithm to favor his own posts.
Altman and Musk co-founded OpenAI in 2015, but Musk departed the startup in 2018 and withdrew his funding after a failed attempt to take control. Musk has since sued OpenAI twice over its planned conversion to a for-profit entity, alleging “deceit of Shakespearean proportions.” Altman has characterized Musk as a bitter and envious ex-partner, resentful of the company’s achievements since his departure.
Musk responded to Altman’s post: “You got 3 million views on your dishonest post, you liar, far more than I’ve gotten on many of mine, despite me having 50 times your follower count!”
Altman retorted several times, initially attributing the disparity to a “skill issue” or bots, before challenging Musk to attest that he has never directed changes to X’s algorithm to benefit himself or harm his competitors.
Users on X highlighted through the Community Notes feature that several apps, aside from OpenAI, have claimed top positions on the App Store this year.
For instance, the Chinese AI app DeepSeek reached the No. 1 position in January, while Perplexity ranked first in the App Store in India in July.
One user put the question to Grok, X’s native AI chatbot. It replied: “Based on confirmed evidence, Sam Altman is correct.”
Musk’s remarks come as regulators and competitors heighten their scrutiny of Apple’s App Store dominance.
Earlier this year, an EU antitrust regulator ordered Apple to pay a fine of 500 million euros ($581.15 million).
In early 2024, the U.S. Department of Justice filed an antitrust lawsuit against Apple, accusing the iPhone manufacturer of establishing and maintaining “broad, persistent, and illegal” monopolies in the smartphone market.
Weeks after a federal appeals court ordered Apple to loosen its grip on the App Store, CEO Tim Cook and his senior lieutenants deliberated over what to do next.
For over ten years, Apple had insisted that apps use the App Store’s payment system, collecting a commission of up to 30% on sales. In 2023, however, the court ruled that apps could bypass Apple’s payment system and let users purchase directly. Cook sought clarity on whether Apple could still impose fees on those sales without breaching the court’s directive.
Phil Schiller, responsible for overseeing the App Store, expressed concerns that the revised fees might be unlawful. He supported direct online sales without Apple’s commission. Luca Maestri, the company’s financial head, disagreed, advocating for a 27% commission to safeguard the business.
Ultimately, Cook sided with Maestri, and the company set about rationalizing the decision. A federal judge excoriated the company in a recent ruling, accusing it of commissioning supposedly independent economic research to validate its choices and of withholding thousands of documents under claims of attorney-client privilege. Furthermore, at least one executive misled the court.
The judge’s ruling, together with witness testimony this year and company documents disclosed Thursday, highlights the extreme measures Apple has taken to keep every cent it collects from the App Store. The ruling by Judge Yvonne Gonzalez Rogers, who presided over Epic Games’ original 2020 lawsuit, could crimp Apple’s operations and hurt its credibility as scrutiny of the business intensifies.
Additionally, the company faces multiple other legal challenges: an antitrust lawsuit from the Department of Justice accusing it of maintaining a monopoly with the iPhone, class action lawsuits from U.S. app developers, and regulatory scrutiny in the U.K., Spain, and potentially China.
Mark A. Lemley, a professor at Stanford Law School, noted, “If you lose credibility with the court, the next judge may be less forgiving.” The situation could prompt judges in Apple’s subsequent cases to suspect dishonesty.
Google’s conduct has similarly cast a shadow over its legal proceedings. In a recent ruling in the antitrust case over Google’s advertising technology, the judge noted that the company’s attempts to obscure internal communications raised concerns about its adherence to court mandates.
Apple plans to appeal Judge Gonzalez Rogers’ ruling, asserting that the findings were “unjust,” and is seeking a delay of the court order. The company declined to comment further for this report.
In 2020, Epic, the creator of Fortnite, sued Apple, alleging antitrust violations related to the mandated use of the App Store payment system. Although Judge Gonzalez Rogers ruled in Apple’s favor on the monopoly question, she found that Apple violated California competition law by restricting how developers could steer users to other ways of buying software and services.
To comply with the court’s order, Apple initiated a project termed “Wisconsin.” Two options were explored: one that would allow apps to include links for online purchases at designated locations without any fee, and another that would charge a 27% commission on purchases made through those links.
Without commissions and fees, Apple estimated potential losses of hundreds of millions of dollars, possibly exceeding a billion. Opting for the 27% fee would minimize those losses.
In a June 2023 meeting, Cook evaluated commission options ranging from 20% to 27%. He reviewed analyses of how much business Apple’s payment system might lose at each rate, and ultimately endorsed a plan that charged the 27% commission and limited where apps could place links for online purchases.
Apple then enlisted an economic consultancy to author reports substantiating the fees, which concluded that its developer tools and distribution services were worth more than 30% of an app’s revenue.
Apple also instituted a warning screen for online purchases. Cook instructed the team to strengthen the warning to emphasize Apple’s commitment to privacy and security, on the grounds that the company could not be held accountable for the privacy or security of transactions made outside its system.
After Apple introduced the 27% commission in January 2024, Epic brought the company back to court, arguing it was not complying with the judge’s orders. Judge Gonzalez Rogers summoned both companies to hearings, where Alex Roman, Apple’s vice president of finance, testified that the commission had been finalized on January 16, 2024. Executives also revealed that the consultancy’s report had influenced the setting of the fee.
Skeptical of Apple’s honesty, Judge Gonzalez Rogers demanded documentation of its compliance. Apple submitted 89,000 documents, claiming a third of them were privileged. The court found many of those claims “baseless” and concluded that Apple had used them to conceal documents.
Judge Gonzalez Rogers found that Roman had lied under oath, that the consultancy’s report was misleading, and that Apple had “willfully” ignored the court’s directives, conduct she characterized as concealment.
Her ruling may empower prosecutors, regulators, and judges in similar ongoing cases against Apple across the globe, according to various antitrust professors and lawyers.
When a company edits or conceals documents, it risks drawing the attention of prosecutors and judges to such tactics to delay litigation, as in the Epic Games case, where the credibility of Apple executives was called into question once it became apparent that the company had concealed the truth.
In Apple’s other cases, such as the Department of Justice’s antitrust lawsuit, judges may now open proceedings with a firm statement against Apple’s past tactics, said Colin Kass, an antitrust attorney at Proskauer Rose: in effect, “I won’t entertain any of the games they’ve played before.”
The company will have to be cautious in defending the Justice Department’s lawsuit, noted Vanderbilt University law professor Rebecca Haw Allensworth, who studies antitrust. Apple previously claimed that the green bubbles shown for messages from Android users were a matter of safety; such claims, she suggested, may now be viewed skeptically following the recent ruling.
Allensworth remarked that the judge’s opinion could influence App Store practices, pushing Apple toward enforced resolutions akin to those in the European Union, the U.K., and Spain, in order to restore the confidence of regulators and courts.
“Apple behaves as though it operates above the law,” she asserted. “This sends a clear message that such behavior is unacceptable.”
Last year, Vermont achieved a historic milestone by enacting the nation’s first climate superfund law, aimed at enabling the recovery of funds from fossil fuel companies to manage the escalating expenses associated with climate change.
Whether it delivers, however, depends on whether the state can prevail against mounting legal challenges.
The Department of Justice recently filed a federal lawsuit against Vermont, which, along with New York, is one of only two states to adopt a climate superfund law. The lawsuit argues the measure is “a bold effort to seize federal authority” and forces others to subsidize state infrastructure expenditures.
Shortly after, West Virginia Attorney General John B. McCuskey announced he was spearheading another challenge to Vermont’s law, claiming it “encroaches upon American coal, oil, and natural gas producers.”
McCuskey had previously filed a similar challenge to New York’s law, which seeks $75 billion from oil and gas companies over the next 25 years. On Thursday, he warned that the Vermont version could be “even more perilous” because it lacks a financial cap.
He and 23 other state attorneys general are joining a lawsuit filed late last year in federal court in Vermont by the American Petroleum Institute and the US Chamber of Commerce.
West Virginia is a significant source of natural gas and coal, and the complaint asserts that fossil fuel companies operate legally. It argues that “Vermont enjoys affordable and reliable fuels while simultaneously punishing those who produce such energy.”
The Climate Superfund Act is patterned after the federal Superfund program, which aims to clean up hazardous waste sites. This program has been operational for decades, ensuring that businesses contributing to contamination help finance the cleanup.
The new climate superfund law stems from the understanding that the burning of fossil fuels—which generates carbon dioxide and other greenhouse gases—is a primary driver of climate change. Consequently, the law permits states to pursue funding from fossil fuel producers to mitigate the costs of global warming. Similar legislative initiatives are gaining traction in states like California, New Jersey, and Massachusetts.
Patrick Parenteau, an expert in environmental law at Vermont Law School, characterized the Justice Department’s case as “a display of virtue signaling” and anticipates its dismissal. He expects the state to argue that the Chamber of Commerce’s lawsuit is premature, given that officials are still determining how the law will be applied and no companies have yet been directly implicated.
Julie Moore, secretary of the Vermont Agency of Natural Resources, who is named in both filings, said her office is reviewing the specifics. She noted that the Justice Department’s action was “not unforeseen” in light of President Trump’s April 8 executive order aimed at protecting American energy from state overreach.
This order explicitly mentions the new laws in Vermont and New York, deeming them threats to national economic and security interests.
Letitia James, the New York Attorney General, who is named in the DOJ lawsuit, stated that the Climate Superfund Act “will ensure that those responsible for the climate crisis contribute to remedying the damages they have inflicted.”
Meghan Greenfield, an environmental attorney who previously worked at the DOJ and the Environmental Protection Agency and is now a partner at Jenner & Block, remarked that legal conflict over such new laws is inevitable. Some of the relevant arguments are novel and untested, revolving around the concept of “equal sovereignty” between states, which posits that states must be treated equally by the federal government.
“We are navigating complex legal landscapes, with new types of laws and challenges emerging, making predictions difficult,” she noted.
She also expressed anticipation for further confrontations regarding more conventional state climate regulations, particularly those in New York and California.
Former Sinn Féin president Gerry Adams is contemplating legal action against Meta for potentially using his books to train artificial intelligence.
Adams claims that Meta, and other tech companies, have incorporated several books, including his own, into a collection of copyrighted materials for developing AI systems. He stated, “Meta has utilized many of my books without obtaining my consent. I have handed the matter over to lawyers.”
On Wednesday, Sinn Féin released a statement listing the titles included in the collection, among them a variety of memoirs, cookbooks, and short stories, including Adams’ autobiography Before the Dawn, his prison memoir Cage Eleven, and Hope and History, his reflections on the peace process in Northern Ireland.
Adams joins a group of authors who have filed court documents against Meta, accusing the company of approving the use of Library Genesis, a “shadow library” known as Libgen, to access over 7.5 million books.
The authors, which include well-known names such as Ta-Nehisi Coates, Jacqueline Woodson, Andrew Sean Greer, Junot Díaz, and Sarah Silverman, have alleged that Meta executives, including Mark Zuckerberg, knew that Libgen contained pirated material.
Authors have identified numerous titles from Libgen that Meta may have used to train its AI system, Llama, according to a report in The Atlantic.
The Society of Authors has expressed outrage over Meta’s actions, with chair Vanessa Fox O’Loughlin stating that they are deeply damaging to writers, allowing AI to replicate creative content without permission.
Novelist Richard Osman emphasized the importance of respecting copyright laws, stating that permission is required to use an author’s work.
In response to the allegations, a Meta spokesperson stated that the company respects intellectual property rights and believes that using information to train AI models is lawful.
Last year, Meta released Llama, an open-source large language model similar to other AI tools such as OpenAI’s ChatGPT and Google’s Gemini. Llama is trained on a vast dataset to generate human-like language and computer code.
Adams, a prolific author who has written across a variety of genres, has been identified as one of the authors in the Libgen database. Other Northern Ireland authors listed include Jan Carson, Lynne Graham, Deric Henderson, and Anna Burns, as reported by the BBC.
The UK government’s attempt to keep details of its legal battle with Apple secret has been unsuccessful.
The Investigatory Powers Tribunal, which investigates potential unlawful actions by the national intelligence agencies, on Monday rejected a request from the Home Office to keep “details” of the case confidential.
The tribunal’s presiding judges, Lord Justice Singh and Mr Justice Johnson, disclosed some aspects of the case on Monday.
They confirmed that the case involves Apple challenging the Home Office regarding a technical capability notice under the Investigatory Powers Act.
The Home Office argued that revealing the existence of the claim and the identities involved would jeopardize national security.
The judges stated, “We do not believe that disclosing specific details of the case would harm the public interest or endanger national security.”
Reports from The Guardian and other media outlets have claimed that the Home Office issued a technical capability notice to Apple, seeking access to data protected by Apple’s Advanced Data Protection service.
Apple has stated it will not comply with the notice, refusing to create a “backdoor” in its products or services.
Lord Justice Singh and Mr Justice Johnson noted that neither Apple nor the Home Office has confirmed or denied the existence of the technical capability notice or the accuracy of media reports about its contents.
The judges added, “This ruling should not be taken as confirmation of the accuracy or inaccuracy of media reports. Details about the technical capability notice remain undisclosed.”
Journalists were denied access to a hearing in the case last month.
Various media organizations had asked the tribunal to confirm the identities of the participants and to hold the March 14 hearing in public.
Neither journalists nor legal representatives were allowed at the hearing, with the identities of the involved parties remaining anonymous beforehand.
The judges mentioned the potential for future hearings to have public elements without restrictions, but the current stage of the process does not allow it.
Recipients of technical capability notices cannot reveal their existence unless authorized by the Home Secretary, and under the rules published on the tribunal’s website, hearings should be private only when absolutely necessary.
Ross McKenzie, a data protection partner at Addleshaw Goddard law firm, stated that despite the ruling, it is unlikely that detailed information regarding the Home Office’s case for accessing Apple user data will be disclosed.
A Home Office spokesperson declined to comment on the legal proceedings but emphasized the importance of investigatory powers in countering serious threats to the UK.
Apple chose not to provide a comment on the matter.
Twelve US copyright lawsuits against OpenAI and Microsoft, brought by authors and news outlets, have been consolidated in New York.
According to a transfer order from the U.S. Judicial Panel on Multidistrict Litigation, centralization will help coordinate discovery, streamline pretrial litigation, and avoid inconsistent rulings.
Prominent authors including Ta-Nehisi Coates, Michael Chabon, Junot Díaz, and comedian Sarah Silverman filed their cases in California, but the suits will now move to New York to join those brought by news outlets such as The New York Times. Other authors involved in the lawsuits include John Grisham, George Saunders, Jonathan Franzen, and Jodi Picoult.
Although most plaintiffs opposed consolidation, the transfer order found that the cases share factual questions arising from allegations that OpenAI and Microsoft used copyrighted works without consent to train the large language models (LLMs) behind AI products such as OpenAI’s ChatGPT and Microsoft’s Copilot.
OpenAI had initially proposed consolidating the cases in Northern California, but the panel moved them to the Southern District of New York for the convenience of parties and witnesses and to ensure the just and efficient conduct of the litigation.
Tech companies argue that using copyrighted works to train AI falls under the doctrine of “fair use,” but many plaintiffs, including authors and news outlets, disagree.
An OpenAI spokesperson welcomed the development, stating that the company trains its models on publicly available data in support of innovation. A lawyer representing the Daily News, meanwhile, said they look forward to proving in court that Microsoft and OpenAI infringed their copyrights.
Some of the authors suing OpenAI have also sued Meta for copyright infringement in AI model training. Court filings in January revealed allegations that Meta CEO Mark Zuckerberg approved the use of copyrighted materials in AI training.
Amazon recently announced a new Kindle feature called “Recaps” that uses AI to generate summaries of books for readers. While the company sees it as a convenience for readers, some users have raised concerns about the accuracy of AI-generated summaries.
The UK government, meanwhile, is contending with concerns from peers and Labour MPs about its copyright proposals, and is being urged to assess the economic impact of its AI plans.
Jerome Dewald sat with his legs crossed and his hands folded in his lap before a panel of New York appellate judges, ready to argue for the reversal of a lower court’s decision in a dispute with his former employer.
The court had allowed Dewald, who was representing himself without a lawyer, to present his arguments in a pre-recorded video.
When the video began to play, it showed a man who appeared younger than the 74-year-old Dewald, wearing a blue shirt and beige sweater and standing in front of what appeared to be a blurry virtual background.
Seconds into the video, one of the judges, confused by the image on screen, asked Dewald whether the man was his lawyer.
“I generated it,” replied Dewald. “It’s not a real person.”
Justice Sallie Manzanet-Daniels of the Appellate Division’s First Judicial Department briefly halted the proceedings. It was clear she was unhappy with his answer.
“It would have been nice to know that when you made your application,” she snapped at him.
“I don’t appreciate being misled,” she added, before calling for the video to be switched off.
What Dewald hadn’t disclosed was that he had created the digital avatar using artificial intelligence software, the latest example of AI creeping into the US legal system in potentially troubling ways.
Dewald, the plaintiff in the case, said when reached on Friday that he had been overwhelmed with embarrassment at the hearing. He sent the judges an apology letter soon afterwards, expressing deep regret and acknowledging that his actions had “inadvertently misled” the court.
He said he had turned to the software after stumbling over his words in earlier legal proceedings, thinking an AI-assisted presentation might ease the pressure he felt in the courtroom.
He said he had planned to create a digital version of himself but ran into “technical difficulties,” prompting him to use the avatar instead.
“My intention was not to deceive, but to present my argument in the most efficient way possible,” he wrote to the judges. “However, I recognize that proper disclosure and transparency must always take precedence.”
Dewald, a self-described entrepreneur, was appealing an earlier ruling in a contract dispute with a former employer. He ultimately delivered his oral argument to the appeals court himself, pausing frequently to regroup and reading prepared remarks from his mobile phone.
Embarrassed as he was, Dewald could take some comfort in the fact that actual lawyers have also got into trouble for using AI in court.
In 2023, two New York lawyers faced sanctions after submitting a legal brief created with ChatGPT that was filled with fabricated judicial opinions and legal citations. The incident exposed the flaws of relying on artificial intelligence and reverberated through the legal profession.
That same year, Michael Cohen, former President Trump’s onetime lawyer and fixer, gave his own attorney fake legal citations he had obtained from Google Bard, an artificial intelligence program. Cohen ultimately pleaded for mercy from the federal judge overseeing his case, emphasizing that he had not known the generative text service could produce false information.
Some experts say artificial intelligence and large language models can be useful for people who have legal problems but cannot afford a lawyer. Still, the technology carries risks.
“They can still hallucinate, and we need to deal with that risk,” says Daniel Shin, assistant director of research at the Center for Legal and Court Technology at William & Mary Law School.
Carmaker Hyundai is facing legal action over allegations that one of its most popular electric vehicle models can be stolen in seconds. Digital security expert Elliot Ingram was shocked to see CCTV footage of a hooded thief stealing his Hyundai Ioniq 5 from outside his home in under 20 seconds.
It is believed the thief used a device available online to clone the car’s electronic key. The incident is one of many thefts involving this vehicle, and many owners now resort to steering locks for added security. Ingram’s car was eventually recovered by the police, but he has decided to terminate his lease and is seeking compensation from the carmaker, arguing that the Korean automotive giant should have informed customers about the security vulnerability.
“The security system is completely compromised, making it susceptible to attacks by anyone,” he stated. “It’s no longer effective.”
Hyundai has promoted the convenience of its digital and smart keys, which allow drivers to lock and unlock the doors and start the engine with just a key fob or digital key. While the technology includes various security measures, criminal groups have found ways to bypass them.
Ingram discovered a key emulator device being sold online for 15,000 euros. The device, which resembles Nintendo’s Game Boy console and can operate in English or Russian, records signals from the car and replicates them within seconds, allowing easy unauthorized access. Last year, the automotive industry admitted it had ignored warnings, made more than a decade ago, about the theft risks associated with keyless technology.
Hyundai has responded by saying there is an industry-wide problem with organized criminal groups using electronic devices to bypass smart key lock systems. It is collaborating with law enforcement to better understand these devices and track stolen vehicles, and it is working on an update to reduce the risk of keyless theft for vehicles sold since February 2024, with plans to offer it retroactively for earlier models.
Vehicle theft has been on the rise in England and Wales, with a significant increase in criminals’ use of remote devices. Legislation is being introduced to ban the electronic devices used for keyless vehicle theft, with severe penalties for anyone found possessing, manufacturing, importing, or distributing them.
Hyundai says it is focused on enhancing vehicle security to combat theft, but it does not plan to recall the vehicle. Despite the updates and measures being implemented, the company acknowledges the challenge posed by determined criminals who will stop at nothing to steal vehicles.
Elon Musk’s X has alleged that India’s IT Ministry has unlawfully expanded its censorship powers, making it easier to remove online content and allowing “countless” government officials to issue takedown orders.
The lawsuit marks an escalation of the ongoing legal dispute between X, which has been ordered by New Delhi to take down content, and the government of Indian Prime Minister Narendra Modi. It comes as Musk prepares to launch Starlink and Tesla in India.
In a court filing dated March 5, X argues that the IT Ministry is using a government website, launched by the Home Affairs Ministry last year, to issue content-blocking orders and is compelling social media companies to join the site. According to X, the process bypasses stringent safeguards in Indian law governing content removal, which require that orders be issued only in cases of harm to sovereignty or public order and be subject to strict oversight by top officials.
India’s IT Ministry redirected a request for comment to the Home Affairs Ministry, which did not respond.
X contends that the government website establishes an “unacceptable parallel mechanism” that leads to “unchecked censorship of information in India.”
X’s court documents have not been publicly released and were first reported by the media on Thursday. The case was briefly heard earlier this week by a judge of the Karnataka High Court in southern India, but no final decision was reached; the next hearing is scheduled for March 27.
In 2021, X, then known as Twitter, clashed with the Indian government after defying a legal order to block certain tweets related to farmers’ protests against government policies. It eventually complied after facing a backlash, but its legal challenge remains ongoing in Indian courts.
The legal battle between the US tech company and the UK government over access to customer data continued with a closed-door hearing on Friday, after the press was denied entry to the courtroom.
Apple has appealed to the Investigatory Powers Tribunal after the Home Office demanded access to encrypted data stored on Apple’s cloud servers.
British media outlets including The Guardian, the BBC, the Financial Times, and Computer Weekly sought access to the hearing on public interest grounds but were denied entry.
The government’s representative in the case, Sir James Eadie KC, was seen entering the court on Friday.
Apple is contesting a technical capability notice issued under the Investigatory Powers Act, which can require businesses to assist in providing evidence to law enforcement. The notice reportedly sought access to Apple’s Advanced Data Protection (ADP) service, which encrypts personal data stored remotely on its servers.
Apple refused to comply with the order and is challenging it before the tribunal, which scrutinizes the legality of actions by the UK’s intelligence agencies. Apple has also withdrawn ADP from the UK, saying it has never built a backdoor or master key into any of its products or services.
ADP employs end-to-end encryption, ensuring that only the account owner can decrypt the data. Messaging services like iMessage and FaceTime are also end-to-end encrypted by default.
The government’s legal demands, known as technical capability notices, prohibit recipients from disclosing the order unless authorized by the Home Secretary. Court hearings are supposed to be closed to the public only where strictly necessary to protect national security.
A bipartisan group of US lawmakers called for transparency regarding the UK government’s orders and urged further hearings and proceedings to shed light on the issue.
Reports suggest that British officials have begun discussions with US counterparts to reassure them that they are not seeking blanket access to US data, only information related to serious crimes such as terrorism and child sexual abuse.
Nigeria has filed a lawsuit against Binance seeking $79.5 billion for economic losses allegedly caused by the cryptocurrency exchange’s operations in the country, as well as $2 billion in back taxes, according to court documents filed on Wednesday.
Authorities have criticized Binance, the world’s largest cryptocurrency exchange, blaming it for the devaluation of the Nigerian naira. Two of the company’s executives were arrested in 2024 after its platform emerged as a popular venue for trading the currency. Binance, which is not registered in Nigeria, has not yet commented on the suit.
The Nigerian Federal Inland Revenue Service (FIRS) claims that Binance owes corporate income tax because of its significant economic presence in the country. FIRS is seeking income tax payments for 2022 and 2023, along with a 10% annual penalty on the outstanding amounts and interest of 26.75% based on the Nigerian central bank’s lending rate.
Binance is already facing four counts of tax evasion in Nigeria, including non-payment of VAT and company income tax, failure to file tax returns, and conspiracy to help customers evade taxes through its platform.
In response to the allegations, Binance said in March that it had halted all naira transactions. The company also faces separate allegations of money laundering, which it denies.
Elon Musk has no shortage of enthusiasms. The world’s richest man is evangelical about electric cars, space travel, and Donald Trump. Another of his interests could have a significant impact on British politics.
The billionaire is reportedly considering donating a rumoured £80m to Nigel Farage’s Reform UK party, which would make him its biggest donor in history.
Musk watchers say that, like many who embraced Trump’s militant brand of right-wing populism, he was radicalized by frustration with Covid lockdowns.
Frustrated by the damage they did to manufacturing at Tesla’s car factories, he began spending more time online, testing the limits of the misinformation rules on Twitter, as it was then known.
Having helped propel Trump to the White House, he is now reportedly turning his attention to Britain.
Reform officials say they have no knowledge of any such spending plans, which Musk also denies. But if the Tesla and X owner were to back up his online criticism of Keir Starmer’s government with huge donations to the opposition party, it could be one of the most significant political interventions of this parliament.
Within two years of acquiring X (formerly Twitter) in October 2022, Musk had become a darling of the international far right, reviving previously suspended accounts under the banner of free speech. He went further, using his own account to amplify the messages of the far-right activist and convicted criminal Stephen Yaxley-Lennon, also known as Tommy Robinson.
By the time riots erupted in British cities this year, Musk was engaged in a full-scale onslaught against the Labour government, claiming “civil war is inevitable” and calling the prime minister “two-tier Keir”, echoing claims that police treated white far-right “protesters” more harshly than minorities.
But over the weekend came hints that Musk might match his words with action in the UK, when the Sunday Times reported that he may be about to donate £80m to Farage’s Reform UK, believing Farage could be the next British prime minister.
Musk denied the claims on Thursday, but Reform UK has remained noticeably silent on the matter, and Farage boasted last month that he was counting on the support of his “new friend Elon” at the next general election. A major donor to the party even told the Guardian bullishly this week: “Watch this space.”
Mr Musk’s wealth has increased by $133bn (£104.4bn) so far this year, reaching $362bn, built on his roughly 13% stake in Tesla and his ownership of a number of other companies.
The reasons behind Mr Musk’s apparent hostility towards Starmer and interest in Britain may be more complex.
Theories about why Musk has singled out the UK include the idea that he has come to view it as the epicentre of what he calls the ‘woke mind virus’, which he blames for his estranged daughter’s gender transition.
An even more outlandish theory, based partly on the timing of his posts on X, is that Musk’s tweets in response to breaking UK news are the product of his tendency to stay up late in the US.
“I don’t think you should tweet after 3am,” Musk told the BBC last year.
But one of the most obvious explanations relates to a clear conflict between Musk’s ultra-free-speech vision of X as the internet’s true “town square” and Labour’s mission to crack down on online hate speech.
Musk is “not accountable to anyone”, Peter Kyle, the UK science and technology secretary directly responsible for the government’s engagement with social media companies, complained in August. Also irritating Musk is the role played by Labour officials, including Starmer’s current chief of staff Morgan McSweeney, in the creation of the Center for Countering Digital Hate (CCDH), which has criticized Musk for stripping away guardrails against hate speech on Twitter. In October, Musk issued a “declaration of war” against the CCDH, calling it a “criminal organization” and saying he would “go after” it.
But there is no sign that such attempts to hold Musk to account will halt his move into British right-wing politics. Beyond his near-relentless torrent of tweets, it is far less certain how he will expand his footprint in British public life.
Musk could avoid strict rules on overseas donations by providing the funds through X’s British arm or by securing British citizenship. His father, Errol, claims Musk is eligible because his grandmother was British.
Musk may also be tempted into further dealings with British industry, and into deeper engagement with Starmer’s government.
He was last in the UK spotlight in November, when he attended the first AI Safety Summit at Bletchley Park, home of the Enigma codebreakers. People who encountered him at the summit said he was polite and talkative, was surrounded by a surprisingly small entourage, and appeared to handle much of the official email about the event himself.
This convinced one former government adviser that discussing AI policy is probably the best way for Labour to forge a beneficial relationship with Musk. The tech mogul, who founded his own AI company, xAI, has consistently warned about the dangers of unchecked technological development, telling the summit there was “a greater than zero chance that AI will kill us all”.
The former adviser said the creation of the UK AI Safety Institute, then the world’s first, by Rishi Sunak’s government could carry some weight with Musk.
“He cares about AI safety, and has done for years. A grown-up conversation with him about the UK’s world-leading work on national security risks from AI would be a good place to start,” the former adviser said, suggesting Rishi Sunak would make a good emissary, even if Starmer might find that politically awkward. “Musk doesn’t suffer fools, and Sunak really knows his stuff on AI.”
Another option would be to send Mr Kyle and the national security adviser Jonathan Powell, both of whom have impressed with their grasp of the brief. “It would show seriousness,” the former adviser said.
The originator of TikTok’s “demure” catchphrase is getting a rapid education in US trademark law.
Jools Lebron, a social media influencer with over 2 million followers on the platform, skyrocketed to fame by sharing guidance on being “very demure,” “very mindful,” and “very cutesy” at work and in everyday life. The trend picked up steam, leading to collaborations with major brands such as Verizon and Netflix featuring Lebron in sponsored content, and to big-name celebrities such as Jennifer Lopez, Olivia Rodrigo, and Gillian Anderson incorporating the phrase into their own videos.
Recently, Lebron, who is transgender, said the video’s success had transformed her life. But in an emotional video shared and later deleted on TikTok, she disclosed that she had failed to register the phrase as a trademark in time. According to TMZ, a man named Jefferson Bates from Washington had filed a trademark application for a slogan very similar to Lebron’s catchphrase, apparently attempting to capitalize on her success.
In response to this, Raluca Pop, the founder of Hive Social, a social media platform similar to Elon Musk’s X, stepped forward, stating that she had filed an application in California for the phrase “Very Demure Very Cutesy” as a gesture of solidarity with Lebron.
Pop further divulged that she took action after witnessing the other individual’s attempt to appropriate Lebron’s words. Not wanting to see the catchphrase exploited, Pop decided to secure the trademark herself and plans to transfer it to Lebron later to ensure she benefits from it.
If Bates’ trademark application is approved, Lebron could find herself unable to use her catchphrase on official merchandise or sponsored material in Washington without a federal trademark of her own. However, trademark lawyers are optimistic that Lebron will be able to defend her rights to “very demure, very mindful” against Bates’ claim.
Arie Elmanzer, an attorney and the founder of Influencer Legal, a law firm that assists content creators in resolving trademark and contract issues, remarked, “If I were her, I wouldn’t be worried. She was clearly the first to use it, and she should capitalize on it to strengthen her claim as the original creator.”
Elmanzer mentioned that Bates lodged his trademark application on an intent-to-use basis, asserting that he plans to use the mark. “He claims he’ll use the trademark, but he hasn’t done so yet. When Lebron objects, she can argue that he hasn’t used it but she has, backed by substantial evidence, giving her the advantage.”
Additionally, US trademark law grants rights to whoever first uses a mark, not necessarily the first to apply for it. “I have full confidence Lebron could mount a successful defense against this. While pathways exist to secure a trademark, it requires both time and financial investment.”
Kyona McGehee, an attorney and the founder of the law firm Trademark My Stuff, said that if she were Lebron’s counsel, she would promptly send Bates a cease and desist letter demanding that he withdraw his application, asserting Lebron’s full rights to the phrase, and outlining Lebron’s strategy for monetizing the trademark.
McGehee added, “Lebron must file for a federal trademark with the U.S. Patent and Trademark Office as that grants authority nationwide. Once Lebron secures federal registration, she won’t need anything further on the state level.”
Bates, who lives in Washington, appears to have no connection to Lebron, who is based in Chicago, or to her catchphrase. Both attorneys speculate that if a cease and desist letter fails to dissuade Bates, the parties could be embroiled in a lengthy legal dispute. In the meantime, they say, Lebron should exploit her catchphrase however she sees fit.
“Just because she lacks a trademark presently doesn’t mean brands will think twice about incorporating her phrases to capitalize on the current momentum,” McGehee commented.
Lebron, originally from Puerto Rico, is making the most of her newfound stardom: she has made sponsored “demure” content with the hair care brand K18, teased a potential collaboration with Netflix, and appeared on Jimmy Kimmel Live! when RuPaul guest-hosted the show.
However, her trademark dilemma underscores a recurring problem for content creators whose original work goes viral only to be leveraged by others for profit. In 2021, Black TikTok creators staged a strike in protest at the lack of credit for their work, highlighting disparities in recognition and treatment compared with white creators on the app.
“There’s a digital gap within minority communities,” McGehee noted. “It’s not a shortage of talent but rather a scarcity of information. Those with better resources and financial capabilities are better equipped to seize trend opportunities. At our firm, we advise clients: Act swiftly and file a trademark application when your work gains traction. In the legal realm, it’s more advantageous to take the offensive than play defense.”
Elon Musk has moved to withdraw his lawsuit against ChatGPT developer OpenAI and its CEO Sam Altman, which claimed the startup had abandoned its original goal of developing artificial intelligence for the betterment of humanity.
Musk filed the lawsuit in February, and the case had been moving slowly through a California court. Until Tuesday, Musk had shown no intention of dropping it; just a month ago, his legal team filed an objection that led the presiding judge to step aside.
Musk’s request to withdraw the suit gave no rationale. A San Francisco Superior Court judge had been set to hear arguments from Altman and OpenAI on Wednesday on their bid to have the lawsuit thrown out.
The dismissal brought an abrupt end to the legal dispute between two influential figures in the tech realm. Musk and Altman co-founded OpenAI in 2015, but Musk resigned from the board three years later following disagreements over the company’s governance and direction. Their relationship has become increasingly strained as Altman’s prominence has grown in recent years.
Musk’s lawsuit centered on his assertion that Altman and OpenAI breached the company’s “founding agreement” by collaborating with Microsoft, transforming OpenAI into a predominantly profit-driven entity, and withholding its technology from the public.
OpenAI and Altman contested the existence of any such agreement, citing messages that appeared to show Musk supporting a shift towards a for-profit model. They vehemently denied wrongdoing and published a blog post in March suggesting Musk’s motivations were rooted in jealousy, expressing regret that someone they had long admired had taken this course of action.
Musk’s lawsuit raised eyebrows among legal experts, who noted that certain claims, such as the assertion that OpenAI had already achieved artificial intelligence on a par with human intelligence, strained credibility.
The lawsuit filed by comedian George Carlin’s estate against a comedy podcast that allegedly used artificial intelligence to mimic his voice has been settled. This case marked one of the first legal battles in the United States regarding the use of deepfakes to replicate celebrity personalities.
The Dudesy podcast, created by former Mad TV comedian Will Sasso and writer Chad Kultgen, has agreed to remove all episodes from the internet and to stop using Carlin’s voice, likeness, or image in any future content. A representative for Sasso, Daniel Dell, declined to comment on the matter.
The settlement was praised by Mr. Carlin’s family and estate attorney, although the terms of the agreement were not disclosed.
Kelly Carlin, George Carlin’s daughter, expressed her satisfaction with the swift resolution and responsible actions taken by the defendants. She emphasized the need for safeguards against the misuse of AI technology, not only for artists but for everyone.
Following the release of the Dudesy podcast special titled “George Carlin: I’m Glad He’s Dead,” the estate filed a lawsuit citing violations of Carlin’s publicity and copyright rights, calling the special a disrespectful imitation of a renowned American artist’s work.
Despite initial claims that the podcast’s AI character, “Dudesy,” had generated the content, it was later clarified that the fake Carlin set was written entirely by Kultgen, not by AI. Carlin’s estate nonetheless highlighted the potential harm of such deepfake content circulating online.
The settlement coincides with growing concerns in the entertainment industry over artificial intelligence’s implications. Unauthorized use of generative AI tools and deepfake technology has prompted calls for stricter regulations to protect artists’ rights.
While the legal implications of AI-generated content remain uncertain, the case involving George Carlin’s estate underscores the need for safeguards against misuse of technology. The debate over whether AI-generated imitations qualify as parody under fair use laws is ongoing.
Josh Schiller, an attorney representing Carlin’s estate, emphasized the distinction between AI-generated impersonations and traditional forms of parody. The settlement sets a precedent for future cases involving the misuse of AI technology in creating counterfeit content.
OpenAI criticized Elon Musk’s lawsuit against the company in a legal response filed on Monday, calling the Tesla CEO’s claims “frivolous” and driven by “advancing commercial interests.”
The filing is a rebuttal to the lawsuit Musk brought against OpenAI earlier this month, which accused the company of reneging on its commitment to benefit humanity. OpenAI refuted many of the key allegations, denying the existence of what Musk called a “founding agreement.”
The filing emphasized the convoluted nature of Musk’s claims and their lack of factual basis, pointing out that no actual agreement is cited anywhere in the pleadings.
The conflict between OpenAI and Musk has been escalating since Musk’s lawsuit, intensifying the ongoing disagreement between Musk and OpenAI CEO Sam Altman. Although they co-founded the nonprofit in 2015, disputes over company direction and control led to Musk’s departure three years later. The relationship between Musk and Altman has soured as OpenAI gained recognition for products like ChatGPT and DALL-E.
Musk’s lawsuit accuses OpenAI of straying from its original mission as a nonprofit organization focused on sharing technology for humanity’s benefit, alleging that Altman received significant investments from Microsoft. OpenAI denied these claims in a recent blog post, stating that Musk supported the shift to a for-profit entity but wanted sole control.
OpenAI’s response painted Musk as envious and resentful of the company since starting his own commercial AI venture. The filing dismissed the notion of a founding agreement between Musk and Altman, labeling it as a “fiction” created by Musk.
According to the response, Musk’s motivation for suing OpenAI is to bolster his competitive position in the industry, rather than genuine concerns for human progress.
George Osborne has been hired by Coinbase, the US cryptocurrency exchange operator locked in an intense legal battle with US regulators.
The San Francisco-based company announced Wednesday that it has appointed the former UK chancellor to its advisory board and will “lean on his insight and experience as we grow Coinbase around the world.”
Osborne’s appointment comes as the Securities and Exchange Commission (SEC) is suing Coinbase, accusing it of acting as an intermediary in cryptocurrency transactions while circumventing disclosure requirements meant to protect investors. The company disputes the claim and is fighting it in court.
It is the latest in a series of highly paid jobs Osborne has taken since leaving government in 2016. At one point he held nine roles, ranging from newspaper editing and financial management to advising the government on levelling up the north of England.
Osborne gave up some of that work when he joined the boutique investment banking advisory firm Robey Warshaw as a partner in 2021. Last year he collected a share of the £28m paid to the firm’s partners. His salary at Coinbase has not been disclosed.
“There is a tremendous amount of exciting innovation happening in the financial industry right now,” Osborne said of his appointment. “Blockchain is transforming financial markets and online transactions. Coinbase is at the forefront of these developments. I look forward to working with the team as we build a new future for financial services.”
Faryar Shirzad, Chief Policy Officer at Coinbase, said: “We are delighted to welcome George to our Board at an exciting time for us both in the UK and globally.”
“George has extensive experience in business, journalism, and government. We look forward to relying on his insight and experience as we grow Coinbase around the world.”
Osborne’s other current roles include: chairman of the Northern Powerhouse Partnership; chair of the British Museum; “distinguished visiting scholar” at the Hoover Institution; visiting professor at Stanford University’s Graduate School of Business, where he teaches a course on decision-making; and chairman of Lingotto Investment Management, the $3 billion investment fund of the Agnelli family’s Exor group, which owns large stakes in Juventus FC, The Economist, and Ferrari.
Tesla is asking a court to pause a federal lawsuit accusing it of racial bias against Black workers at its Fremont assembly plant.
The electric car maker said in a filing Monday in San Francisco federal court that the US Equal Employment Opportunity Commission (EEOC) rushed to sue Tesla in September as part of “harmful interagency competition” with California’s civil rights agency, which sued the automaker last year on similar grounds.
The EEOC’s lawsuit alleges that Tesla violated federal law by tolerating widespread and ongoing racial harassment of Black employees and by retaliating against some employees who opposed the harassment. The agency’s filings detail reports that Black workers endured the casual use of slurs and epithets, including the N-word and variations such as “monkey,” “boy,” and “black bitch,” as well as racist graffiti calling for violence against Black people, among other forms of abuse.
The California Civil Rights Department’s complaint against Tesla includes similar examples of harassment reported by Black workers.
The California case is pending in state court and alleges violations of state anti-discrimination law, while the EEOC’s lawsuit alleges that Tesla violated federal laws prohibiting racial discrimination and harassment in the workplace.
Tesla also faces a proposed class action lawsuit filed by workers in 2017 alleging racial harassment.
The EEOC did not immediately respond to TechCrunch’s request for comment.
Tesla’s Monday filing argues that the federal court should decline to take up a third lawsuit until the existing cases are resolved. Lawyers for the automaker argued that litigating the three cases simultaneously would involve a “substantial duplication of effort,” risk “inconsistent court decisions,” and waste judicial resources.
Tesla is invoking what is known as Colorado River abstention, a legal doctrine that allows a federal court to decline to hear a case when a parallel case in state court deals with the same issues. The principle is meant to avoid duplicative litigation and promote more efficient justice.
The turf battle Tesla refers to in its filing is between the EEOC and the California Civil Rights Department (CRD), formerly the Department of Fair Employment and Housing. The filing argues that the two agencies have historically coordinated to avoid subjecting the same entity to duplicate lawsuits.
“That historic coordination and cooperation has disintegrated as agencies have become increasingly eager to file headline-grabbing complaints and report multimillion-dollar settlements,” the filing states.
Tesla has repeatedly denied wrongdoing in the various racial discrimination cases. Monday’s filing called the allegations “false” and accused the EEOC of rushing through a sham pre-litigation investigation.
The company is also appealing a $3.2 million award to a Black former contractor at the Fremont plant in a separate racial bias lawsuit.