Elon Musk’s xAI Files Lawsuit Against OpenAI Alleging Trade Secret Theft

Elon Musk’s artificial intelligence venture, xAI, has accused its competitor OpenAI of unlawfully appropriating trade secrets in a fresh lawsuit, the latest in Musk’s ongoing legal confrontations with his former associate, Sam Altman.

Filed on Wednesday in a California federal court, the lawsuit claims that OpenAI has engaged in a “deeply nasty pattern” of behavior, in which former xAI employees are allegedly hired to gain access to crucial trade secrets related to the AI chatbot Grok. xAI asserts that OpenAI is seeking unfair advantages in the fierce competition to advance AI technology.

According to the lawsuit, OpenAI “specifically targets individuals familiar with xAI’s core technologies and business strategies,” including advantages derived from xAI’s source code and data center initiatives, and induces those employees to violate their commitments to xAI through illicit means.


Musk and xAI have pursued multiple lawsuits against OpenAI over the years, stemming from a long-standing rivalry between Musk and Altman. Their relationship has soured significantly as Altman’s OpenAI has gained power within the tech industry, and Musk has sought to block the startup’s transition into a for-profit entity.

xAI’s recent complaint alleges that it uncovered a suspected campaign to sabotage the company while investigating trade secret theft allegations against former engineer Xuechen Li. Li has yet to respond to the lawsuit.

OpenAI has dismissed xAI’s claims, calling the lawsuit part of Musk’s ongoing harassment of the company.

A spokesperson for OpenAI stated, “This latest lawsuit represents yet another chapter in Musk’s unrelenting harassment. We maintain strict standards against breaches of confidentiality and have no interest in other labs’ trade secrets.”

The complaint asserts that, in addition to Li, OpenAI hired former xAI engineer Jimmy Fraiture and an unidentified senior finance official for the purpose of obtaining xAI’s trade secrets.

Additionally, the lawsuit includes screenshots of emails sent in July by Musk and xAI’s attorney Alex Spiro to a former xAI executive, accusing them of breaching their confidentiality obligations. The former employee, whose name was redacted in the screenshot, replied to Spiro with a brief email stating, “Suck my penis.”


Before becoming a legal adversary of OpenAI, Musk co-founded the organization with Altman in 2015, departing in 2018 after failing to secure control. Musk has accused Altman of breaching a “founding agreement” to develop AI for the benefit of humanity, arguing that OpenAI’s for-profit partnership with Microsoft undermined that principle. OpenAI and Altman contend that Musk had previously supported the for-profit model and is now acting out of jealousy.

Musk, entangled in various lawsuits as both a plaintiff and defendant, sued OpenAI and Apple last month, alleging anti-competitive practices in Apple’s promotion of ChatGPT within its App Store. That lawsuit claims his competitors are engaged in a “conspiracy to monopolize the smartphone and AI chatbot markets.”

Altman responded on X, Musk’s social platform, calling it a surprising argument given allegations that Musk manipulates X for his own benefit while harming rivals and individuals he disapproves of.

xAI’s new lawsuit exemplifies the high-stakes competition in Silicon Valley to recruit AI talent and secure market dominance in a rapidly growing multi-billion-dollar industry. Meta and other firms have aggressively recruited AI researchers and executives, aiming to gain a strategic edge in developing more advanced AI models.

Source: www.theguardian.com

Meta is facing a $2.4 billion (£1.8 billion) lawsuit alleging it fueled violence in Ethiopia.

A lawsuit seeking $2.4 billion (£1.8 billion) has been filed against Meta, accusing the owner of Facebook of contributing to violence in Ethiopia, after the Kenya High Court ruled that legal proceedings against the US technology company could proceed.

The suit, brought by two Ethiopians, demands that Facebook change its algorithm to stop promoting hateful material and incitement to violence, and increase the number of content moderators in Africa. It also seeks a $2.4 billion “restitution fund” for victims of hatred and violence incited on Facebook.


One of the plaintiffs is the son of Professor Meareg Amare Abrha, who was killed in Ethiopia in 2021, during the civil war, after his location and threatening posts about him were shared on Facebook. The other plaintiff, Fisseha Tekle, a former Amnesty International researcher, published a report on violence during the conflict in Tigray, northern Ethiopia, and also faced threats orchestrated through Facebook.

Meta argued that the Kenyan courts, in the country where Facebook’s content moderation for Ethiopia was based, lacked jurisdiction over the case. However, the Kenya High Court in Nairobi ruled that the case falls within its jurisdiction.

Abrham Meareg, son of Meareg, welcomed the court’s decision, emphasizing the importance of holding Meta accountable under Kenyan law. Tekle, unable to return to Ethiopia because of Meta’s insufficient safety measures, called for fundamental changes to content moderation across all platforms to prevent similar incidents.

The lawsuit, backed by nonprofit organizations like Foxglove and Amnesty International, also demands a formal apology from Meta for Meareg’s murder. Katiba Institute, a Kenya-based NGO focusing on constitutional matters, is the third plaintiff in the case.

A 2022 analysis found that Facebook allowed content inciting violence through hatred and misinformation to spread, despite knowing the repercussions in Tigray. Meta refuted the claims, citing investments in safety measures and efforts to combat hate speech and misinformation in Ethiopia.

In January, Meta announced plans to remove fact checkers and reduce censorship on its platform while continuing to address illegal and severe violations. Meta has not commented on the ongoing legal proceedings.

Source: www.theguardian.com

Mother files lawsuit against AI chatbot maker, alleging it drove her son to take his own life

The mother of a teenage boy who took his own life after becoming addicted to an artificial intelligence-powered chatbot has accused the chatbot’s maker of complicity in his death.

Megan Garcia filed a civil lawsuit Wednesday in Florida federal court against Character.ai, which makes customizable role-playing chatbots, alleging negligence, wrongful death, and deceptive trade practices. Her son Sewell Setzer III, 14, died in February in Orlando, Florida. Garcia said Setzer was using the chatbot day and night in the months leading up to his death.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, driving him to suicide,” Garcia said in a press release. “While our family is devastated by this tragedy, I want to warn families of the dangers of deceptive and addictive AI technology and demand accountability from Character.AI, its founders, and Google. I am raising my voice.”

Character.ai said in a statement: “We are heartbroken by the tragic loss of one of our users and would like to express our deepest condolences to the family. As a company, we take the safety of our users very seriously.” The company denied the lawsuit’s allegations.

Setzer became obsessed with a chatbot built by Character.ai that he nicknamed Daenerys Targaryen, after a character from Game of Thrones. According to Garcia’s complaint, the teenager would text the bot dozens of times a day from his cell phone and talk to it for hours alone in his room.

Garcia has accused Character.ai of creating a product that worsened her son’s depression, which she said was itself the result of overusing the company’s products. At one point, “Daenerys” asked Setzer if he had made any plans to take his own life, according to the complaint. Setzer admitted that he had, but said he didn’t know whether it would succeed or cause significant pain, the lawsuit alleges. The chatbot reportedly told him, “That’s no reason not to do it.”


Garcia wrote in a press release that Character.ai “intentionally designed, operated, and marketed a predatory AI chatbot to children, resulting in the death of a young person.” The lawsuit also names Google as a defendant, describing it as Character.ai’s parent company. The tech giant said in a statement that it only has a licensing agreement with Character.ai and does not hold any ownership interest in the startup.

Rick Claypool, research director at the consumer advocacy nonprofit Public Citizen, said tech companies developing AI chatbots cannot be trusted to regulate themselves and must be held fully accountable when they fail to limit harms.

“Where existing laws and regulations already apply, they must be strictly enforced,” he said in a statement. “Where there are gaps, Congress must act to put an end to companies that exploit young and vulnerable users with addictive and abusive chatbots.”

  • In the US, you can call or text the National Suicide Prevention Lifeline on 988, chat at 988lifeline.org, or text HOME to 741741 to reach a crisis counselor. In the UK, the youth suicide charity Papyrus can be contacted on 0800 068 4141 or by email at pat@papyrus-uk.org. In the UK and Ireland, Samaritans can be contacted on freephone 116 123 or by email at jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

Source: www.theguardian.com