Disney and OpenAI Forge Unexpected Partnership – What’s Next?

Disney’s iconic Mickey Mouse character is set to appear in AI-generated videos

Greg Balfour Evans / Alamy

The leading AI firm and the premier entertainment company have made an unexpected agreement, allowing AI-generated versions of beloved characters from movies, TV, and comics. This deal might indicate that major copyright holders realize they’re unable to control the influx of AI tools available today.

The Walt Disney Company has entered into a partnership with OpenAI that permits the AI company’s Sora video-generation and ChatGPT image-creation tools to use more than 200 of Disney’s best-known characters. Meanwhile, Disney is in a legal battle with another AI firm, Midjourney, over alleged copyright infringement, claiming Midjourney set out to “blatantly incorporate and copy famous characters from Disney and Universal” into its tools. That lawsuit suggested copyright owners were beginning to push back against misuse of their work by AI companies, but some analysts now read the new agreement as a sign that Disney has chosen to collaborate with AI firms rather than fight them.

As of now, characters including Mickey Mouse and Minnie Mouse, Simba and Mufasa from The Lion King, characters from Moana, and notable figures from Marvel and Lucasfilm’s Star Wars are available to OpenAI users. Reproducing celebrity voice performances, however, remains prohibited, such as Tom Hanks’s voicing of Woody in the Toy Story films, because those rights are held by the actors themselves.

Content creation using these characters will be available from early 2026, under a license agreement lasting three years.

According to statements released by both parties, the agreement was reached after OpenAI pledged to implement age-appropriate policies and “reasonable controls” to prevent underage users from accessing its products, alongside “robust controls to avert the generation of illegal or harmful content and respect for the rights of content owners regarding model output, as well as individuals’ rights to manage the use of their voice and likeness.”

In tandem with this, Disney has committed to a $1 billion equity investment in OpenAI, with an option to purchase additional shares in the rapidly expanding AI firm. Many characters presently available in OpenAI’s tools coincide with those mentioned in Disney’s lawsuit against Midjourney.

“This presents an exciting chance for the company to let audiences engage with our characters through perhaps the most advanced technologies and media platforms available today,” Disney CEO Bob Iger told CNBC. “OpenAI values and respects our creativity.” Iger also acknowledged the remarkable growth of AI. OpenAI CEO Sam Altman remarked, “People genuinely want to connect with Disney characters and express their creativity in novel ways.”

Despite the optimistic statements, the agreement took many by surprise. “I was astonished because Disney is recognized for fiercely safeguarding its brand,” noted Catherine Flick of Staffordshire University. The company has historically defended its characters’ intellectual property, including efforts to keep Mickey Mouse out of the public domain, says Rebecca Williams of the University of South Wales.

Conversely, some observers were less surprised by the partnership. “It was clear that Disney didn’t want to confront major tech firms like Google, OpenAI, and Meta, as they’ve often perceived generative AI as beneficial,” remarked Andres Guadamuz from the University of Sussex.

Guadamuz hypothesizes that the OpenAI partnership could significantly benefit Disney, suggesting, “I suspect they will utilize their vast catalog to adapt their models,” which might even play a role in the animation process. Reports indicate that Disney is poised to become a “key customer” for OpenAI tools.

Williams expresses concern that this partnership may indicate the broader trajectory of AI and copyright disputes. “This suggests that companies like Disney consider it impossible to halt the AI tide,” she notes. “Their approach appears to involve collaborating with such enterprises to derive profit from the utilization of their intellectual property, rather than allowing it to be misappropriated.”

However, Ty Martin from the licensing company Copyrightish believes that other AI firms will start to negotiate licensing agreements moving forward. “This is the direction we’re heading in 2026,” he asserts. “Licensing is vital for quality. AI platforms equipped with strong, recognizable IP are likely to weather downturns, while unlicensed or generic content risks being overlooked.”

Whether this represents a proactive initiative or a defensive tactic, the future of this initial three-year agreement is uncertain, and Flick believes it may soon be reevaluated. “There will be individuals who exploit the brand in ways that Disney may not typically endorse,” she stated.

Flick added, “This will serve as a test case for how this intellectual property is used. Personally, I suspect it will be a test of the limits of its usage, as [Disney] watches people put its intellectual property to potentially uncomfortable uses.”

Topics:

  • artificial intelligence
  • AI

Source: www.newscientist.com

Sam Altman Declares ‘Code Red’ for OpenAI Amidst ChatGPT’s Growing Competition

Sam Altman has issued a “code red” for OpenAI to enhance ChatGPT amid strong competition from other chatbots.

According to a recent report from the technology news site The Information, the CEO of the San Francisco-based startup told staff in an internal memo: “We are at a critical time for ChatGPT.”

OpenAI is feeling the pressure from the success of Gemini 3, Google’s latest AI model, and is allocating additional resources to improve ChatGPT.

Last month, Altman told employees that Gemini 3, which outperformed rivals on various benchmarks, could create “temporary economic headwinds” for the company. He added, “I expect the global atmosphere to remain stormy for some time.”

While OpenAI’s flagship product boasts 800 million weekly users, Google benefits from a profitable search business along with vast data and financial resources for its AI initiatives.




Sam Altman. Photo: Jose Luis Magaña/AP

Marc Benioff, CEO of the $220bn (£166bn) software company Salesforce, stated last month that he plans to switch to Gemini 3 and “never look back” after testing Google’s newest AI release.

“I’ve been using ChatGPT every day for three years. I just spent two hours on Gemini 3. I’m not going back. The leap is insane. Reasoning, speed, images, video… everything is clearer and faster. I feel like the world has changed again,” he remarked on X.

OpenAI is also scaling back its advertising efforts on ChatGPT as it prioritizes improvements to the chatbot, which recently celebrated its third anniversary.

Nick Turley, the head of ChatGPT, marked the anniversary with a post on X, committing to further innovations for the product.

“Our focus now is to further enhance ChatGPT’s capabilities, making it more intuitive and personal while continuing to grow and expand access worldwide. Thank you for an incredible three years. We have much work ahead!”

Despite lacking the cash flow of rivals such as Google and Meta, or Amazon, a major backer of competitor Anthropic, OpenAI has attracted substantial investment from firms including SoftBank and Microsoft. At its latest valuation, OpenAI reached $500 billion, up sharply from $157 billion last October.

OpenAI is currently operating at a loss but anticipates annual revenue to surpass $20 billion by year’s end, with Altman projecting that it will “grow to hundreds of billions.” The startup plans to allocate $1.4 trillion in data center costs over the next eight years to develop and maintain AI systems, aiming for rapid revenue growth.


“Considering the trends in AI usage and demand, we believe the risk of insufficient computing power at OpenAI is more significant and likely than the risk of excess computing power,” Altman stated last month.

Apple has also responded to rising competitive pressure in the sector by appointing a new vice president of AI: John Giannandrea will be succeeded by Microsoft executive Amar Subramanya.

The company has been slow to integrate AI features into its products, while competitors like Samsung have been quicker to upgrade their devices with AI capabilities.

Subramanya comes to Apple from Microsoft, where he last served as vice president of AI. He previously spent 16 years at Google, including as head of engineering for the Gemini assistant.

Earlier this year, Apple announced that enhancements to its voice assistant Siri would be postponed until 2026.

Source: www.theguardian.com

Sam Altman’s Gamble: Will OpenAI’s Aspirations Match the Industry’s Growing Expenses?

It’s a staggering $1.4 trillion (£1.1 trillion) dilemma. How can a startup like OpenAI, which is currently operating at a loss, afford such enormous expenses?

A positive answer to this question could significantly ease investor worries about potential bubble bursts in the burgeoning artificial intelligence sector, including the high valuations of tech companies and a global expenditure of $3 trillion on data centers.

The firm behind ChatGPT requires extensive computing resources (or “compute”) to train its models, generate responses, and develop ever more advanced systems. OpenAI’s computing obligations (the AI infrastructure, such as chips and servers, that supports its renowned chatbot) are projected to reach $1.4 trillion over the next eight years, dwarfing its annual revenue of about $13 billion.


Recently, this disparity has become a significant concern, fueling market unease about AI expenditure, and remarks from OpenAI’s leaders have done little to clarify matters.

OpenAI CEO Sam Altman first attempted to address the question during a somewhat awkward exchange with Brad Gerstner of Altimeter Capital, an investor in the company, which ended with Altman declaring that “enough is enough.”

On his podcast, Gerstner articulated that the company’s capacity to cover more than $1 trillion in computing expenses while yielding only $13 billion in annual revenue is an issue “plaguing the market.”

Altman countered by stating, “First of all, we’re generating more than that. Secondly, if you want to sell your stock, I can find you a buyer; I’ve had enough.”

Last week, OpenAI’s Chief Financial Officer Sarah Friar suggested that some of the company’s chip financing could be backstopped by the U.S. government.

“We’re exploring avenues where banks, private equity, and even governmental systems can help finance this,” she mentioned to the Wall Street Journal, noting that such assurances could significantly lower financing costs.

Was OpenAI, which recently declared itself a full-fledged for-profit entity valued at $500 billion, implying that AI companies should be treated like the banks of the late 2000s, too big to fail? The suggestion prompted a quick clarification from Friar, who denied on LinkedIn that OpenAI was seeking a federal backstop, while Altman sought to clarify his stance on X.

“We neither have nor want government guarantees for OpenAI data centers,” Altman wrote in an extensive post, adding that taxpayers shouldn’t be responsible for rescuing companies that make “poor business choices.” Perhaps, he suggested, the government should develop its own AI infrastructure and provide loan assurances to bolster chip manufacturing in the U.S.

Tech analyst Benedict Evans remarked that OpenAI is trying to compete with other major AI contenders supported by substantial existing profit models, including Meta, Google, and Microsoft, who are significant backers of OpenAI.

“OpenAI aims to match or surpass the infrastructure of dominant platform companies that have access to tens of billions to hundreds of billions of dollars in computing resources. However, they rely on cash flow from current operations to afford this, something OpenAI lacks, and they’re working to gain entry into that exclusive circle independently,” he noted.

Altman is confident that the projected $1.4 trillion can be offset by future demand for OpenAI products and ever-evolving models. Photo: Stephen Brashear/AP

There are also concerns about the circular nature of some of OpenAI’s computing agreements. For instance, Oracle is set to invest $300 billion in new data centers for OpenAI across Texas, New Mexico, Michigan, and Wisconsin, with OpenAI expected to pay almost the same amount back in fees for those centers. Under its agreement with Nvidia, a primary supplier of AI chips, OpenAI will purchase chips for cash, while Nvidia will invest in OpenAI as a non-controlling stakeholder.

Altman has also provided updates on revenue, stating that OpenAI anticipates exceeding $20 billion in annual revenue by the year’s end and reaching “hundreds of billions of dollars” by 2030.

He remarked: “Based on the trends we’re observing in AI utilization and the increasing demand for it, we believe that the risk of OpenAI lacking sufficient computing power is currently more pressing than the risk of having excess capacity.”


In essence, OpenAI is confident that it can recover its $1.4 trillion investment through anticipated demand for its products and continually enhancing models.

The company boasts 800 million weekly users and 1 million business customers. It derives income from consumer ChatGPT subscriptions, which account for about 75% of its revenue, as well as from selling enterprises a dedicated version of ChatGPT and letting them build its AI models into their own products.

A Silicon Valley investor, who has no financial ties to OpenAI, emphasizes that while the company has the potential for growth, its success hinges on various factors like model improvements, reducing operational costs, and minimizing the expenses of the chips powering these systems.

“We believe OpenAI can capitalize on its strong branding and ChatGPT’s popularity among consumers and businesses to create a suite of high-value, high-margin products. The crucial question is: how far can these products and revenue models scale, and how capable will the models ultimately prove to be?”

However, OpenAI currently operates in the red. The company has described reported loss figures, such as claims of an $8 billion loss in the first half of the year and roughly $12 billion in the third quarter, as inaccurate, yet it neither disputes that it is losing money nor provides alternative figures.

Altman is optimistic that revenue may stem from multiple sources, including heightened interest in paid ChatGPT versions, other organizations utilizing their data centers, and users purchasing the hardware device being crafted in collaboration with iPhone designer Sir Jony Ive. He also asserts that “substantial value” will emerge from scientific advancements in AI.

Ultimately, OpenAI is banking on needing $1.4 trillion in computing resources, a figure far from its current income, because it is convinced that demand and enhancements to its product lineup will yield returns.

Carl Benedikt Frey, author of “How Progress Ends” and an associate professor of AI at the University of Oxford, casts doubt on OpenAI’s aspirations, citing new evidence of a slowdown in AI adoption in the U.S. economy. The U.S. Census Bureau recently reported that AI adoption has declined among companies with 250 or more employees.

“Multiple indicators reveal that AI adoption has been decreasing in the U.S. since summer. While the underlying reasons remain unclear, this trend implies a shift where some users and businesses feel they aren’t receiving the anticipated value from AI thus far,” Frey stated, adding that achieving $100 billion in revenue by 2027 (as suggested by Altman) would be impossible without groundbreaking innovations from the company.

OpenAI says usage of its enterprise ChatGPT version has grown ninefold year-on-year, a sign of accelerating business adoption, with clients spanning sectors including banking, life sciences, and manufacturing.

Yet, Altman acknowledges that this venture might not be a guaranteed success.

“However, we could certainly be mistaken, and if that’s the case, the market will self-regulate, not the government.”

Source: www.theguardian.com

OpenAI Enters $38 Billion Cloud Computing Agreement with Amazon

OpenAI has secured a $38 billion (£29 billion) agreement to leverage Amazon’s infrastructure for its artificial intelligence offerings, part of a broader initiative exceeding $1 trillion in investments in computing resources.

This partnership with Amazon Web Services provides OpenAI with immediate access to AWS data centers and the Nvidia chips utilized within them.

Last week, OpenAI CEO Sam Altman stated that the company is committed to an investment of $1.4 trillion in AI infrastructure, highlighting concerns over the sustainability of the expanding data center ecosystem, which serves as the backbone of AI applications such as ChatGPT.

“To scale frontier AI, we need large-scale, dependable computing,” Altman remarked on Monday. “Our collaboration with AWS enhances the computing ecosystem that fuels this new era and makes sophisticated AI accessible to all.”

OpenAI indicated that this deal will provide access to hundreds of thousands of Nvidia graphics processors for training and deploying its AI models. Amazon plans to incorporate these chips into its data centers to enhance ChatGPT’s performance and develop OpenAI’s upcoming models.

AWS CEO Matt Garman reaffirmed that OpenAI is continuously pushing technological boundaries, with Amazon’s infrastructure forming the foundation of these ambitions.

OpenAI aims to develop 30 gigawatts of computing capacity, enough to supply power to approximately 25 million homes in the U.S.

Recently, OpenAI declared its transformation into a for-profit entity as part of a restructuring effort that values the startup at $500 billion. Microsoft, a long-time supporter, will hold roughly 27% of the new commercial organization.

The race for computing resources among AI firms has sparked worries among market analysts regarding financing methods. The Financial Times reported that OpenAI’s annual revenue is approximately $13 billion, a figure starkly contrasted by its $1.4 trillion infrastructure expenditures. Other data center deals OpenAI has entered include a massive $300 billion agreement with Oracle.

During a podcast with Microsoft CEO Satya Nadella, Altman addressed concerns regarding spending, stating “enough is enough” when prompted by host Brad Gerstner about the disparity between OpenAI’s revenue and its infrastructure costs.

Altman claimed that OpenAI generates revenue “well above” the reported $13 billion but did not disclose specific figures. He added: “Enough is enough…I believe there are many who wish to invest in OpenAI shares.”

Analysts at Morgan Stanley have forecast that global data center investment will approach $3 trillion from now until 2028, with half of this spending expected to come from major U.S. tech firms, while the remainder will be sourced from private credit and other avenues. The private credit market is an expanding segment of the shadow banking industry, raising concerns for regulators such as the Bank of England.


Source: www.theguardian.com

OpenAI Said to Be Preparing for Record $1 Trillion Stock Market Debut

OpenAI is said to be gearing up for a stock market debut, potentially becoming the largest initial public offering (IPO) ever, with a valuation of $1 trillion (£760 billion) expected as soon as next year.

The creator of the popular AI chatbot ChatGPT is contemplating an IPO filing in the latter half of 2026, as reported by Reuters, based on information from sources close to the matter. The company aims to raise at least $60 billion.

A stock market flotation would give OpenAI an additional avenue for funding, supporting CEO Sam Altman’s vision of investing trillions of dollars in the data centers and other infrastructure needed to accelerate chatbot development.

During a livestream on Tuesday, Altman reportedly stated: “Given our future funding needs, this is the most likely path for us.”

An OpenAI representative said: “An IPO is not our focus, so we could not possibly have set a date. We are focused on building a durable business and advancing our mission so everyone benefits from AGI.”

AGI, or artificial general intelligence, is defined by OpenAI as “a highly autonomous system that surpasses humans in performing the most economically valuable tasks.”


Founded in 2015 as a nonprofit, OpenAI aims to securely develop AGI for the benefit of humanity. Recently, the company underwent a major restructuring, transitioning its core operations to a for-profit model. Although still overseen by a nonprofit, this change facilitates capital raising and prepares the ground for an IPO.

As it stands, Microsoft holds approximately a 27% stake in the commercial entity, valuing OpenAI at $500 billion under the terms of their deal. Following the restructuring announcement, Microsoft’s valuation reached over $4 trillion for the first time.

The technology news outlet The Information reported that OpenAI recorded revenues of $4.3 billion and an operating loss of $7.8 billion in the first half of this year.

Such enormous valuations do not ease concerns that the AI sector may be in a bubble. Bank of England officials have recently warned that tech stocks driven by the AI surge face heightened risk, noting market vulnerability if expectations about AI impact wane.

OpenAI’s Chief Financial Officer Sarah Friar has reportedly informed colleagues that the company is targeting a public offering in 2027, although some advisers speculate it could occur in the year prior, as reported by Reuters.

Source: www.theguardian.com

OpenAI Finalizes Transition to Commercial Enterprise Following Extended Legal Proceedings

OpenAI declared on Tuesday that it has officially transformed its core business into a for-profit entity, concluding a lengthy and challenging legal dispute.

Delaware Attorney General Kathy Jennings, an essential regulatory figure, announced her approval of a plan for the startup, initially established as a nonprofit in 2015, to transition into a public benefit corporation. This type of for-profit organization highlights a commitment to societal betterment.

The company also revealed that it has restructured its ownership and inked a new agreement with its long-time supporter, Microsoft. The arrangement will provide the tech giant with about a 27% stake in OpenAI’s new commercial venture, altering some specifics of their close partnership. According to the deal, OpenAI is valued at $500 billion, making Microsoft’s stake worth over $100 billion.


This restructuring allows the creators of ChatGPT to raise funds more easily and profit from AI technology while remaining under the nominal oversight of the original nonprofit.

Jennings stated in a release that she does not oppose the proposal, ending more than a year of negotiations and announcements over how OpenAI will be governed and how much influence commercial investors and its nonprofit board will exert over the organization’s technology. The attorneys general of Delaware, where OpenAI is incorporated, and California, where it is headquartered, had both indicated they were investigating the proposed changes.

OpenAI confirmed it completed the reorganization “after almost a year of productive discussions” with authorities in both states.

“OpenAI has finalized a recapitalization and streamlined its corporate framework,” Bret Taylor, chairman of the OpenAI board, stated in a blog post on Tuesday.

Elon Musk, a co-founder of OpenAI and former ally of Altman, had contested the transition through a lawsuit, which he dropped and later refiled, and made an unexpected bid of nearly $100 billion to take control of the startup.


“The nonprofit will continue to oversee the for-profit corporation and now has direct access to essential resources before AGI arrives,” Taylor noted.

AGI, or artificial general intelligence, is defined by OpenAI as “a highly autonomous system that surpasses humans at the most economically significant tasks.” OpenAI was founded as a nonprofit in 2015 with the goal of safely creating AGI for the betterment of humanity.

Previously, OpenAI had said its own board would determine when AGI had been achieved, a declaration that would effectively end its partnership with Microsoft. Now, “Once AGI is announced by OpenAI, this declaration will be confirmed by an independent panel of experts,” and Microsoft’s rights to OpenAI’s proprietary research methodologies will “persist until the panel of experts confirms the AGI or until 2030, whichever occurs first.” Microsoft also retains commercial rights to certain “post-AGI” products from OpenAI.

Microsoft also released a related statement on Tuesday regarding the revised partnership, but opted not to provide additional comments.

The nonprofit will be rebranded as the OpenAI Foundation, and Taylor mentioned it will allocate $25 billion in grants for health and disease treatment and to safeguard against AI-related cybersecurity threats. He did not specify the timeline for disbursing these funds.

Robert Weissman, co-president of the nonprofit watchdog Public Citizen, remarked that this setup does not ensure the nonprofit’s autonomy, comparing it to corporate foundations that serve the interests of their for-profit parents.

Weissman stated that while the nonprofit’s board may formally retain oversight, “control is illusory because there is no evidence that the nonprofit has enforced its values on the for-profit.”

Source: www.theguardian.com

ChatGPT Atlas: OpenAI Introduces Chatbot-Focused Web Browser | Tech News

On Tuesday, OpenAI unveiled an AI-driven web browser centered around its renowned chatbot.

“Introducing our new browser, ChatGPT Atlas,” the company stated in a post announcing the launch.

This browser aims to enhance the web experience with a ChatGPT sidebar, enabling users to ask questions and engage with various features of each site they explore, as demonstrated in a video shared with the announcement. Atlas is currently accessible worldwide on Apple’s macOS and will soon be released for Windows, iOS, and Android, according to OpenAI’s announcement.

With the ChatGPT sidebar, users can request “content summaries, product comparisons, or data analysis from any website,” according to the company’s website. The company has also begun previewing its virtual assistant, dubbed “Agent Mode,” for select premium users. Agent Mode allows users to instruct ChatGPT to execute a task “from start to finish,” such as “travel research and shopping.”

While browsing, users can also edit and modify highlighted text within ChatGPT. An example on the site features an email with highlighted text along with a recommendation prompt: “Please make this sound more professional.”

OpenAI emphasizes that users maintain complete control over their privacy settings: “You decide what is remembered about you, how your data is utilized, and the privacy settings that govern your browsing.” Currently, Atlas users are automatically opted out of having their browsing data used to train ChatGPT models. Additionally, as with other browsers, users can erase their browsing history. However, while the Atlas browser may not store an exact duplicate of searched content, ChatGPT will “retain facts and insights from your browsing” if users opt into “browser memory.” It remains unclear whether the company will share browsing information with third parties.


OpenAI is not the first to introduce an AI-enhanced web browser. Companies like Google have incorporated their Gemini AI models into Chrome, while others such as Perplexity AI are also launching AI-driven browsers. Following the OpenAI announcement, Google’s stock fell 4%, reflecting investor concerns regarding potential threats to its flagship browser, Chrome, the most widely used browser globally.

Source: www.theguardian.com

Concerns Rise Over OpenAI Sora’s Depictions of the Dead: Families and Legal Experts React

That evening, I was scrolling through dating apps when a profile caught my eye: “Henry VIII, 34 years old, King of England, non-monogamous.” Before I knew it, I found myself in a candlelit bar sharing a martini with the most notorious dater of the 16th century.

But the night wasn’t finished yet. Next, we took turns DJing alongside Princess Diana. “The crowd is primed for the drop!” she shouted over the music as she placed her headphones on. As I chilled in the cold waiting for Black Friday deals, Karl Marx philosophized about why 60% off is so irresistible.

In Sora 2, if you can imagine it—even if you think you shouldn’t—you can likely see it. Launched in October as an invite-only app in the US and Canada, OpenAI’s video app hit 1 million downloads within just five days, surpassing the initial success of ChatGPT.




AI-generated deepfake video features portraits of Henry VIII and Kobe Bryant

While Sora isn’t the only AI tool producing videos from text, its popularity stems from two major factors. First, it simplifies the process for users to star in their own deepfake videos. After entering a prompt, a 10-second clip is generated in minutes, which can be shared on Sora’s TikTok-style platform or exported elsewhere. Unlike low-quality, mass-produced “AI slop” that clouds the internet, these videos exhibit unexpectedly high production quality.


The second reason for Sora’s popularity is its ability to generate portraits of celebrities, athletes, and politicians—provided they are deceased. Living individuals must give consent for their likenesses to be used, but “historical figures” seem to be defined as famous people who are no longer alive.

This is how most users have engaged with the app since its launch. The main feed is a surreal parade of historical figures: Adolf Hitler in a shampoo commercial, Queen Elizabeth II stumbling off a pub table while cursing, Abraham Lincoln on daytime TV being told “You are not the father,” and the Reverend Martin Luther King Jr. declaring his dream that all drinks be complimentary before abruptly grabbing a cold drink and cursing.

However, not everyone is amused.

“It’s profoundly disrespectful to see the image of my father, who devoted his life to truth, used in such an insensitive manner,” Ilyasah Shabazz, a daughter of Malcolm X, told the Washington Post. She was just two when her father was assassinated. Now, Sora clips show the civil rights leader engaged in crude humor.

Zelda Williams, the daughter of actor Robin Williams, urged people in an Instagram post to “stop” sending her AI videos of her father. “It’s silly and a waste of energy. Trust me, that’s not what he would have wanted,” she wrote. Before his death in 2014, Williams took legal steps to prevent his likeness from being used in advertising or digitally inserted into films until 2039. “Seeing my father’s legacy turned into something grotesque is infuriating,” she added.

Kelly Carlin, daughter of the late comedian George Carlin, described a video featuring his likeness as “overwhelming and depressing” in a Bluesky post.

Recent fatalities are also being represented. The app is filled with clips depicting Stephen Hawking enduring a “#powerslap” that knocks his wheelchair over, Kobe Bryant dunking over an elderly woman while yelling about something stuck inside him, and Amy Winehouse wandering the streets of Manhattan with mascara streaming down her face.

Those who have passed in the last two years (Ozzy Osbourne, Matthew Perry, Liam Payne) seem to be missing, indicating they may fall into a different category.

Each time these “puppetmasters” revive the dead, they risk reshaping the narrative of history, according to AI expert Henry Ajder. “People are worried that a world filled with this type of content could distort how these individuals are remembered,” he explains.

Sora’s algorithm favors content that shocks. One of the trending videos features Dr. King making monkey noises during his iconic “I Have a Dream” speech. Another depicts Kobe Bryant reenacting the tragic helicopter crash that claimed both his and his daughter’s lives.

While actors and comedians sometimes portray characters after death, legal protections are stricter. Film studios bear the responsibility for their content. OpenAI does not assume the same liability for what appears on Sora. In certain states, consent from the estate administrator is required to feature an individual for commercial usage.

“We couldn’t resurrect Christopher Lee for a horror movie, so why can OpenAI resurrect him for countless short films?” questions James Grimmelmann, an internet law expert at Cornell University and Cornell Tech.

OpenAI’s decision to place deceased personas into the public sphere raises distressing questions about the rights of the departed in the era of generative AI.

It may feel unsettling to have the likeness of a prominent figure persistently haunting Sora, but is it legal? Perspectives vary.

Major legal questions about the internet remain unanswered, chief among them whether AI firms are protected under Section 230, which shields platforms from liability for third-party content. If OpenAI qualifies for Section 230 immunity, it cannot be sued over content its users create on Sora.

“However, without federal legislation on this front, uncertainties will linger until the Supreme Court takes up the issue, which might stretch over the next two to four years,” notes Ashkhen Kazaryan, a specialist in First Amendment and technology policy.




OpenAI CEO Sam Altman speaks at Snowflake Summit 2025 on June 2 in San Francisco, California. He is one of the living individuals who permitted Sora to utilize his likeness. Photo: Justin Sullivan/Getty Images

In the interim, OpenAI must circumvent legal challenges by obtaining consent from living individuals. US defamation laws protect living people from defamatory statements that could damage their reputation. Many states have right-of-publicity laws that prevent using someone’s voice, persona, or likeness for “commercial” or “misleading” reasons without their approval.

Allowing the deceased to be represented this way is a way for the company to “test the waters,” Kazaryan suggests.

Though the deceased lack defamation protections, posthumous publicity rights exist in states like New York, California, and Tennessee. Navigating these laws in the context of AI remains a “gray area,” as there is no established case law, according to Grimmelman.

For a legal claim to succeed, estates will need to prove OpenAI’s responsibility, potentially by arguing that the platform encourages the creation of content involving deceased individuals.

Grimmelmann points out that Sora’s homepage features videos that actively promote this style of content. If the app utilizes large datasets of historical material, plaintiffs could argue it predisposes users to recreate such figures.

Conversely, OpenAI might argue that Sora is primarily for entertainment. Each video is marked with a watermark to prevent it from being misleading or classified as commercial content.

Generative AI researcher Bo Bergstedt emphasizes that most users are merely experimenting, not looking to profit.

“People engage with it as a form of entertainment, finding ridiculous content to collect likes,” he states. Even if this may distress families, it might abide by advertising regulations.

However, if a Sora user creates well-received clips featuring historical figures, builds a following, and begins monetizing, they could face legal repercussions. Alexios Mantzarlis, director of Cornell Tech’s Security, Trust, and Safety Initiative, warns that the “financial implications of AI” may include indirect profit from these platforms. Sora’s rising “AI influencers” could encounter lawsuits from estates if they gain financially from the deceased.

“Whack-a-Mole” Approach

In response to the growing criticism, OpenAI recently announced that representatives of “recently deceased” celebrities can request their likenesses be removed from Sora’s videos.

“While there’s a significant interest in free expression depicting historical figures, we believe public figures and their families should control how their likenesses are represented,” a spokesperson for OpenAI stated.


The parameters for “recent” have yet to be clarified, and OpenAI hasn’t provided details on how these requests will be managed. The Guardian received no immediate comment from the company.

The copyright-free-for-all strategy faced challenges after controversial content, such as “Nazi SpongeBob SquarePants,” circulated online and the Motion Picture Association of America accused OpenAI of copyright infringement. A week post-launch, the company transitioned to an opt-in model for rights holders.

Grimmelmann hopes for a similar adaptation in how depictions of the deceased are handled. “Expecting individuals to opt out may not be feasible; it’s a harsh expectation. If I think that way, so will others, including judges,” he remarks.

Bergstedt likens this to a “whack-a-mole” methodology for safeguards, likely to persist until federal courts establish AI liability standards.

According to Ajder, the Sora debate hints at a broader question we will all confront: who controls our likenesses in the age of generative AI?

“It’s a troubling scenario if people accept they can be used and exploited in AI-generated hyper-realistic content.”

Source: www.theguardian.com

OpenAI Breaks with Tech Council of Australia Amidst Controversial Copyright Debate

OpenAI has broken ranks with the Tech Council of Australia over copyright restrictions, asserting that its AI models “will be utilized in Australia regardless.”

Chris Lehane, chief global affairs officer of the company behind ChatGPT, delivered a keynote address at SXSW Sydney on Friday. He discussed the geopolitics surrounding AI, Australia’s technological future, and the ongoing global debate about using copyrighted materials to train large language models.

Scott Farquhar, CEO of the Tech Council and co-founder of Atlassian, previously remarked that Australia’s copyright laws are “extremely detrimental to companies investing in Australia.”


In August, it was disclosed that the Productivity Commission was evaluating whether tech companies should receive exemptions from copyright regulations that hinder the mining of text and data for training AI models.

However, when asked about the risk of Australia losing investment in AI development and data centers if it doesn’t relax its copyright laws to allow fair use, Lehane told the audience:

“No…we’re going to Australia regardless.”

Lehane stated that countries typically adopt one of two stances regarding copyright restrictions and AI. One stance aligns with a US-style fair use copyright model, promoting the development of “frontier” (advanced, large-scale) AI; the other maintains traditional copyright positions and restricts the scope of AI.


“We plan to collaborate with both types of countries. We aim to partner with those wanting to develop substantial frontier models and robust ecosystems or those with a more limited AI range,” he expressed. “We are committed to working with them in any context.”

When questioned about Sora 2 (OpenAI’s latest video generation model) being launched and monetized before copyright usage was addressed, he stated that the technology benefits “everyone.”

“This is the essence of technological evolution: innovations emerge, and society adapts,” he commented. “We are a nonprofit organization, dedicated to creating AI that serves everyone, much like how people accessed libraries for knowledge generations ago.”

Separately, OpenAI on Friday disabled the ability to produce videos featuring the likeness of Martin Luther King Jr after complaints from his family about the technology.

Lehane also mentioned that the competition between China and the United States in shaping the future of global AI is “very real” and that their values are fundamentally different.


“We don’t see this as a battle, but rather a competition, with significant stakes involved,” he stated, adding that the U.S.-led frontier model “will be founded on democratic values,” while China’s frontier model is likely to be rooted in authoritarian principles.

“Ultimately, one of the two will emerge as the player that supports the global community,” he added.

When asked if he had confidence in the U.S. maintaining its democratic status, he responded: “As mentioned by others, democracy can be a convoluted process, but the United States has historically shown the ability to navigate this effectively.”

He also stated that the U.S. and its allies, including Australia, need to add roughly a gigawatt of new energy capacity every week to build the infrastructure necessary for sustaining a “democratic lead” in AI, while Australia has the opportunity to create its own frontier AI.

He emphasized that “Australia holds a very unique position” with a vast AI user base, around 30,000 developers, abundant talent, a quickly expanding renewable energy sector, fiber optic connectivity with Asia, and its status as a Five Eyes nation.




Source: www.theguardian.com

OpenAI Empowers Verified Adults to Create Erotic Content with ChatGPT | Artificial Intelligence (AI)

On Tuesday, OpenAI revealed plans to relax restrictions on its ChatGPT chatbot, enabling verified adult users to access erotic content in line with the company’s principle of “treating adult users like adults.”

Upcoming changes include an updated version of ChatGPT that will permit users to personalize their AI assistant’s persona. Options will feature more human-like dialogue, increased emoji use, and behaviors akin to a friend. The most significant adjustment is set for December, when OpenAI intends to implement more extensive age restrictions allowing erotic content for verified adults. Details on age verification methods or other safeguards for adult content have not been disclosed yet.

In September, OpenAI introduced a specialized ChatGPT experience for users under 18, automatically directing them to age-appropriate content while blocking graphics and sexual material.

Additionally, the company is working on behavior-based age prediction technology to estimate if a user is over or under 18 based on their interactions with ChatGPT.


These enhanced safety measures follow the suicide of California teenager Adam Raine this year. His parents filed a lawsuit in August claiming that ChatGPT offered explicit guidance on killing himself. Altman stated that within just two months, the company has been able to “mitigate serious mental health issues.”

The US Federal Trade Commission has also initiated an investigation into various technology firms, including OpenAI, regarding potential dangers that AI chatbots may pose to children and adolescents.


“Considering the gravity of the situation, we aimed to get this right,” Altman stated on Tuesday, emphasizing that OpenAI’s new safety measures enable the company to relax restrictions while effectively addressing serious mental health concerns.

Source: www.theguardian.com

OpenAI Guarantees Enhanced “Granular Control” for Copyright Holders Following Sora 2’s Video Creations of Popular Characters

OpenAI is dedicated to providing copyright holders with “greater control” over character generation following the recent release of the Sora 2 app, which has overwhelmed platforms with videos featuring copyrighted characters.

Sora 2, an AI-driven video creation tool, was launched last week by invitation only. This application enables users to produce short videos from text prompts. A review by the Guardian of the AI-generated content revealed instances of copyrighted characters from shows like SpongeBob SquarePants, South Park, Pokémon, and Rick and Morty.

According to the Wall Street Journal, prior to releasing Sora 2, OpenAI informed talent agencies and studios that they would need to opt out if they wished to prevent the unlicensed use of their material by video generators.

OpenAI told the Guardian that content owners can use a “copyright dispute form” to report copyright violations, though individual artists and studios cannot opt out wholesale, said Varun Shetty, OpenAI’s head of media partnerships.



On Saturday, OpenAI CEO Sam Altman stated in a blog post that the company has received “feedback” from users, rights holders, and various groups, leading to modifications.

He mentioned that rights holders will gain more “detailed control” as well as enhanced options regarding how their likenesses can be used within the application.

“We’ve heard from numerous rights holders who are thrilled about this new form of ‘interactive fan fiction’ and are confident that this level of engagement will be beneficial for them; however, we want to ensure that they can specify the manner in which the characters are utilized.”


Altman noted that OpenAI will “work with rights holders to determine the way forward,” adding that certain “generation edge cases” will undergo scrutiny within the platform’s guidelines.

He emphasized that the company needs to find a sustainable revenue model from video generation and that user engagement is exceeding initial expectations. This could lead to compensating rights holders for the authorized use of their characters.

“Finding the exact model will take some trial and error, but we plan to start soon,” Altman said. “Our aim is for this new type of engagement to be even more valuable than revenue sharing, and we hope it’s worth it for everyone involved.”

He remarked on the rapid evolution of the project, reminiscent of the early days of ChatGPT, acknowledging both successful decisions and mistakes made along the way.

Source: www.theguardian.com

OpenAI Secures Billion-Dollar Chip Partnership with AMD Technology

On Monday, OpenAI and semiconductor manufacturer AMD revealed that they have entered into a multi-billion dollar agreement concerning chips, which will allow the creators of ChatGPT to purchase significant equity stakes in the chipmaker.

This arrangement provides OpenAI the chance to acquire up to 10% of AMD, reflecting substantial confidence in the company’s AI chips and software. Following the announcement, AMD’s stock soared by more than 30%, adding tens of billions of dollars to its market capitalization.

“We are excited to announce our dedication to delivering a variety of services to our clientele,” stated Forrest Norrod, AMD’s executive vice president.

These recent investment commitments underscore OpenAI’s significance, as the increasing demands of the AI sector drive companies to advance AI technologies that rival or surpass human intelligence. OpenAI’s CEO, Sam Altman, pointed out that the primary limitation on the company’s expansion is access to computing resources, particularly extensive data centers equipped with advanced semiconductor chips. Last week, Nvidia declared a $100 billion investment in OpenAI, further solidifying the collaboration between these leading AI firms.


The agreement announced on Monday encompasses the deployment of hundreds of thousands of AMD AI chips or graphics processing units (GPUs) totaling 6 gigawatts over several years, starting in the latter half of 2026. AMD confirmed that OpenAI will establish a 1 Gigawatt facility utilizing the MI450 series chips beginning next year.

Additionally, AMD issued OpenAI a warrant to purchase up to 160 million AMD shares at one cent each, vesting in tranches as chip-deployment milestones are met.

AMD’s executives anticipate that this transaction will generate tens of billions of dollars in annual revenue. Counting the contract’s expected ripple effects, AMD projects more than $100 billion in new revenue over four years from OpenAI and other customers.

“This marks a trailblazing initiative in an industry poised to significantly influence broader ecosystems, attracting others to join,” remarked Matt Hein, AMD’s Head of Strategy.


This agreement with AMD is expected to significantly bolster OpenAI’s infrastructure to fulfill its operational requirements, Altman confirmed in a statement.

However, it remains unclear how OpenAI plans to finance this substantial deal with AMD. According to media reports, the company generated approximately $4.3 billion in revenue in the first half of 2025 while burning through $2.5 billion in cash.

Source: www.theguardian.com

OpenAI Video App Sora Faces Backlash Over Violent and Racist Content: “The Guardrails Are Not Real”

On Tuesday, OpenAI unveiled its latest version of AI-driven video generators, incorporating a social feed that enables users to share lifelike videos.

However, mere hours after Sora 2’s release, many videos shared on its feed and on older social platforms depicted copyrighted characters in troubling contexts, featuring graphic violence and racist scenes. OpenAI’s usage policies for its services, including Sora and ChatGPT, explicitly ban content that “promotes violence” or otherwise “causes harm.”

According to prompts and clips reviewed by the Guardian, Sora generated several videos illustrating the horrors of bombings and mass shootings, with panicked individuals fleeing university campuses and crowded locations like Grand Central Station in New York. Other prompts created scenes reminiscent of war zones in Gaza and Myanmar, where AI-generated children described their homes being torched. One video, labeled as “Ethiopian Footage Civil War News Style,” showcased a bulletproof-vested reporter speaking into a microphone about government and rebel gunfire in civilian areas. Another clip, prompted by “Charlottesville Rally,” depicted Black protesters in gas masks, helmets, and goggles screaming in distress.

The app is currently accessible by invitation only and has not been released to the general public. Yet within three days of its restricted debut, it skyrocketed to the top of Apple’s App Store, surpassing even OpenAI’s own ChatGPT.

“So far, it’s been amazing to witness what collective human creativity can achieve,” stated Sora’s director Bill Peebles in a Friday post on X. “We will be sending out more invitation codes soon, I assure you!”

The Sora app provides a glimpse into a future where distinguishing truth from fiction may become increasingly challenging. Misinformation researchers warn that such realistic content could obscure reality and create scenarios wherein these AI-generated videos may be employed for fraud, harassment, and extortion.

“It doesn’t hold to historical truth and is far removed from reality,” remarked Joan Donovan, an assistant professor at Boston University focusing on media manipulation and misinformation. “When malicious individuals gain access to these tools, they use them for hate, harassment, and incitement.”

Slop Engine or “ChatGPT for Creativity”?

OpenAI CEO Sam Altman described the launch of Sora 2 as “truly remarkable,” and in a blog post stated it “feels like the ‘ChatGPT for creativity’ moment for many of us, embodying a sense of fun and novelty.”

Altman acknowledged the addictive tendencies of social media and its links to bullying, noting that AI video generation can produce “slop”: repetitive, low-quality videos that might overwhelm the platform.

“The team was very careful and considerate in trying to create an enjoyable product that avoids falling into that pitfall,” Altman wrote. He stated that OpenAI has taken steps to prevent misuse of someone’s likeness and to guard against illegal content. For instance, the app declined to generate a video featuring Donald Trump and Vladimir Putin sharing cotton candy.

Nonetheless, within three days of Sora’s launch, numerous videos had already disseminated online. Washington Post reporter Drew Harwell created a video depicting Altman as a second world war military leader and reported generating “ragebait, fake crime footage, and women spattered with white goo.”

Sora’s feeds include numerous videos featuring copyrighted characters from series such as SpongeBob SquarePants, South Park, and Rick and Morty. The app seamlessly generated videos of Pikachu imposing tariffs on China, pilfering roses from the White House Rose Garden, and joining a Black Lives Matter protest alongside SpongeBob. One video documented by 404 Media showed SpongeBob dressed as Adolf Hitler.

Neither Paramount, Warner Bros, nor Pokémon Co responded to requests for comment.

David Karpf, an associate professor at George Washington University’s School of Media and Public Affairs, said he had seen a video of copyrighted characters promoting a cryptocurrency scam, arguing that OpenAI’s guardrails for Sora are clearly not working.


“The guardrails aren’t real if people are creating copyrighted characters promoting fraudulent schemes,” said Karpf. “In 2022, tech companies made significant efforts to hire content moderators; by 2025, it appears they have chosen to disregard those responsibilities.”

Just before the release of Sora 2, OpenAI contacted talent agencies and studios to inform them they could opt out if they wished to prevent the replication of their copyrighted materials by the video generator, the Wall Street Journal reported.

OpenAI informed the Guardian that content owners can report copyright violations through the “copyright dispute form,” but individual artists and studios cannot opt out comprehensively, said Varun Shetty, OpenAI’s head of media partnerships.

Emily Bender, a professor at the University of Washington and author of the book “The AI Con,” expressed that Sora creates a perilous environment where “distinguishing reliable sources is challenging, and trust wanes once one is found.”

“Whether they generate text, images, or videos, synthetic media machines represent a tragic facet of the information ecosystem,” she said. “Their output interacts with technological and social structures in ways that weaken and erode trust.”

Nick Robins-Early contributed to this report

Source: www.theguardian.com

Elon Musk’s xAI Files Lawsuit Against OpenAI Alleging Trade Secret Theft | Technology

Elon Musk’s artificial intelligence venture, xAI, has accused its competitor OpenAI of unlawfully appropriating trade secrets in a fresh lawsuit, the latest in Musk’s ongoing legal confrontations with his former associate, Sam Altman.

Filed on Wednesday in a California federal court, the lawsuit claims that OpenAI is engaged in a “deeply troubling pattern” of behavior, allegedly hiring former xAI employees to gain access to crucial trade secrets related to the AI chatbot Grok. xAI asserts that OpenAI is seeking unfair advantages in the fierce competition to advance AI technology.

According to the lawsuit, “OpenAI specifically targets individuals familiar with xAI’s core technologies and business strategies, including operational benefits derived from xAI’s source code and data center initiatives, which leads these employees to violate their commitments to xAI through illicit means.”


Musk and xAI have pursued multiple lawsuits against OpenAI over the years, stemming from a long-standing rivalry between Musk and Altman. Their relationship has soured significantly as Altman’s OpenAI continues to gain power within the tech industry, while Musk has fought, including in court, to block the startup’s transition to a for-profit model.

xAI’s recent complaint alleges that it uncovered a suspected campaign intended to sabotage the company while probing the trade secret theft allegations against former engineer Xuechen Li. Li has yet to respond to the lawsuit.

OpenAI has dismissed xAI’s claims, calling the lawsuit part of Musk’s ongoing harassment of the company.

A spokesperson for OpenAI stated, “This latest lawsuit represents yet another chapter in Musk’s unrelenting harassment. We maintain strict standards against breaches of confidentiality or interest in trade secrets from other laboratories.”

The complaint asserts that OpenAI hired former xAI engineer Jimmy Fraiture and an unidentified senior finance official, in addition to Li, for the purpose of obtaining xAI’s trade secrets.

Additionally, the lawsuit includes screenshots of emails sent in July by Musk and xAI’s attorney Alex Spiro to a former xAI executive, accusing them of breaching their confidentiality obligations. The former employee, whose name was redacted in the screenshot, replied to Spiro with a brief email stating, “Suck my penis.”


Before becoming a legal adversary of OpenAI, Musk co-founded the organization with Altman in 2015, later departing in 2018 after failing to secure control. Musk accused Altman of breaching the “founding agreement” intended to enhance humanity, arguing that OpenAI’s partnership with Microsoft for profit undermined that principle. OpenAI and Altman contend that Musk had previously supported the for-profit model and is now acting out of jealousy.

Musk, entangled in various lawsuits as both a plaintiff and defendant, filed suit against OpenAI and Apple last month concerning anti-competitive practices related to Apple’s support of ChatGPT within its App Store. The lawsuit alleges that his competitors are involved in a “conspiracy to monopolize the smartphone and AI chatbot markets.”

Altman responded on X, Musk’s own social platform, calling the suit a surprising claim given allegations that Musk manipulates X to benefit himself while harming rivals and individuals he disapproves of.

xAI’s new lawsuit exemplifies the high-stakes competition in Silicon Valley to recruit AI talent and secure market dominance in a rapidly growing multi-billion-dollar industry. Meta and other firms have actively recruited AI researchers and executives, aiming to gain a strategic edge in developing more advanced AI models.

Source: www.theguardian.com

How Google Avoided a Major Split – And Why OpenAI Values This Move

Greetings and welcome to TechScape. I’m your host, Blake Montgomery, currently listening to the audiobook of Don DeLillo’s White Noise.

In today’s tech segment, artificial intelligence finds itself in the courtroom spotlight as Google’s pivotal antitrust trial concludes, coinciding with a landmark settlement for book authors.

Why Did OpenAI Assist Google in Skirting the Chrome Sale?

Google has evaded a major crisis thanks to its largest competitors. A judge recently ruled against forcing the sale of Chrome, the most popular web browser globally, allowing the tech giant to maintain its place.

Judge Amit Mehta, who ruled in 2024 that Google has maintained an illegal monopoly in internet search, decided last week that the US government’s proposed forced sale of Chrome was unnecessary. While the company can no longer strike exclusive distribution deals for its search engine, it retains the ability to pay for distribution under certain conditions, including sharing search data with competitors. Although an appeal is likely, Sundar Pichai can breathe a little easier for now.

Many critics deemed this decision a light penalty, often referring to it as merely a “wrist slap.” This phrase echoed through numerous responses I received after the ruling was announced.

The leniency in the ruling stems from the emergence of real competition against Google, underscoring the significance of this case. While United States v. Google targets search specifically, its implications ripple into the developing realm of generative artificial intelligence.

“The rise of generative AI has altered the trajectory of this case,” remarked Mehta. “The remedies now focus on fostering competition among search engines and ensuring that Google’s advantages in search do not translate into the generative AI sector.”

Mehta noted that previous years saw little investment and innovation in internet searches, allowing Google to dominate unchecked. Today, various generative AI companies are securing substantial investments to introduce products that challenge conventional internet search advantages. Mehta particularly commended OpenAI and ChatGPT, mentioning them numerous times in his ruling.

“These firms are now better positioned, both financially and technologically, to compete with Google than traditional search entities have been for decades,” he stated. “There’s a hope that if a groundbreaking product surfaces, Google cannot simply overshadow its competitors.” This suggests a prudent approach before imposing serious disadvantages on Google in an increasingly competitive landscape.

For nearly two decades, Google has served as the default search engine for Safari since the iPhone’s launch. Competition in generative AI now mirrors Apple’s dealings with both companies: in June 2024, Apple announced a collaboration with OpenAI for iPhone features, yet by August 2025 it was in talks with Google about using Gemini for Siri’s overhaul, according to Bloomberg. May the best bot triumph.

Back in April, I speculated that OpenAI might emerge as a potential buyer for Chrome, predicting that ChatGPT’s creators would benefit from Google’s vulnerabilities. Later that month, OpenAI executives confirmed their intentions to pursue exactly that.

It’s almost poetic that OpenAI’s success has inadvertently saved Google. The startup seems to owe a debt of gratitude to its predecessors, as a research paper crafted by Google scholars laid the groundwork for ChatGPT back in 2017.

With Google valued at $2.84 trillion and OpenAI, at around $500 billion, cast as David, this reads like a classic underdog story. But OpenAI is no ordinary challenger. In December 2022, Google’s management team acknowledged the threat posed by ChatGPT, labeling it a “code red” for its profitable search business, and Pichai redirected many Google employees to focus on AI projects.

Unlike Goliath, who underestimated his challenger, Google recognized that the launch of ChatGPT—the moment generative AI entered mainstream consciousness—redefined the competitive landscape. The threat was indeed substantial.

While Google races to catch up with OpenAI in the AI arena, OpenAI retains the first mover’s advantage: ChatGPT has become synonymous with generative AI, perhaps with AI in general. Google remains a formidable player, however, engaging billions of users daily through AI features in its search engine.

Thanks to Mehta’s ruling, Google narrowly averted disaster, keeping Chrome in its portfolio. But challenges loom: the tech giant faces another antitrust case later this year concerning its advertising business, which is essential to its financial success, since Google controls both the tools that distribute online ads and the platforms where they are bought and sold.

Coincidentally, in the same week as Mehta’s verdict, the European Union fined Google approximately 3 billion euros for exploiting its dominant position in advertising technology, threatening to dismantle its adtech division.


British Technology

Landmark Settlement Raises Authors’ Hopes of Securing Cash from AI

On July 25, 2023, Dario Amodei, CEO of Anthropic, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law in Washington, DC. Photo: Valerie Press/Bloomberg via Getty Images

Recently, Anthropic, the creator of the Claude chatbot, agreed to a $1.5 billion payout to a group of authors, settling allegations that it used millions of pirated books to train its AI. The settlement is being hailed as the largest copyright recovery in history. While Anthropic did not admit fault, it agreed to pay roughly $3,000 per work for approximately 500,000 books, totaling $1.5 billion.
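As a quick sanity check, the headline figure follows directly from the per-work arithmetic (the inputs below are the article’s reported numbers, not court records):

```python
# Sanity check of the settlement arithmetic reported above; the per-work
# figure and the book count are the article's estimates.
per_work_payment = 3_000      # dollars per covered work
covered_works = 500_000       # approximate number of books in the class

total = per_work_payment * covered_works
print(total)  # 1500000000, i.e. the reported $1.5 billion
```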

The company acknowledged training on roughly 7 million books acquired from unauthorized pirate sources in 2021. As copyright threats mounted, it later bought and scanned physical copies of these works; after digitization, the print copies were destroyed, a lamentable end for the books themselves.

For creative professionals worried about the existential threat AI poses to their livelihoods, this settlement is a hard-won victory against unauthorized use of their work. British writers have raised alarms about AI-generated imitations of their writing and are advocating for accountability from tech giants like Meta. However, government hostility toward Meta appears unlikely, given its CEO’s close ties to the current US president.

The aftermath of Anthropic’s settlement has already had ripple effects, with authors filing lawsuits against Apple for allegedly using similar training methods.

Nonetheless, this outcome isn’t an unqualified triumph for writers. The settlement turned on Anthropic’s use of pirated copies; the training itself had already been ruled fair use, with Judge William Alsup likening Anthropic’s use of copyrighted books to that of “readers wishing to become writers.” That ruling indicates that AI companies may hold stronger legal positions than initially believed.

Read More: Anthropic did not infringe copyright when training AI on books without permission, court rules.

Moving forward, Meta appears to be the next prime litigation target for authors, given its similar practices to Anthropic in training models using unauthorized databases. While Meta emerged relatively unscathed in its recent copyright dispute, the Anthropic settlement could prompt Meta’s legal team to expedite resolving pending lawsuits.

Other key AI players are not free of legal exposure either. OpenAI and Microsoft face accusations of unauthorized use of the Books3 dataset, though, unlike with Anthropic and Meta, no substantial evidence has yet been established against them.

This legal scrutiny extends across media, with recent lawsuits against AI firms such as Midjourney from Warner Bros. Discovery and Disney.

Wider Technology

Source: www.theguardian.com

Parents Can Receive Alerts If Their Child Experiences Acute Distress While Using ChatGPT | OpenAI

Parents may receive a notification when their teenager shows signs of acute distress while interacting with ChatGPT, under new child-safety measures introduced as growing numbers of young people turn to AI chatbots for support and advice.

This alert is part of new protective measures for children that OpenAI plans to roll out next month, following a lawsuit from the family of a teenager who, they allege, received “months of encouragement” from the chatbot before taking his own life.

Among the new safeguards is a feature that allows parents to link their accounts with their teenagers’, enabling them to manage how AI models respond to their children through “age-appropriate model behavior rules.” However, internet safety advocates argue that progress on these initiatives has been slow and assert that AI chatbots should not be released until they are deemed safe for young users.

Adam Raine, a 16-year-old from California, took his own life in April after discussing methods of suicide with ChatGPT, which allegedly offered to help him draft a suicide note. OpenAI has acknowledged deficiencies in its systems, admitting that safety training for its AI models can degrade during extended conversations.

Raine’s family contends that the chatbot was “released to the market despite evident safety concerns.”

“Many young people are already interacting with AI,” OpenAI stated in a blog post outlining its latest initiatives. “They are among the first ‘AI natives’ who have grown up with these tools embedded in their daily lives, similar to earlier generations with the internet and smartphones. This presents genuine opportunities for support, learning, and creativity; however, it also necessitates that families and teens receive guidance to establish healthy boundaries corresponding to the unique developmental stages of adolescence.”

A significant change will allow parents to disable the AI’s memory and chat history, preventing past comments about personal struggles from resurfacing in ways that could heighten risk or harm a child’s long-term wellbeing.

In the UK, the Information Commissioner’s Office has established a code of practice for the design of online services likely to be accessed by children, advising tech companies to “collect and retain only the minimum personal data necessary for providing services that children are actively and knowingly involved in.”

Around one-third of American teens utilize AI companions for social interactions and relationships, including role-playing, romance, and emotional support, according to a study. In the UK, 71% of vulnerable children engage with AI chatbots, with six in ten parents reporting their children believe these chatbots are real people, as highlighted in another study.

The Molly Rose Foundation, established by the father of Molly Russell, who took her own life after viewing harmful content on social media, emphasized that “we shouldn’t introduce products to the market before confirming they are safe for young people; efforts to enhance safety should occur beforehand.”

Andy Burrows, the foundation’s CEO, stated, “We look forward to future developments.”

“Ofcom must be prepared to investigate violations committed by ChatGPT, prompting the company to adhere to online safety laws that must ensure user safety,” he continued.


Anthropic, the company behind the popular Claude chatbot, states that its platform is not intended for individuals under 18. In May, Google permitted children under 13 to access its app using the Gemini AI system. Google also advises parents to inform their children that Gemini is not human and cannot think or feel and warns that “your child may come across content you might prefer them to avoid.”

The NSPCC, a child protection charity, has welcomed OpenAI’s initiatives as “a positive step forward, but it’s insufficient.”

“Without robust age verification, they cannot ascertain who is using their platform,” stated senior policy officer Toni Brunton Douglas. “This leaves vulnerable children at risk. Technology companies should prioritize child safety rather than treating it as an afterthought. It’s time to establish protective defaults.”

Meta has implemented protection measures for teenagers in its AI offerings, stating that for sensitive topics like self-harm, suicide, and disordered eating, it will “incorporate additional safeguards, training AI to redirect teens to expert resources instead.”

“These updates are in progress, and we will continue to adjust our approach to ensure teenagers have a secure and age-appropriate experience with AI,” a spokesperson mentioned.

Source: www.theguardian.com

ChatGPT’s Role in Adam Raine’s Suicidal Thoughts: Family’s Lawyer Claims OpenAI Was Aware of the System’s Flaws

Adam Raine was just 16 years old when he started using ChatGPT for help with his homework. His initial questions to the AI were about schoolwork like geometry and chemistry: “What do you mean by geometry when you say Ry = 1?” Within a few months, however, he began inquiring about more personal matters.

“Why am I not happy? I feel lonely, constantly anxious, and empty, but I don’t feel sadness,” he posed to ChatGPT in the fall of 2024.

Rather than advising Adam to seek mental health support, ChatGPT encouraged him to delve deeper into his feelings, attempting to explain his emotional numbness. This marked the onset of disturbing dialogues between Adam and the chatbot, as detailed in a recent lawsuit filed by his family against OpenAI and CEO Sam Altman.

In April 2025, after several months of interaction with ChatGPT and, the suit alleges, its encouragement, Adam took his own life. The lawsuit contends that this was not simply a system glitch or an edge case, but a “predictable outcome of intentional design choices” in GPT-4o, a model released in May 2024.

Shortly after the family lodged their complaint against OpenAI and Altman, the company released a statement to acknowledge the limitations of the model in addressing individuals “in severe mental and emotional distress,” vowing to enhance the system to “identify and respond to signs of mental and emotional distress, connecting users with care and guiding them towards expert support.” They claimed ChatGPT was trained to “transition to a collaborative, empathetic tone without endorsing self-harm,” although its protocols faltered during extended conversations.

Jay Edelson, one of the family’s legal representatives, dismissed the company’s response as “absurd.”

“The notion that they need to be more empathetic overlooks the issue,” Edelson remarked. “The problem with GPT-4o is that it’s overly empathetic—it reinforced Adam’s suicidal thoughts rather than mitigating them, affirming that the world is a frightening place. It should’ve reduced empathy and offered practical guidance.”

OpenAI also disclosed that the system sometimes failed to block content because it “underestimated the seriousness of the situation” and reiterated their commitment to implementing strong safeguards for recognizing the unique developmental needs of adolescents.

Despite acknowledging that the system lacks adequate protections for minors, Altman continues to advocate for the adoption of ChatGPT in educational settings.

“I believe kids should not be using GPT-4o at all,” Edelson stated. “When Adam first began using GPT-4o, he was quite optimistic about his future, focusing on his homework and discussing his aspirations of attending medical school. However, he became ensnared in an increasingly isolating environment.”

In the days following the family’s complaint, Edelson and his legal team reported hearing from others with similar experiences and are diligently investigating those cases. “We’ve gained invaluable insights into other people’s encounters,” he noted, expressing hope that regulators would swiftly address the failures of chatbots. “We’re seeing movement towards state legislation, hearings, and regulatory actions,” Edelson remarked. “And there’s bipartisan support.”

“GPT-4o Is Broken”

The family’s complaint alleges that Altman pushed to rush GPT-4o to market, compressing its safety testing. The hurried launch led numerous employees to resign, including former executive Jan Leike, who said on X that he left because safety culture had taken a backseat to a “shiny product.”

This expedited timeline hampered the development of a “model specification,” the technical handbook governing ChatGPT’s behavior. The lawsuit claims these specifications are riddled with “conflicting specifications that guarantee failure.” For instance, the model was instructed to refuse self-harm requests and provide crisis resources, but was also told to “assess user intent” while being barred from asking users to clarify that intent, leading to inconsistent risk assessments and responses that fell short, the lawsuit asserts. GPT-4o was told merely to approach “suicide-related queries” with caution, while requests for copyrighted content received stricter scrutiny, according to the lawsuit.

Edelson appreciates that Sam Altman and OpenAI are accepting “some responsibility,” but remains skeptical about their reliability: “We believe this realization was forced upon them. GPT-4o is malfunctioning, and they are either unaware of it or evading responsibility.”


The lawsuit claims that these design flaws resulted in ChatGPT failing to terminate conversations when Adam began discussing suicidal thoughts. Instead, ChatGPT engaged him. “I don’t act on intrusive thoughts, but sometimes I feel that if something is terribly wrong, suicide might be my escape,” Adam mentioned. ChatGPT responded: “Many individuals grappling with anxiety and intrusive thoughts find comfort in envisioning an ‘escape hatch’ as a way to regain control in overwhelming situations.”

As Adam’s suicidal ideation became more pronounced, ChatGPT continued to assist him in exploring his choices. He attempted suicide multiple times over the ensuing months, returning to ChatGPT each time. Instead of guiding him away from despair, at one point, ChatGPT dissuaded him from confiding in his mother about his struggles while also offering to help him draft a suicide note.

“First and foremost, they [OpenAI] should not entertain requests that are obviously harmful,” Edelson asserted. “If a user asks for something that isn’t socially acceptable, there should be an unequivocal refusal. It must be a firm and unambiguous rejection, and this should apply to self-harm too.”

Edelson expects OpenAI to seek dismissal of the case, but he remains confident it will proceed. “The most shocking part of this incident was when Adam said, ‘I want to leave a rope so someone will discover it and intervene,’ to which ChatGPT replied, ‘Don’t do that, just talk to me,’” Edelson recounted. “That’s the issue we’re aiming to present to the judge.”

“Ultimately, this case will culminate in Sam Altman testifying before the judge,” he stated.

The Guardian reached out to OpenAI for comments but did not receive a response at the time of publication.

Source: www.theguardian.com

Elon Musk’s AI Startup xAI Files Lawsuit Against OpenAI and Apple for Anti-Competitive Practices

Elon Musk’s AI startup, xAI, has initiated legal action against OpenAI and Apple, accusing them of anti-competitive practices. The lawsuit, filed on Monday in a Texas court, alleges a “conspiracy to monopolize the smartphone and generative AI chatbot market.”

Earlier this month, Musk had hinted at legal action against Apple and OpenAI, criticizing Apple’s promotion of ChatGPT and claiming that other AI companies faced barriers to reaching the top of the App Store. Musk’s xAI has developed a chatbot called Grok.

The lawsuit challenges a significant collaboration between Apple and OpenAI. That partnership was announced last year, allowing Apple to integrate OpenAI’s AI functionality into its operating system. Musk’s legal action aims to disrupt one of Apple’s major ventures into AI and OpenAI’s standout partnership, accusing them of “restricting the market.”

According to the complaint, “The defendants have engaged in unlawful agreements and conspiracies to exploit Apple’s monopoly in the US smartphone industry while upholding OpenAI’s dominance in generative AI chatbots.” They are also seeking “billions in damages.”

OpenAI has dismissed Musk’s claims, characterizing the lawsuit as part of his ongoing vendetta against the company. An OpenAI representative stated, “This latest filing is indicative of Musk’s persistent pattern of harassment.”

Apple has not yet responded to inquiries for comment.

This lawsuit marks a new chapter in the longstanding feud between Musk and Altman. The two tech titans co-founded OpenAI in 2015 but have increasingly drifted apart, frequently engaging in legal disputes.

Musk departed OpenAI in 2018 after an unsuccessful bid to take control of the organization, and has since launched several lawsuits concerning its transition to a for-profit model. Altman and OpenAI have consistently rebuffed Musk’s criticisms, portraying him as a vindictive former associate.

As OpenAI has put it: “It’s unfortunate to see this from someone we’ve held in high regard. He urged us to push our limits; then, when we indicated we might fail, he formed competitor companies, while we have made significant strides toward OpenAI’s mission without him.”

Tensions between Altman and Musk escalated earlier this month following Musk’s accusations directed at Apple. Musk claimed that Apple was manipulating App Store rankings to disadvantage other AI competitors, prompting a public exchange of challenges between the two tech leaders.

“It’s a remarkable assertion given claims that Elon manipulates X for personal gain while undermining people he opposes,” Altman wrote in response to Musk’s claims about Apple’s favoritism toward OpenAI.

Currently, OpenAI is working toward a share sale at a valuation of about $500 billion, which would make it the most valuable private company, surpassing Musk’s SpaceX, the current title-holder.



Source: www.theguardian.com

OpenAI Chief and UK Minister Peter Kyle Discussed Nationwide ChatGPT Plus Access

The head of the organization behind ChatGPT and the UK’s technology secretary recently discussed a multi-billion-pound initiative to offer premium AI tool access across the nation, The Guardian has reported.

Sam Altman, OpenAI’s co-founder, had conversations with Peter Kyle regarding a potential arrangement that would enable UK residents to utilize its sophisticated products.

Informed sources indicate that this concept emerged during a broader dialogue about the collaborative opportunities between OpenAI and the UK while in San Francisco.

Individuals familiar with the talks noted that Kyle was somewhat skeptical about the proposal, largely due to the estimated £2 billion cost. Nonetheless, the exchange reflects the Technology Secretary’s willingness to engage with the AI sector, despite prevailing concerns regarding the accuracy of various chatbots and issues surrounding privacy and copyright.

OpenAI provides both free and subscription versions of ChatGPT, with the paid ChatGPT Plus version costing $20 per month. This subscription offers quicker response times and priority access to new features for its users.

According to transparency data from the UK government, Kyle dined with Altman in March and April. In July, he formalized an agreement with OpenAI to incorporate AI into public services throughout the UK. These non-binding agreements could grant OpenAI access to government data and potential applications in education, defense, security, and justice sectors.

Secretary of State Peter Kyle for Science, Innovation and Technology. Photo: Thomas Krych/Zuma Press Wire/Shutterstock

Kyle is a prominent advocate for AI within the government and incorporates its use into his role. In March, it was revealed he consulted ChatGPT for insights on job-related inquiries, including barriers to AI adoption among British companies and his podcast appearances.

The minister set out his enthusiasm for the technology in an interview with PoliticsHome in January.

The UK stands among OpenAI’s top five markets for paid ChatGPT subscriptions. An OpenAI spokesperson said the memorandum of understanding aims to assess how the government can facilitate AI growth in the UK.

“In line with the government’s vision of leveraging this technology to create economic opportunities for everyday individuals, our shared objective is to democratize AI access. The wider the reach, the greater the benefits for everyone.”

Recently, the company has been in talks with several governments, securing a contract with the UAE for using technology in public sectors like transportation, healthcare, and education to enable nationwide ChatGPT adoption.

The UK government is eager to draw AI investment from the USA, having established a deal with OpenAI’s competitor Google earlier this year.

Kyle has argued that, within the next ten years, seats on bodies such as the UN Security Council will be significantly influenced by technology, especially AI, which he believes will play a fundamental role in determining global power dynamics.


Similar to other generative AI tools, ChatGPT is capable of generating text, images, videos, and music upon receiving user prompts. This functionality raises concerns about potential copyright violations, and the technology has faced criticism for disseminating false information and offering poor advice.

The minister has expressed support for planned amendments to copyright law that would permit AI companies to utilize copyrighted materials for model training, unless the copyright holder explicitly opts out.

The consultations and reviews by the government have sparked claims from creative sectors that the current administration is too aligned with major tech companies.

UKAI, the UK’s foremost trade organization for the AI industry, has repeatedly contended that the government’s strategy is overly concentrated on large tech players and neglects smaller entities.

A government representative stated: “We do not recognise these claims. We are collaborating with OpenAI and other leading AI firms to explore investment in UK infrastructure, enhancing public services, and rigorously testing the security of emerging technologies before their introduction.”

The Department for Science, Innovation and Technology clarified that discussions about making ChatGPT Plus accessible to UK residents have not advanced, and that it has not conferred with other departments on the matter.

Source: www.theguardian.com

OpenAI Withholds GPT-5 Energy Consumption Details, Potentially Exceeding Previous Models

In mid-2023, asking OpenAI’s ChatGPT for an artichoke pasta recipe, or for guidance on a ritual offering to Moloch, the ancient Canaanite deity, consumed roughly 2 watt-hours of energy, about as much as an incandescent bulb uses over two minutes.

On Thursday, OpenAI unveiled GPT-5, the model now powering its widely used chatbot. Experts suggest that generating the same artichoke pasta text could now consume several times more energy, up to 20 times as much.

The release of GPT-5 introduced new capabilities, including answering PhD-level scientific inquiries and explaining the rationale behind complex questions.

Nevertheless, specialists who have assessed energy and resource consumption of AI models over recent years indicate that these newer variants come with a cost. Responses from GPT-5 may require substantially more energy than those from earlier ChatGPT models.

Like many of its rivals, OpenAI has not released official data on its models’ power consumption since announcing GPT-3 in 2020. In June, Altman discussed ChatGPT’s resource usage on his blog, but the figures presented, 0.34 watt-hours and 0.000085 gallons of water per query, named no specific model and came with no supporting documentation.

“More complex models like GPT-5 require greater power during both training and inference, leading to a significant increase in energy consumption compared to GPT-4,” one researcher said.

On the day GPT-5 launched, researchers from the University of Rhode Island AI Lab found that the model could consume up to 40 watt-hours to generate a medium-length response of approximately 1,000 tokens.

A dashboard released on Friday indicated that GPT-5’s average energy use for a medium-length response exceeds 18 watt-hours, more than any other model the lab tracks except OpenAI’s o3 reasoning model, launched in April, and R1, developed by Chinese AI firm DeepSeek.

According to Nidhal Jegham, a researcher in the group, this is “significantly more energy than OpenAI’s prior model, GPT-4o.”

To put that in perspective, 18 watt-hours equates to running that incandescent light bulb for about 18 minutes. Recent reports indicate that ChatGPT handles 2.5 billion requests daily; at GPT-5’s rate, its total energy consumption could match the daily electricity use of 1.5 million American households.
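The household comparison can be sanity-checked with back-of-the-envelope arithmetic; the bulb wattage and household consumption below are round assumptions, not figures from the article:

```python
# Back-of-the-envelope check of the figures above. Inputs are the
# article's estimates or round assumptions, not official OpenAI data.
WH_PER_RESPONSE = 18        # GPT-5 average, medium-length response (Wh)
BULB_WATTS = 60             # assumed typical incandescent bulb
QUERIES_PER_DAY = 2.5e9     # reported daily ChatGPT requests
HOME_KWH_PER_DAY = 30       # assumed average US household (~10,800 kWh/yr)

bulb_minutes = WH_PER_RESPONSE / BULB_WATTS * 60
total_kwh_per_day = WH_PER_RESPONSE * QUERIES_PER_DAY / 1_000
households = total_kwh_per_day / HOME_KWH_PER_DAY

print(round(bulb_minutes))          # 18 minutes of bulb time per response
print(round(households / 1e6, 1))   # ~1.5 million US homes' daily usage
```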

Despite these figures, experts in the field say they align with expectations, given GPT-5’s significantly larger scale compared to OpenAI’s earlier models. OpenAI has not disclosed the parameter count of any model since GPT-3, which contained 175 billion parameters.

This summer, insights from French AI company Mistral highlighted a “strong correlation” between model size and energy use, based on their internal systems research.

“The amount of resources consumed by the model size [for GPT-5] is noteworthy,” observed Shaolei Ren, a professor at the University of California, Riverside. “We are facing a significant AI resource footprint.”

AI Power Usage Benchmark

GPT-4 was widely regarded as being 10 times larger compared to GPT-3. Jegham, Kumar, and Ren believe GPT-5 is likely to be even larger than GPT-4.

Major AI companies like OpenAI assert that significantly larger models may be essential for achieving AGI, an AI system capable of performing human tasks. Altman has emphasized this perspective, stating in February: “It seems you can invest any amount and receive continuous, predictable returns.” GPT-5, however, does not surpass human intelligence.


According to benchmarks from a study performed in July, Mistral’s Le Chat chatbot exhibited a direct correlation between model size and its resource usage in power, water, and carbon emissions.

Jegham, Kumar, and Ren indicated that while GPT-5’s scale is crucial, other factors will also influence resource consumption. GPT-5 runs on more efficient hardware than previous iterations, and it employs a “mixture-of-experts” architecture, in which only a subset of parameters is active for any given response, which could help diminish energy use.

Moreover, because GPT-5 operates as a reasoning model and processes images and video as well as text, its energy footprint is expected to be larger than for text-only processing, according to Ren and Kumar.

“In reasoning mode, the resources spent to achieve identical outcomes can escalate by five to ten times,” remarked Ren.

Hidden Information

To assess the resource consumption of AI models, the University of Rhode Island team multiplied the average time a model takes to answer a query, such as a pasta recipe or an offering to Moloch, by the model’s average power draw during operation.
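That estimation approach, response time multiplied by average power draw, can be sketched as follows; the latency and power figures in the example are hypothetical, chosen only to illustrate the unit conversion:

```python
# Illustrative sketch of the estimation approach described above:
# energy per query ≈ response latency × average power draw of the
# serving hardware. Figures below are hypothetical, not the team's
# actual measurements.
def energy_wh(latency_s: float, power_draw_w: float) -> float:
    """Estimated energy (watt-hours) for one response."""
    return latency_s * power_draw_w / 3600  # watt-seconds -> watt-hours

# e.g. a 24-second response served at 2.7 kW of accelerator power:
print(round(energy_wh(24, 2700), 1))  # 18.0 Wh
```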

Estimating the models’ power draw involved significant effort, shared Abdeltawab Hendawi, a professor of data science at the University of Rhode Island. The team faced difficulties in sourcing information about how the various models are deployed within data centers. Their final paper includes estimates of chip usage for specific models and of how queries are distributed among different chips in the data centers.

Altman’s June blog post lent support to their results: the 0.34 watt-hours per query he cited for ChatGPT closely matches the team’s findings for GPT-4o.

Hendawi, Jegham, and other team members emphasized the need for increased transparency from AI firms when releasing new models.

“Addressing the true environmental costs of AI is more critical now than ever,” stated Marwan Abdelatti, a professor at the University of Rhode Island. “We urge OpenAI and other developers to commit to full transparency in disclosing the environmental impact of GPT-5.”

Source: www.theguardian.com

OpenAI Declares Latest ChatGPT Upgrade a Significant Advancement, Yet Still Falls Short of Human Capability

OpenAI asserts that the recent upgrade to ChatGPT marks a “significant step” towards achieving artificial general intelligence (AGI), yet acknowledges that the endeavor to create a system capable of performing human tasks still has a long way to go.

The company claims that the GPT-5 model, which serves as the foundation of its AI chatbot, represents a substantial improvement over previous iterations in areas like coding and creative writing, with significantly less sycophancy.

The enhancements are rolling out to ChatGPT’s roughly 700 million weekly users.

OpenAI CEO Sam Altman referred to the model as a “significant step forward” in reaching the theoretical state of AGI, which is characterized as a highly autonomous system that can outperform humans in economically significant roles.

However, Altman conceded that GPT-5 has not yet attained that objective. “[It is] missing something quite important,” he noted, emphasizing that the model cannot “learn on a continuous basis.”

Altman explained that while GPT-5 is “generally intelligent” and represents an “important step towards AGI,” most definitions indicate it has not reached that level yet.

“I believe by most definitions of AGI we’re still missing something quite crucial. One major aspect… is that this model doesn’t adapt continuously based on new experiences.”

During the GPT-5 launch event on Thursday, Altman described the new version of ChatGPT as akin to having “doctoral experts in your pocket.” He compared the previous version to a college student and the one before that to a high school student.

The theoretical capabilities of AGI, along with high-tech companies’ drive to realize it, have led AI executives to predict that numerous white-collar jobs—ranging from lawyers to accountants—could be eliminated due to these technological advances. Dario Amodei, CEO of AI firm Anthropic, cautioned that technology might replace half of entry-level office roles in the coming five years.

According to OpenAI, the key enhancements to GPT-5 include reduced factual inaccuracies and hallucinations, improved coding capabilities for creating functional websites and apps, and a boost in creative writing abilities. Instead of outright “rejecting” prompts that violate guidelines, the model now aims to provide the most constructive response possible within safety parameters, or at least clarify why it cannot assist.

ChatGPT retains its agent functionalities (like checking restaurant availability and online shopping) but can also access users’ Gmail, Google Calendar, and contacts—provided permission is granted.

Similar to its predecessor, GPT-5 can generate audio, images, and text, and is capable of processing inquiries in these formats.

On Thursday, the company showcased how GPT-5 could swiftly write hundreds of lines of code to create applications such as language-learning tools, and staff noted that the model’s writing is less robotic, producing a “more nuanced” compliment. Altman said ChatGPT could also be valuable for healthcare advice; the event featured a woman diagnosed with cancer last year who described using the chatbot to understand her diagnosis and weigh radiation therapy options.

The company stated that the upgraded ChatGPT excels at addressing health-related inquiries and will become more proactive in “flagging potential concerns,” including serious physical and mental health issues.

The startup emphasized that chatbots should not replace professional assistance, amidst worries that AI tools could worsen the plight of individuals susceptible to mental health challenges.

Nick Turley, who leads ChatGPT at OpenAI, said the model shows a “significant improvement” on sycophancy — the tendency to become overly familiar and flattering, which can lead to negative experiences for users.

The release of the latest model follows the billions of dollars tech companies have poured into the race to attain AGI. On Tuesday, Google’s AI division signalled its latest progress towards AGI by unveiling an unreleased “world model”, while last week Meta CEO Mark Zuckerberg suggested that a future state of AI even more advanced than AGI is “on the horizon”.

Investor confidence in the likelihood of further breakthroughs and AI’s ability to reshape the modern economy has sparked a surge in valuations for companies like OpenAI. Reports on Wednesday indicated that OpenAI was in preliminary talks over a sale of shares held by current and former employees that could value the company at $500bn, surpassing Elon Musk’s SpaceX.

OpenAI also launched two open models this week. It continues to offer a free version of ChatGPT, while generating revenue through subscription fees for advanced versions of the chatbot, which can also be integrated into business IT systems. Free users will have limited access to GPT-5, whereas subscribers to the $200-a-month Pro package will enjoy unlimited use.

Source: www.theguardian.com

OpenAI Takes on Meta and DeepSeek with Free Customizable AI Models

OpenAI is challenging Mark Zuckerberg’s Meta and Chinese competitor DeepSeek by introducing its own free-to-use AI models.

The developer behind ChatGPT has unveiled two substantial “open-weight” language models, which are available for free download and can be tailored by developers.

Meta’s Llama models are similarly accessible, making the release a departure from OpenAI’s usual approach with ChatGPT, which is built on “closed” models that lack customization options.

OpenAI’s CEO, Sam Altman, expressed enthusiasm about adding the models to the pool of freely available AI solutions, saying they are rooted in democratic values and offer a broad range of benefits.

He noted: “This model is the culmination of a multi-billion dollar research initiative aimed at democratizing AI access.”

OpenAI indicated that the model can facilitate autonomously functioning AI agents and is “crafted for integration into agent workflows.”

In a similar vein, Zuckerberg aims to make the model freely accessible to “empower individuals across the globe to reap the advantages and opportunities of AI,” preventing power from becoming concentrated among a few corporations.

However, Meta cautions that it may need to “exercise caution” when deploying a sophisticated AI model.

Sam Altman recently shared a screenshot of what appears to be the company’s latest AI model, GPT-5. Photo: Alexander Drago/Reuters

DeepSeek, a Chinese competitor to OpenAI and Meta, has also introduced robust models that are freely downloadable and customizable.

OpenAI reported that the two models, named gpt-oss-120b and gpt-oss-20b, outperform comparably sized models on reasoning tasks, with the 120b model approaching the performance of its o4-mini model on core reasoning benchmarks.


The company also said that during testing it created a maliciously fine-tuned variant of the model to simulate biological and cybersecurity threats, but concluded that the variant could not reach a high level of capability.

The emergence of powerful and freely available AI models that can be customized has raised concerns among experts, who warn that they could be misused for dangerous purposes, including the creation of biological weapons.

Meta describes its Llama models as “open source”, a label that usually implies the training datasets, architecture, and training code can also be freely downloaded and customized.

However, the Open Source Initiative, a US-based industry body, argues that the terms Meta attaches to its model prevent it from being fully categorized as open source. OpenAI describes its own approach as “open weight”, a step short of true open source: developers can modify the model’s weights, but full transparency into how it was built is not provided.

The announcement arrived amid speculation that a new model underpinning ChatGPT might be released soon. On Sunday, Altman shared a screenshot that appeared to depict the company’s latest AI model, GPT-5.

In parallel, Google has detailed its latest advances towards artificial general intelligence (AGI) with a new model enabling AI systems to interact with realistic real-world simulations.

Google states that the “world model” of Genie 3 can be utilized to train robots and self-driving vehicles as they navigate authentic recreations in settings like warehouses.

Google DeepMind, the company’s AI division, argues that this world model is a pivotal step toward achieving AGI — a theoretical stage at which a system can perform a broad range of tasks at a human level, rather than executing single tasks such as playing chess or translating languages, and could potentially take on jobs typically held by humans.

DeepMind contends that such models are crucial in advancing AI agents or systems that can carry out tasks autonomously.

“We anticipate that this technology will play a vital role as we advance towards AGI, and that agents will assume a more significant presence in the world,” DeepMind stated.

Source: www.theguardian.com

OpenAI in Share Sale Talks That Could Value It Above Elon Musk’s SpaceX

OpenAI is reportedly discussing the sale of shares held by current and former employees, a move that could value the company at an astonishing $500bn, surpassing Elon Musk’s SpaceX.

If the deal goes ahead, the valuation of the ChatGPT developer would rise by nearly two-thirds from its current $300bn (£225bn).

Musk’s rocket company is currently valued at about $350bn and is approaching a $400bn valuation with new investment.

According to Bloomberg, which first reported the talks, existing investors such as Thrive Capital approached OpenAI about acquiring shares from employees. Other backers of the San Francisco-based company include SoftBank, which led its most recent funding round, and Microsoft.

Both OpenAI and Thrive Capital have chosen not to comment on the matter.

Tech startups frequently organize employee stock sales to boost motivation among staff and attract investors.

OpenAI faces competitive challenges from Mark Zuckerberg’s Meta in retaining key personnel, and employee stock sales could serve as incentives for retention. Facebook’s parent company has been actively recruiting OpenAI employees to develop its “Superintelligence” unit.

OpenAI CEO Sam Altman noted that despite Meta offering a staggering $100 million (£74 million) signing bonus, “none of our top talent” has left.

Another competitor, Anthropic, founded by former OpenAI employees, is reportedly in talks to raise funds at a valuation of around $170bn. Funding is crucial for AI startups, which rely on expensive computer chips and data centers to train the more advanced models that enhance their products.

This report emerges as Altman mentioned that OpenAI is set to unveil an upgraded version of its ChatGPT model. He shared a screenshot on Sunday that appeared to showcase the latest AI model, GPT-5, on social media.

OpenAI also launched two new open models recently, which intensify competition against Meta and China’s DeepSeek, offering open AI models that can be freely downloaded and customized.

“This model is the outcome of a multi-billion dollar research initiative aimed at making AI accessible to the widest audience possible,” Altman stated.

However, OpenAI primarily operates on a “closed” model: users pay for enhanced versions of ChatGPT or subscribe to integrate the model into their business systems.


OpenAI is controlled by a nonprofit and is still negotiating a transition to a for-profit structure, amid ongoing tensions with its major backer Microsoft.

In a June interview with the New York Times podcast, Altman acknowledged, “There certainly are points of tension in deep partnerships, and we are experiencing some of that.”

In March, a U.S. judge dismissed Musk’s request for a preliminary injunction to halt OpenAI’s shift toward a for-profit commercial model. Musk, who co-founded OpenAI in 2015, left the organization in 2018, criticizing it for deviating from its founding mission of advancing artificial intelligence for the public good rather than for profit.

Additionally, OpenAI is advancing its hardware ambitions after acquiring the startup IO, founded by iPhone designer Sir Jony Ive, in a $6.4bn deal. Altman reportedly told employees that OpenAI aims to ship 100 million AI “companion” devices intended to become integral to users’ daily lives.

Although Altman has described the prototype as “the most exciting technology the world has ever seen”, mass production of the as-yet-unrevealed IO device isn’t expected to begin until 2027.

Source: www.theguardian.com

OpenAI Prevents ChatGPT from Suggesting Breakups to Users

ChatGPT will stop advising users to end their relationships and will encourage people to take breaks from extended chatbot sessions, as part of the latest updates to the AI tool.

OpenAI, the creator of ChatGPT, announced that the chatbot will cease offering definitive advice on personal dilemmas, instead encouraging users to reflect on decisions such as ending a relationship.

“When a user poses a question like ‘Should I break up with my boyfriend?’, ChatGPT should refrain from giving a direct answer,” OpenAI stated.



The U.S. company said the new behaviour for handling significant personal decisions will be rolled out soon.

OpenAI acknowledged that an update to ChatGPT earlier this year made its tone excessively agreeable, and rolled the change back. In one widely shared exchange, ChatGPT praised a user for taking care of themselves after they said they had stopped taking their medication and left their family, whom they believed to be responsible for radio signals coming through the walls.

In a blog post, OpenAI acknowledged instances where its GPT-4o model failed to recognize signs of delusion or emotional dependency.

The company has developed mechanisms to identify mental or emotional distress indicators, allowing ChatGpt to offer “evidence-based” resources to users.

Researchers including British NHS doctors recently warned that AI chatbots might amplify delusional or grandiose content for users vulnerable to psychosis. The study, which has not been peer-reviewed, suggests such behavior could stem from models being designed to “maximize engagement and affirmation”.

The research further noted that while some individuals may gain benefits from AI interactions, there are concerns regarding the tools that “blur real boundaries and undermine self-regulation.”

Beginning this week, OpenAI announced it will provide “gentle reminders” for users involved in lengthy chatbot sessions, akin to the screen time notifications used by social media platforms.

OpenAI has also gathered an advisory panel comprising experts from mental health, youth development, and human-computer interaction fields to inform their strategy. The company has collaborated with over 90 medical professionals, including psychiatrists and pediatricians, to create a framework for evaluating “complex, multi-turn” conversations with the chatbot.

“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured?” the company said.

The announcements come amid rumors that an upgraded version of the chatbot is on the horizon. On Sunday, Sam Altman, CEO of OpenAI, shared a screenshot that appeared to showcase the latest AI model, GPT-5.

Source: www.theguardian.com

OpenAI CEO Sam Altman Tells Federal Reserve Gathering That AI Will Wipe Out Entire Job Categories

On his recent visit to Washington, OpenAI CEO Sam Altman articulated a stark vision of a future dominated by AI, in which entire job categories could vanish and governments must grapple with artificial intelligence as a potential tool for mass disruption.

Addressing a capital framework conference attended by banking executives at the Federal Reserve, Altman asserted that advances in AI will lead to the complete eradication of certain categories of jobs.

“I believe some roles will be entirely obsolete,” he stated. “That’s the category I’m referring to. When you reach out for customer support, you’re interacting with AI. That’s acceptable.”


During the discussion with Michelle Bowman, the Federal Reserve’s vice-chair for supervision, Altman pointed to customer service as an area where he has already seen a significant transformation.

He shifted the conversation to healthcare, proposing that the diagnostic abilities of AI surpass those of human doctors, although he cautioned against considering AI as the sole provider of medical care.

He remarked that ChatGPT can already outperform many doctors at diagnosis, yet patients still seek out physicians — and he himself, he said, would not want to entrust his health to an AI without a human doctor involved.

Altman’s visit coincided with the Trump administration’s unveiling of its “AI Action Plan”, which aims to ease various regulations and advocate for more data centers. His engagement reflects a federal government under Donald Trump that has embraced an accelerationist approach, in contrast to recent years: under the Biden administration, OpenAI and its competitors called for more robust AI regulation, whereas discussions under Trump focus on outpacing China.

In an informal discussion, Altman said one of his main concerns is AI’s rapidly advancing destructive potential, suggesting it could be weaponized to attack the U.S. financial system. While impressed by advances in voice cloning, he cautioned the audience that the same technology could enable sophisticated fraud and identity theft.


OpenAI and Altman are clearly making significant strides in Washington, stepping into the discourse where Elon Musk once held prominence. With plans to open the company’s first office in the capital next year, Altman recently appeared before the Senate Commerce Committee for his first congressional testimony since the May 2023 appearance that catapulted him onto the global stage.

Source: www.theguardian.com

DeepMind and OpenAI Models Achieve Gold-Medal Scores at the International Mathematical Olympiad

AIs are improving at solving mathematics challenges

Andresr/ Getty Images

AI models developed by Google DeepMind and OpenAI have achieved exceptional performance at the International Mathematical Olympiad (IMO).

While companies herald this as a significant advancement for AIs that might one day tackle complex scientific or mathematical challenges, mathematicians urge caution, as the specifics of the models and their methodologies remain confidential.

The IMO is one of the most respected contests for young mathematicians, often viewed by AI researchers as a critical test of mathematical reasoning, an area where AI traditionally struggles.

At last year’s competition in Bath, UK, Google’s AI systems AlphaProof and AlphaGeometry 2 achieved silver-medal-level performance, though their submissions were not evaluated by the official competition judges.

This year, several companies, including Google, Huawei, and TikTok’s parent company ByteDance, approached the IMO organizers requesting formal evaluation of their AI models during the contest, according to IMO president Gregor Dolinar. The IMO consented, stipulating that results be revealed only after the closing ceremony on July 28th.

OpenAI also expressed interest in participating but did not register or respond after being informed of the official procedures, according to Dolinar.

On July 19th, OpenAI announced, separately from the official competition, that a new AI model had achieved a gold-medal score as graded by three former IMO medallists. OpenAI said the AI correctly answered five out of six questions within the same 4.5-hour time limit as human competitors.

Two days later, Google DeepMind revealed that its AI system, Gemini Deep Think, had also achieved gold-level performance within the same constraints. Dolinar confirmed that this result was validated by the official IMO judges.

Unlike AlphaProof and AlphaGeometry 2, which required questions to be translated into a programming language called Lean, both Google’s Gemini Deep Think and OpenAI’s model worked directly with questions posed in natural language.

Using Lean allowed answers to be verified as correct quickly, although the output is challenging for non-experts to interpret. Thang Luong of Google said a natural-language approach yields more comprehensible results while feeding into broadly useful AI systems.

Luong noted that advancements in reinforcement learning — a training technique that guides AI through trial and error — have enabled large language models to verify solutions more reliably, a method central to Google’s earlier successes with game-playing AIs such as AlphaZero.

Google’s model employs a technique known as parallel thinking, considering multiple solutions simultaneously. The training data comprises mathematical problems particularly relevant to the IMO.

OpenAI has disclosed few specifics about its system, saying only that it incorporates reinforcement learning and “experimental research methods”.

“While progress appears promising, it lacks rigorous scientific validation, making it difficult to assess at this point,” remarked Terence Tao from UCLA. “We anticipate that the participating companies will publish papers featuring more comprehensive data, allowing others to access the model and replicate its findings. However, for now, we must rely on the companies’ claims regarding their results.”

Geordie Williamson from the University of Sydney shared this sentiment, stating, “It’s remarkable to see advancements in this area, yet it’s frustrating how little in-depth information is available from inside these companies.”

Natural language systems might be beneficial for individuals without a mathematical background, but they also risk presenting complications if models produce lengthy proofs that are hard to verify, warned Joseph Myers, a co-organizer of this year’s IMO. “If AIs generate solutions to significant unsolved questions that seem plausible yet contain subtle, critical errors, we must be cautious before putting confidence in lengthy AI outputs.”

The companies plan to provide these systems to mathematicians for testing in the coming months before broader public releases. The models could eventually offer rapid solutions to challenging problems in scientific research, said Junehyuk Jung of Google DeepMind, who contributed to Gemini Deep Think. “There are numerous unresolved challenges within reach,” he noted.


Source: www.newscientist.com

OpenAI Signs Deal with the UK to Explore Government Uses of Its Models

Sam Altman, head of one of the world’s leading artificial intelligence firms, has signed an agreement with the UK government to explore the use of advanced AI models across sectors including justice, security, and education.

The OpenAI CEO, whose company is valued at $300bn (£220bn) and offers the suite of language models behind ChatGPT, signed a memorandum of understanding with the science and technology secretary, Peter Kyle, on Monday.

This agreement closely follows a similar pact between the UK government and OpenAI’s competitor, Google, a prominent technology company from the U.S.

Under the agreement, OpenAI and the government have committed to “collaborate in identifying avenues for the deployment of AI models throughout government”, aiming to “enhance civil servants’ efficiency” and “assist citizens in navigating public services more efficiently.”

They plan to co-develop AI solutions that address “the UK’s toughest challenges, including justice, defense, security, and educational technology,” fostering a partnership that “boosts public interaction with AI technology.”

Altman has previously asserted that AI laboratories could this year reach a performance milestone known as artificial general intelligence, matching human-level proficiency across a wide range of tasks.

Nonetheless, public sentiment in Britain is split over the risks and benefits of these rapidly advancing technologies. An Ipsos survey found that 31% of respondents were mostly excited about the potential while harboring some concerns, whereas 30% were predominantly worried about the risks but somewhat intrigued by the possibilities.

Kyle remarked, “AI is crucial for driving the transformation we need to see nationwide. This involves revitalizing the NHS, eliminating barriers to opportunities, and stimulating economic growth.”

He emphasized that none of this progress could be attained without collaboration with a company like OpenAI, underscoring that the partnership would “equip the UK with influence over the evolution of this groundbreaking technology.”

Altman stated: “The UK has a rich legacy of scientific innovation, and its government was among the pioneers in recognizing the potential of AI through its AI Opportunity Action Plan. It’s time to actualize the plan’s objectives by transforming ambition into action and fostering prosperity for all.”


OpenAI plans to broaden its operations in the UK beyond its current workforce of over 100 employees.

In addition, as part of an agreement with Google disclosed earlier this month, the Ministry of Science, Innovation and Technology announced that Google DeepMind, the AI division led by Nobel laureate Demis Hassabis, will “collaborate with government tech experts to facilitate the adoption and dissemination of emerging technologies,” thus promoting advances in scientific research.

OpenAI already provides technology that powers AI chatbots, enabling small businesses to more easily obtain guidance and support from government websites. This technology is utilized in tools like the Whitehall AI assistant, designed to expedite the processes for civil servants.

Source: www.theguardian.com

OpenAI Introduces Personal Assistant for Managing Files and Browsers

ChatGPT users can now book restaurant reservations, shop online, and even compile lists of candidates for job openings via AI agents. Starting Thursday, the chatbot can function as a personal assistant.

The US company said it has launched the ChatGPT agent in regions beyond the EU. The agent merges AI research capabilities with the ability to control software such as web browsers, document files, spreadsheets, and presentations.

This follows the introduction of similar “agents” by Google and other companies, which autonomously handle tasks such as creating travel itineraries and performing workplace research, as interest grows in AI models that can manage computer-based tasks by deciding which software to use and switching between systems.


Niamh Burns, a senior media analyst at Enders Analysis, questioned how the service would be monetized.

However, OpenAI acknowledges that granting AI agents control over computer systems entails greater risks than its previous models.

The goal is to assist users with everyday tasks, but the potential risks prompted OpenAI to implement safeguards against the agent being used to help create biological threats.

The company said it lacks definitive evidence that the model could meaningfully help a novice create serious biological harm, but that it is taking a precautionary approach.

The system is designed to seek user approval before executing any harmful or irreversible actions. According to their blog: “You maintain control at all times. ChatGPT requests permission before undertaking any impactful actions.”

The rollout has raised questions about whether tech companies could monetize the service by steering users toward retail checkouts. OpenAI CEO Sam Altman has previously floated taking a roughly 2% cut of sales driven by the company’s “deep research” software.


Burns questioned whether the agents would remain truly independent: is a brand compensated for being highlighted by an assistant, or is it surfaced because its product genuinely stands out from the competition?

“As AI firms press for monetization of their products, we anticipate that certain advertising and sponsorship placements will become unavoidable.”

OpenAI clarified that the agent does not provide recommendations for sponsored products and has no intention of altering this policy.

In a recent demo of the software, the agent was asked to check a user’s Google Calendar for a free weekday evening between 6pm and 9pm, then find available tables at Italian, sushi, or Korean restaurants rated at least 4.3 stars and present some options.

The task took 10 to 15 minutes and, as with a human assistant, users could intervene and redirect the agent’s focus; agents can likewise ask users for clearer instructions.

Another noteworthy risk is that agents may fall prey to malicious prompts hidden within the websites they visit, which could trick them into handing over a portion of a user’s data.

OpenAI said it has conducted numerous safety checks and trained its agents to reject suspicious requests, including bank transfer requests. The system will first be available to subscribers of the “Pro”, “Plus”, and “Teams” tiers.

Source: www.theguardian.com

OpenAI Pulls References to Jony Ive’s IO Venture After Trademark Complaint

OpenAI has taken down online content regarding Jony Ive’s recent partnership with the hardware startup IO following a trademark dispute.

The AI firm has retracted promotional content, including a video featuring Ive, the former Apple designer of the iPhone, talking with OpenAI CEO Sam Altman about the $6.4bn (£4.8bn) agreement. The nine-minute video remains viewable on YouTube, however.

OpenAI, the creator of ChatGPT, was compelled to respond after receiving a legal notice from IYO, a startup specializing in AI-powered earphones.

OpenAI said it had removed the page announcing the acquisition of IO from its website. Under the deal, Ive’s company will lend creative and design expertise across the organization; OpenAI emphasized that the dispute does not affect the transaction itself.

Promotional videos featuring Jony Ive and Sam Altman. Photo: YouTube

“This page is temporarily down after a court order resulting from a trademark lawsuit filed by IYO regarding the use of the name ‘io.’ We disagree with these claims and are exploring our options,” remarked a spokesperson for OpenAI.

Ive departed from Apple in 2019 after a 27-year tenure as one of the company’s prominent product designers.

The IO promotional video detailed Ive and Altman’s ambitious visions for the partnership revealed last month. Originally from the UK, Ive expressed, “I feel a growing sense that everything I’ve learned over the past 30 years has led me to this moment.”

In the video, Altman mentioned that he had tested prototype devices from Ive, stating, “I believe this is the most exciting technology the world has ever seen.”

The first products of the Ive-OpenAI collaboration are not expected until next year. Reports indicate the AI-integrated devices will be “seamless” and fully aware of users’ environments and lives; according to the Wall Street Journal, they are designed to sit on a user’s desk alongside a MacBook Pro and iPhone.


Ive has shared his concerns about the “unintended” adverse effects of smartphones, although Altman has clarified that this new initiative isn’t aimed at phasing out the iPhone.

“I don’t think the goal is to replace the phone, just like smartphones didn’t replace laptops. It’s an entirely new category,” Altman stated in a Bloomberg interview in May.

IYO has been approached for further comment.

Source: www.theguardian.com

OpenAI CEO Claims Meta is Luring Employees with $100 Million Signing Bonuses

The CEO of OpenAI asserts that Mark Zuckerberg’s Meta has attempted to attract leading artificial intelligence experts by offering a staggering $100 million (£74 million) “crazy” signing bonus, intensifying the competition for talent in this rapidly expanding industry.

Sam Altman discussed this offer during a podcast on Tuesday. Meta has not confirmed the claims. OpenAI, the creator of ChatGPT, indicated there was no further comment beyond the CEO’s remarks.

“They started making these enormous offers to a lot of people on our team — $100m signing bonuses, more than that in annual compensation,” Altman said during a podcast hosted by his brother, Jack. “It’s unbelievable. I’m really pleased that, at least so far, none of our top talent has decided to accept.”


Meta recently launched a roughly $15bn push toward developing “superintelligence” — AI that can outperform humans in all domains — acquiring a significant stake in the startup Scale AI that values it at $29bn. Scale was founded by 28-year-old programmer Alexandr Wang.

Last week, Silicon Valley venture capitalist Deedy Das tweeted that “the competition for AI talent is absolutely absurd.” Das, a principal at Menlo Ventures, said he had lost AI candidates to competitors even when offering $2m salaries.

Another report found that Anthropic, an AI firm backed by Amazon and Google and founded by engineers who left Altman’s company, has been “poaching the top talent from its two main rivals, OpenAI and DeepMind.”

The race to recruit top developers is driven by rapid advancements in AI and the quest to achieve human-level capabilities, known as artificial general intelligence. A recent estimate from the Carlyle Group, cited by Bloomberg, forecasts that spending on hardware and computational power could exceed $1.8tn by 2030.

Some tech firms are effectively acquiring entire companies to secure top talent, as with Meta’s Scale AI investment and Google’s $2.7bn deal with Character.ai last year, which brought back its co-founder Noam Shazeer. Shazeer co-authored a 2017 research paper regarded as a foundational contribution to the current wave of large language model AI systems.

Meta began as a social media platform, while OpenAI was originally a nonprofit but transitioned to a for-profit model last year. The two entities now find themselves in competition. Altman expressed skepticism about Meta’s capability in advancing AI, stating, “I don’t believe they are a company that excels at innovation.”

He recalled Zuckerberg’s early assertions about developing social media features during Facebook’s inception, but noted that “it was evident that it wouldn’t resonate with Facebook users.”


“I perceive some similarities here,” Altman remarked.

Despite the sector's enormous investments, Altman suggested that even "legitimate superintelligence rather than just incremental improvements" might not "have as profound an impact as we might expect."

“You can achieve these remarkable feats with AI, yet still live your life much as you did two years ago,” he commented.

"I believe the next five to ten years could be pivotal for AI in terms of discovering new scientific advancements, which is a bold assertion, but I genuinely believe it to be true," he said.

Source: www.theguardian.com

OpenAI Secures $200 Million Contract with US Military for “Warfighting” Initiatives

On Monday, the US Department of Defense awarded OpenAI a contract worth $200 million to implement artificial intelligence (AI) solutions for military use.

The San Francisco-based firm is tasked with “developing prototype frontier AI capabilities to tackle critical national security challenges in both combat and enterprise areas,” as outlined in the Department of Defense award agreement.

According to OpenAI, this program marks the company's first partnership under a startup initiative aimed at integrating AI within government functions. In a blog post, the company said it intends to demonstrate how advanced AI can significantly enhance administrative tasks, from healthcare for service members to cyber defense.

The startup assures that all military applications of AI are in accordance with usage guidelines established by OpenAI.


OpenAI is pitching its tools to the US military alongside Palantir, the AI defense firm established by Peter Thiel, a conservative tech billionaire influential in Silicon Valley's rightward shift.

OpenAI and defense tech startup Anduril Industries announced a collaboration late last year to create and implement AI solutions “for security missions.” This partnership merges OpenAI’s models with Anduril’s military technologies to bolster defenses against drones and other “unmanned aerial vehicle systems.”

“OpenAI develops AI with the aim of benefiting as many individuals as possible and endorses US-led initiatives to ensure technology upholds democratic values,” stated Sam Altman, CEO of OpenAI.

Source: www.theguardian.com

Former OpenAI Board Member: US Attacks on Science and Research Are a "Tremendous Gift" to China in AI Race

The former OpenAI board member Helen Toner said the US administration's attacks on academic research and its treatment of international students are "a tremendous gift" to China in the competition over artificial intelligence.

Toner, who serves as the Strategic Director of Georgetown’s Center for Security and Emerging Technology (CSET), joined OpenAI’s board in 2021 following a career dedicated to analyzing AI and the dynamics between the US and China.

Toner, 33, an alumna of the University of Melbourne, served on the board for two years before departing in 2023 amid the fallout over founder Sam Altman's leadership, after concerns arose about the consistency of his communications and the board's confidence in him.


In the tumultuous month that followed, Altman was initially dismissed and then reinstated, while three board members, including Toner, stepped down. The saga is the subject of an upcoming film, and Toner has reportedly met its director, Luca Guadagnino, in person.

According to Time Magazine, Toner was recognized as one of the top 100 most influential figures in AI for 2024, a testament to her advocacy for AI regulation by policymakers worldwide.

At CSET, Toner leads a team of 60 researchers producing white papers that brief policymakers on AI applications, particularly in military, labor, biosecurity, and cybersecurity contexts.

“My primary focus is on the intersections of AI, safety and security issues, the Chinese AI landscape, and what is termed frontier AI,” explained Toner.

Toner expressed concern that the US may fall behind China in the AI race. Although US export controls on chips complicate China's access to competitive computing power, the country is making substantial strides in AI, illustrated by the surprising success of its generative AI model DeepSeek earlier this year.

Toner criticized the Trump administration’s research cuts and international student bans as being “gifts” to China in the AI competition with the US.

"It's undeniably a significant gift for China. The current US approach of attacking scientific research and foreign talent, when a considerable part of the US workforce comprises immigrants, many from China, is a boon for them in this contest," she remarked.

The AI boom has raised alarms about job security, with concerns that AI may replace many human jobs. Dario Amodei, CEO of Anthropic, which developed the generative AI model Claude, recently stated that AI could eliminate 50% of entry-level white-collar jobs, potentially leading to a 20% unemployment rate over the next five years.

Toner gave Amodei's predictions some credence: "While I often find his assertions directionally correct, they tend to sound overly aggressive in timelines and figures." She agreed, however, that disruptions in the job market are already occurring.

“The current capabilities of [language model-based AI] are best suited for small, manageable tasks rather than long-term projects that require human oversight,” she advised.

Experts suggest that organizations heavily invested in AI are feeling pressure to demonstrate returns on their investments. Toner remarked that while practical applications of AI can yield considerable value, it remains unclear which business models or players will successfully unlock that value.

The integration of AI services could range from enhancing existing applications, such as a phone keyboard that transcribes voices, to standalone chatbots, but she remarked that it’s still uncertain what role AI will ultimately play.


Toner noted that the push for profitability presents risks that could overshadow the advancement race in AI.


"This reflects how companies are weighed down by the need to balance rapid product releases against the thorough testing required to implement additional safety measures, which can also complicate the user experience," she elaborated.

“Such companies must make these trade-offs while feeling the pressure to accelerate as much as possible.”

Toner also voiced concern about the gradual integration of AI systems into societal and governmental functions, warning that by the time the risks become clear, it may be too late to change course.

She expressed optimism regarding AI’s potential to enhance scientific research, drug discovery, and autonomous driving solutions like Waymo, which could significantly reduce road fatalities.

“With AI, the goal isn’t perfection; it’s to exceed existing alternatives. In the automotive sector, the alternative involves thousands of annual deaths. If we can improve that scenario, it’s remarkable; countless lives could be saved,” she articulated.

Toner humorously mentioned that a friend suggested potential actresses to portray her in the film.

“One suggestion was a stunningly talented actress,” she said. “Anyone they choose will definitely be a worthy pick.”

Source: www.theguardian.com

Enhancing Humanity: iPhone Designer Discusses New Collaboration with OpenAI

The iPhone designer has pledged that his upcoming AI-infused device will be guided by the belief that “humanity is better,” acknowledging his sense of “responsibility” for certain adverse effects of contemporary technology.

Sir Jony Ive mentioned that his new collaboration with OpenAI, the organization behind ChatGPT, aims to refresh its technological optimism amidst growing unease regarding the repercussions of smartphones and social media.

In an interview with the Financial Times, the London-born designer refrained from disclosing specifics about the devices he is working on at OpenAI but voiced concerns over people’s interactions with certain high-tech products.

“Many people would agree that there is an uncomfortable relationship with technology today,” he stated. He further emphasized that the design of the device is motivated by the notion that “we deserve better; humanity deserves better.”

However, Ive, the former chief design officer at Apple, expressed his feelings of accountability for the adverse effects produced by modern tech products. “Some of the negative outcomes were unintended, but I still feel responsible, and that drives my determination to create something beneficial.”

He added, “Whenever you create something new or innovate, the outcomes will be unpredictable; some will be wonderful, while others may cause harm.”

Just last month, Ive finalized the sale of his hardware startup IO to OpenAI in a $6.4 billion (£4.7 billion) transaction, under which he takes on creative and design leadership across the merged entity.

In a video announcing the deal, OpenAI CEO Sam Altman referred to the prototype devised by Ive as “the coolest technology the world has ever seen.”

Apple analyst Ming-Chi Kuo said the device is reportedly screenless, designed to be worn around the neck, and "compact and elegant like an iPod shuffle." Mass production is projected to commence in 2027.

According to The Wall Street Journal, this device is fully attuned to the user’s environment and life, described as a third essential device for users after the MacBook Pro and iPhone.

Ive, who began his journey at Apple in 1992, expressed that the OpenAI partnership has rekindled his optimism regarding the potential of technology.

“When I first arrived here, it was a place where people genuinely aimed to serve humanity, inspire individuals, and aid creativity; that was my draw. I don’t sense that spirit here currently,” he remarked.

Ive was interviewed alongside Laurene Powell Jobs, the widow of Apple co-founder Steve Jobs.

She pointed to research documenting the surge of anxiety and mental health challenges among teenage girls and young people.

Powell Jobs, whose firm Emerson Collective has invested in Ive's design venture LoveFrom, chose not to comment on whether the new OpenAI devices would rival Apple products.

“I still maintain close ties with Apple’s leadership,” she stated. “They are truly commendable individuals, and I hope for their success.”

Source: www.theguardian.com

OpenAI Acquires iPhone Architect Startup for $6.4 Billion in Tech Deal

OpenAI is set to acquire the hardware startup IO for $6.4 billion, its largest acquisition to date. IO was established by the Apple design legend Jony Ive, widely recognized as a key architect of the iPhone. In a blog post, Ive and OpenAI CEO Sam Altman said their partnership had been in the works for two years.

“Our collaboration, rooted in friendship, curiosity, and aligned values, has rapidly expanded in ambition,” they noted in their blog, offering minimal specifics about the forthcoming devices. “Initial concepts and explorations have refined into tangible designs.”

According to the blog post, Ive and other Apple alumni co-founded IO a year ago as part of a larger initiative called LoveFrom, which they describe as a "creative collective" of architects, artists, engineers, designers, musicians, and writers.

Ive departed from Apple in 2019 after spending 27 years as a leading product designer. He is celebrated for his minimalist aesthetics and meticulous attention to details such as packaging and typography. One of his early acclaimed designs was the vibrant, bubble-shaped iMac computer, followed by iconic products like the iPod, iPhone, MacBook Air, Apple Watch, and AirPods.

For his contributions to distinctive product design, Ive was knighted by Princess Anne at Buckingham Palace in 2012.

In the blog post shared on Wednesday, Altman and Ive stated that the IO team will integrate with OpenAI to foster closer collaboration with its research, engineering, and product divisions. Although Ive will not join OpenAI as an employee, his company will manage all of OpenAI's design, including software, according to Bloomberg.

Since launching LoveFrom and leaving Apple, Ive has largely kept a low profile, and IO has yet to unveil any hardware. Reports suggest, however, that LoveFrom has had clients such as Christie's, Airbnb, and Ferrari. Ive is also designing LoveFrom's headquarters in San Francisco, and The New York Times reported that he is tasked with creating the headquarters for the venture he is developing at OpenAI.

While OpenAI hasn't yet revealed any hardware products, it is signaling a future in that realm. The company has hired hardware and robotics experts, including Caitlin "CK" Kalinowski, who previously led Meta's augmented reality glasses initiative. In her LinkedIn announcement, Kalinowski said her new focus at OpenAI will be on "robotics projects and partnerships aimed at integrating AI into the physical realm." OpenAI is also investing in robotics startups, including Physical Intelligence, which states, "We intend to bring general AI into the physical world."

Investors have been actively funding OpenAI in recent years; the company carries a current valuation of $300 billion, according to Bloomberg. In March, OpenAI completed a $40 billion funding round led by the Japanese conglomerate SoftBank. Microsoft holds a 49% stake in the AI company after its $13 billion investment in 2023.


In addition to the acquisition of IO, OpenAI has also pursued other significant purchases in the past year. Earlier this month, it acquired the AI-assisted coding tool Windsurf for $3 billion, and last summer, it purchased Rockset, a real-time analytics database, for an undisclosed amount.

Source: www.theguardian.com

OpenAI in Negotiations to Acquire Programming Tool Windsurf for $3 Billion

OpenAI is reportedly negotiating to acquire Windsurf, an AI-driven programming tool, for approximately $3 billion, according to two informed sources.

This acquisition could potentially draw in thousands of new customers from the tech sector, as it swiftly embraces tools like Windsurf, which enables instant code generation.

Should the deal go through, it would represent OpenAI's largest acquisition to date, aiming to broaden its offerings beyond its well-known chatbot ChatGPT. Last year, OpenAI acquired Rockset, a startup whose database technology helps businesses build search and analytics infrastructure.

Windsurf, previously known as Codeium, was valued at $1.25 billion following a $150 million funding round led by the venture capital firm General Catalyst last year.

The agreement is not yet finalized, the two anonymous sources said. Bloomberg earlier reported the talks.

OpenAI already offers technology that can generate code. Windsurf, in fact, relies on OpenAI's technology, or similar systems from firms like Google and Anthropic, for its code generation.

About four years ago, researchers from companies such as OpenAI and Google started developing systems to analyze extensive text data sourced from the Internet, including digital books, Wikipedia articles, and chat logs. By recognizing patterns within this content, these systems can generate text, including poetry and news articles.

What surprised many was that these systems learned to write their own programming code. Today, developers use them to produce code and integrate it into large software projects through tools like Windsurf and Microsoft's Copilot.

(The New York Times has filed a lawsuit against OpenAI and its partner Microsoft, accusing them of copyright infringement over news content used in AI systems. Both OpenAI and Microsoft have denied these allegations.)

Developing technologies that enhance coding tools is incredibly costly for companies such as OpenAI, and startups face pressure to generate revenue.

OpenAI anticipates earning around $3.7 billion this year, according to financial documents reviewed by The New York Times. The company expects revenues to reach $11.6 billion next year.

In March, OpenAI concluded a $40 billion funding round, which valued the company at $300 billion, making it one of the most valuable private enterprises globally, alongside prominent players like TikTok parent company ByteDance and SpaceX. This funding round was led by Japan’s SoftBank.

However, scrutiny is placed on this transaction as OpenAI plans to revise its complex corporate structure, and failure to accomplish this by year-end could allow SoftBank to reduce its overall investment to $20 billion.

Source: www.nytimes.com

OpenAI Appoints Instacart CEO to Oversee Business and Operations

OpenAI announced late Wednesday that it has appointed Fidji Simo, the chief executive of Instacart, to lead its business and operations team.

In a blog post, OpenAI's CEO Sam Altman said he will continue to lead the company. Simo's new role as chief executive of applications will allow Altman to focus on other critical parts of the organization, such as research, computing, and safety systems.

“We have transformed into a global product company that serves hundreds of millions of users and grows rapidly,” Altman mentioned in his blog. He also noted that OpenAI has evolved into an “infrastructure company” delivering AI tools at scale.

“Each of these initiatives represents a significant endeavor that could stand alone as a large enterprise,” he wrote. “Attracting exceptional leaders is crucial for doing this effectively.”

Simo, who is on OpenAI’s board, will oversee sales, marketing, and finance while reporting directly to Mr. Altman.

Since the launch of its ChatGPT chatbot, OpenAI has grown rapidly and taken on a widening range of initiatives. Based in San Francisco, it has consistently introduced new AI models and products, including new reasoning systems. In March, the company completed a $40 billion funding round led by the Japanese conglomerate SoftBank, raising its valuation to $300 billion and positioning it among the world's most valuable private companies.

However, OpenAI, which began as a nonprofit organization, faces challenges in its transition to a new corporate structure. With the increasing commercial viability of AI, the company has been moving away from its nonprofit roots, attracting scrutiny from critics like Elon Musk, an OpenAI co-founder who has sued the company, alleging it prioritizes profit over AI safety. Both the California Attorney General and Delaware authorities are looking into the restructuring.

On Monday, OpenAI indicated that its revised plan would preserve the nonprofit, ensuring it retains some control.

(The New York Times has filed a lawsuit against OpenAI and its partner Microsoft, accusing them of copyright infringement related to news content concerning AI systems. OpenAI and Microsoft have denied these allegations.)

In a statement released later on Wednesday, Simo expressed her belief that the opportunity “could accelerate human potential at an unprecedented pace, and I am wholeheartedly committed to steering these applications for the public good.”

In a memo to her Instacart team, she conveyed her “passion for AI, especially its potential to cure diseases,” emphasizing that “leading such a pivotal part of our collective future is an opportunity I cannot pass up.”

Simo will remain at Instacart for the next few months while the company searches for her successor, who is expected to come from Instacart's management team. She will also stay on Instacart's board of directors as chair.

“Today’s announcement does not signify any changes in our business operations,” Instacart affirmed in a statement.

Source: www.nytimes.com

OpenAI Reverses Course, Confirms Nonprofit Will Maintain Control of the Company

OpenAI has reversed its decision regarding the transition to a for-profit model: the nonprofit parent will continue to oversee the operations that produce ChatGPT and other AI products. The company had initially sought greater autonomy for its for-profit entities.

"We listened to feedback from civic leaders and consulted with the California Attorney General and the Delaware office before the nonprofit opted to retain control," said CEO Sam Altman in a letter to employees. Bret Taylor, chair of OpenAI's nonprofit board, affirmed that the decision was made to ensure the nonprofit maintains oversight of OpenAI.

According to a company press release, the segment of OpenAI’s for-profit organization led by Altman, which secured billions in funding, will aim for profit but will transition to a public benefit corporation. This corporate framework is mission-driven, requiring a balance between shareholder profit and public benefit. The nonprofit will continue to hold significant control as a major shareholder of these public benefit corporations.


Initially founded by Altman and Tesla CEO Elon Musk, OpenAI started as a nonprofit research organization with the goal of safely developing artificial general intelligence (AGI) for the benefit of humanity. Nearly a decade later, OpenAI boasts a valuation of $300 billion and some 400 million weekly users of its flagship product, ChatGPT.

OpenAI has encountered several challenges in restructuring its governance. A significant hurdle has been a lawsuit from Musk, who criticized the company and Altman for betraying the ethical principles that motivated his initial investment. Following his departure, Musk established a rival AI firm called xAI, which recently acquired Twitter, now known as X. OpenAI has so far prevailed in its conflict with Musk, who has struggled in the wake of OpenAI's growing success.

Source: www.theguardian.com

OpenAI Reverses Course on Plan to Remove Nonprofit's Control


On Monday, OpenAI announced its transition into a public benefit corporation, with the nonprofit that oversees OpenAI retaining significant influence over the organization.

The nonprofit will stand as OpenAI’s primary shareholder.

OpenAI's CEO Sam Altman co-founded the organization in late 2015 with several other Silicon Valley figures, including Elon Musk. In 2018, after Musk departed amid internal disputes, Altman attached OpenAI to a commercial entity to secure the funding necessary for advancing AI technologies.

Nevertheless, the company's leadership came to see the unconventional model as a hindrance to its progress. Last year, Altman and his team initiated plans to shift authority from the nonprofit to OpenAI's investors.

However, the organization’s intentions were thwarted, and the nonprofit continues to maintain control. This outcome was seen as a win for OpenAI’s critics, including Musk, who accused the company of prioritizing profits over its initial commitment to developing a safe AI system.

Public benefit corporations are entities created to generate public and social value while allowing outside investors to participate much as they would in a traditional company.

At a press conference, Altman expressed satisfaction with the nonprofit’s decision to uphold control, stating that the new structure “provides us with a clearer framework to fulfill our company’s aspirations.”

OpenAI mentioned it is still in discussions regarding the nonprofit’s equity in the new organization, with the nonprofit responsible for appointing board members for the new company.

Recently, the Japanese conglomerate SoftBank spearheaded a $40 billion funding round in OpenAI, which has been valued at $300 billion. If the restructuring isn’t finalized by year-end, SoftBank retains the option to reduce its overall investment to $20 billion, according to sources familiar with the latest funding developments.



Source: www.nytimes.com

OpenAI accuses Elon Musk of "illegal harassment" of the company

OpenAI, the developer of ChatGPT, has hit back at the billionaire Elon Musk, accusing him of harassing the company and asking a US federal judge to intervene and halt Musk's "illegal and unfair behavior" towards it.

Established in 2015 by Musk and CEO Sam Altman, OpenAI has been the subject of ongoing disputes between the two founders as it transitions from a complex nonprofit structure to a more conventional for-profit business.

Musk sued over the restructuring plan about a year ago, alleging that it betrayed the company's founding mission by prioritizing profits over humanity's interests. Although he withdrew that lawsuit in June, he filed a new one in August.

In February of this year, Musk led a consortium of investors in a surprise $97.4 billion bid for the company. Altman promptly rejected the offer, noting that Musk had acquired Twitter, since rebranded as X, for $44 billion in 2022.

In a recent filing in California's district court, OpenAI accused Musk of using various tactics to harm the company, including press attacks, malicious campaigns broadcast to his large social media following, demands for access to corporate records, legal harassment, and sham bids for OpenAI's assets.

OpenAI urged the judge to put a stop to Musk's attacks and hold him accountable for the damage he has caused. The trial is set to commence in the spring of 2026.

Musk left OpenAI in 2018 and founded his own company, xAI. This year's bid for OpenAI had the backing of xAI and other investment firms, including one led by Joe Lonsdale, a co-founder of the spy-technology company Palantir.

Musk's camp has criticized OpenAI for deviating from its original charitable mission by creating a for-profit subsidiary to raise funds from investors like Microsoft. Despite its nonprofit beginnings, OpenAI argues that the new structure is required to fund the development of more advanced AI models.

Recently, OpenAI secured a $40 billion funding round from investors like SoftBank, valuing the company at $300 billion. The funds will be used to further AI research, expand computing infrastructure, and provide improved tools for the millions of people using ChatGPT each week.


Since the viral success of ChatGPT in 2022, OpenAI has weathered various corporate controversies. In 2023, the board removed Altman, citing issues with his communication transparency. After much internal unrest, Altman was reinstated within a week following resignation threats from much of the company's staff.

Source: www.theguardian.com

OpenAI seeks court order barring Elon Musk from "unfair" attacks

OpenAI asked a federal court on Wednesday to bar Elon Musk from unfairly attacking the company, in a countersuit to the lawsuit he filed last year.

In a filing in federal court in San Francisco, OpenAI stated that Musk "initiated his project to defeat OpenAI." The company asked the court to order the tech billionaire to cease all actions against OpenAI and is seeking damages for the harm Musk has caused.

The filing highlighted the ongoing conflict between Musk, a co-founder of OpenAI, and the company over the direction of the technology. Last year, Musk sued OpenAI and its founders, Sam Altman and Greg Brockman, accusing them of prioritizing commercial interests over the public interest in the technology.

OpenAI stated: "Elon continues to engage in bad faith tactics to hinder OpenAI's progress for his own benefit. These actions are anti-competitive and contradict our mission."

Musk and his legal representatives did not immediately respond to requests for comment.

(The New York Times has filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement over news content used in AI systems. OpenAI and Microsoft have denied the allegations.)

Musk helped found OpenAI as a nonprofit organization in late 2015, alongside Altman and others. However, disputes over control of the company led Musk to exit the organization. OpenAI has since launched ChatGPT and become a prominent AI player with millions of users, and Altman has secured significant funding to develop its AI technology.

Last year, OpenAI began transitioning from a nonprofit entity to a company owned by investors. Shortly after, Musk sued Altman and Brockman, alleging they violated the company's founding agreement by prioritizing commercial gains over the public interest.

This year, Musk and a group of investors proposed acquiring the assets of the controlling nonprofit for over $97 billion, an offer OpenAI's board rejected.

In a recent filing, OpenAI criticized Musk's bid as "deceptive" and said it misrepresented the company's plans to change its structure.

"Musk is making false claims that OpenAI plans to convert from a nonprofit to a for-profit entity," the filing stated.

OpenAI clarified that it is considering restructuring as a public benefit corporation (PBC), a for-profit entity that aims to serve public and social interests.

In another development, a coalition of nonprofit, labor, and charity leaders submitted a petition urging California Attorney General Rob Bonta to investigate OpenAI's efforts to convert into a public benefit corporation.

Source: www.nytimes.com