US authors’ copyright lawsuits against OpenAI and Microsoft consolidated in New York

Twelve US copyright lawsuits against OpenAI and Microsoft, brought by authors and news outlets, have been consolidated in New York.

According to a transfer order from the US Judicial Panel on Multidistrict Litigation, centralization can help coordinate discovery, streamline pretrial litigation, and avoid inconsistent rulings.

Prominent authors including Ta-Nehisi Coates, Michael Chabon, Junot Díaz, and comedian Sarah Silverman originally filed their cases in California, but those cases will now move to New York to join suits brought by news outlets such as The New York Times. Other authors, including John Grisham, George Saunders, Jonathan Franzen, and Jodi Picoult, are also involved in the lawsuits.

Although most plaintiffs opposed consolidation, the transfer order found that the cases share factual questions relating to allegations that OpenAI and Microsoft used copyrighted works without consent to train large language models (LLMs) for AI products such as OpenAI’s ChatGPT and Microsoft’s Copilot.

OpenAI initially proposed consolidating the cases in Northern California, but the panel moved them to the Southern District of New York for the convenience of parties and witnesses and to ensure the just and efficient conduct of the cases.

Tech companies argue that using copyrighted works to train AI falls under the doctrine of “fair use,” but many plaintiffs, including authors and news outlets, disagree.

An OpenAI spokesperson welcomed the development, saying the company trains its models on publicly available data to support innovation. Meanwhile, a lawyer representing the Daily News said he looks forward to proving in court that Microsoft and OpenAI have infringed its copyrights.

Some of the authors suing OpenAI have also filed suits against Meta for copyright infringement in AI model training. Court filings in January revealed allegations that Meta CEO Mark Zuckerberg approved the use of copyrighted materials in AI training.

Amazon recently announced a new Kindle feature called “Recaps” that uses AI to generate summaries of books. While the company frames it as a convenience for readers, some users have raised concerns about the accuracy of AI-generated summaries.

In the UK, the government is grappling with concerns from peers and Labour MPs over its copyright proposals and is being urged to assess the economic impact of its AI plans.

This article was revised on April 4, 2025. Previous versions misidentified which outlet lawyer Steven Lieberman represents; he acts for the Daily News.

Source: www.theguardian.com

OpenAI secures record $40 billion investment led by SoftBank

OpenAI announced a $40 billion funding round that valued the ChatGPT maker at $300 billion. Partnering with SoftBank, OpenAI aims to push the boundaries of AI research towards AGI (artificial general intelligence) with significant computing power.

SoftBank believes in achieving “artificial super intelligence” (ASI) that surpasses human intelligence, and praised OpenAI as the best partner to reach this goal. SoftBank plans to invest $10 billion initially, with the remainder to follow by the end of 2025, subject to certain conditions.

Facing competition from DeepSeek and Meta in the open-source AI space, OpenAI announced plans to develop a more open generative AI model. OpenAI is also expanding its user base rapidly on the back of ChatGPT’s latest image-generation features.


OpenAI, led by CEO Sam Altman, previously favored a closed model of AI development. With evolving priorities, however, it is now embracing openness to give developers more flexibility in adapting AI technologies.

Critics of open AI models, including Google, argue that they pose higher risks and are more susceptible to misuse. Former OpenAI investor Elon Musk, meanwhile, urges OpenAI to prioritize open-source development and safety.

Companies and governments often prefer AI models they can control for data-security reasons. Meta and DeepSeek offer customizable models that users can download and modify to suit their needs.

Commenting on the success of ChatGPT’s new features, Altman mentioned a surge in users that overwhelmed OpenAI’s resources, underscoring the growing demand for AI tools.

Agence France-Presse

Source: www.theguardian.com

OpenAI Closes Funding Round Valuing Company at $300 Billion

OpenAI announced on Monday that it had finalized a $40 billion funding agreement, nearly doubling the company’s valuation from six months ago.

Led by SoftBank, the new funding round valued OpenAI at $300 billion, positioning it among the most valuable private companies alongside the rocket company SpaceX and ByteDance, the parent company of TikTok.

The investment round follows the launch of the AI chatbot ChatGPT in late 2022 and demonstrates the tech industry’s continued excitement about AI.

OpenAI CEO Sam Altman said the investment will drive innovation and make AI more beneficial in everyday life.

OpenAI also revealed that 500 million people actively use ChatGPT weekly, with 20 million paying for the advanced version of the chatbot.

According to sources, the $40 billion investment will be split into two tranches, with SoftBank Group contributing 75% of the total amount.

Altman founded OpenAI as a nonprofit in 2015 with Elon Musk, adding a commercial arm in 2019 to attract the funding needed for AI development.

Plans are in motion to shift management of the company to a for-profit entity known as a public benefit corporation.

Musk has filed a lawsuit against OpenAI and Altman, accusing them of prioritizing commercial interests over the public good.

OpenAI aims to complete the transition to a public benefit corporation by the end of the year, or risk a reduction in SoftBank’s contribution.

A bid from Musk and a group of investors to acquire OpenAI’s assets was rejected by the board of directors.

Altman’s efforts to separate the company from the nonprofit may face challenges due to the ongoing legal issues.

(OpenAI and Microsoft are facing a lawsuit alleging copyright infringement related to AI news content, which they have denied.)

Source: www.nytimes.com

OpenAI introduces a new image generator feature for ChatGPT

Chatbots were originally designed to chat. But they can generate images too.

On Tuesday, OpenAI upgraded its ChatGPT chatbot with new technology designed to generate images from detailed, complex, and unusual instructions.

For example, given a description of a four-panel comic strip, including the characters that appear in each panel and what they say to each other, the technology can instantly generate an elaborate comic.

Previous versions of ChatGPT could generate images, but they could not reliably blend such broad concepts into a single image.

The new version of ChatGPT illustrates a broader shift in artificial intelligence technology. Having started as mere text-generating systems, chatbots have transformed into tools that combine chat with a variety of other abilities.

The technology also powers a new version of ChatGPT based on GPT-4o, which allows the chatbot to receive and respond to voice commands, images, and videos; users can even hold a spoken conversation with it.

Released at the end of 2022, the original ChatGPT learned its skills by analyzing huge amounts of text from across the internet. It learned to answer questions, write poetry, and generate computer code.

It could not generate images. About a year later, OpenAI added image generation to ChatGPT through DALL-E, but ChatGPT and DALL-E were separate systems.

Now, OpenAI has built a single system that learns a wide range of skills from both text and images. When generating images, the system can draw on everything ChatGPT has learned from the internet.

“This is a whole new kind of technology under the hood,” said Gabriel Goh, a researcher at OpenAI, explaining that image generation and text generation are no longer separate processes but are handled together.

Traditionally, AI image generators have struggled to create images that differ significantly from existing ones. Asking an image generator for a picture of a bike with triangular wheels, for example, was a struggle.

Goh said the new ChatGPT could handle this type of request.

Images of a “triangular-wheeled vehicle” made using OpenAI’s new ChatGPT image generator.

OpenAI said that starting Tuesday, this new version of ChatGPT will be available to people using both the free and paid versions of the chatbot. This includes ChatGPT Plus, a $20-a-month service, and ChatGPT Pro, a $200-a-month service that provides access to all the company’s latest tools.

(The New York Times sued OpenAI and its partner Microsoft in December, alleging copyright infringement of news content related to AI systems.)

Source: www.nytimes.com

UK Regulator Abandons Review of Microsoft’s Partnership with OpenAI

The UK competition watchdog has decided not to open a formal investigation into Microsoft’s partnership with OpenAI, the startup behind the AI chatbot ChatGPT. The tech company, valued at $2.9tn (£2.3tn), was found to have a “material influence” over OpenAI but not to exercise control over it.

While the Competition and Markets Authority (CMA) acknowledged Microsoft’s significant financial support of OpenAI, amounting to a $13 billion investment, it concluded that Microsoft’s influence did not meet the threshold for a formal investigation because it falls short of control.

The CMA’s decision comes amid concerns over the appointment of former Amazon UK boss Doug Gurr as its interim chairman. The CMA’s chief executive, Sarah Cardell, has emphasized the need to maintain business confidence without creating undue regulatory pressure.


Joel Bamford, the CMA’s executive director of mergers, stated that because there was no change in control, the current partnership structure did not warrant review under the UK’s merger regulations.

However, Bamford clarified that this decision does not imply that the partnership has been cleared of competitive concerns.

The CMA began examining the relationship after the boardroom turmoil in which Sam Altman was briefly ousted and then reinstated as OpenAI’s chief executive, and it cited OpenAI’s decreasing reliance on Microsoft for computing power as a factor in its decision.

A Microsoft representative emphasized that the partnership with OpenAI supports competition, innovation, and responsible AI development. The decision to end the investigation was made after careful consideration of commercial realities.

Last year, the CMA chose not to investigate Amazon’s investment in AI companies, and it similarly declined to look deeper into Microsoft’s arrangements with Mistral and Inflection.


Microsoft recently invested $6.6 billion in OpenAI as part of a funding round that valued the company at $157 billion. OpenAI, overseen by a nonprofit board, operates through for-profit subsidiaries, with Microsoft the biggest backer of those subsidiaries.

Despite concerns over Gurr’s appointment and government pressure to avoid harming economic growth, the CMA has continued to scrutinize Big Tech. Alongside investigations into Google’s dominance of internet search, the CMA is also examining the effects of Apple’s and Google’s mobile platforms on consumers and businesses.

In January, Microsoft criticized the CMA’s cloud market survey, claiming it impedes tech companies from effectively competing with Google and Amazon in cloud computing services.

Source: www.theguardian.com

OpenAI introduces Sora video generation tool in UK amid copyright dispute

OpenAI, the artificial intelligence company behind ChatGPT, has introduced its video generation tool in the UK, highlighting growing tension between the tech sector and the creative industries over copyright.

Film director Beeban Kidron spoke out about the release of Sora in the UK, noting its impact on the ongoing copyright debate.

OpenAI, based in San Francisco, has made Sora accessible to UK users who subscribe to ChatGPT. The tool startled filmmakers when it was unveiled last year: TV mogul Tyler Perry halted a studio expansion out of concern that it could replace physical sets and locations. Sora was initially launched in the US in December.

Users can generate videos with Sora by entering simple prompts, such as requesting scenes of people walking through “beautiful, snowy Tokyo city.”

OpenAI has now introduced Sora in the UK and mainland Europe, where it was also released on Friday, and artists are already reported to be using the tool. One user, Josephine Miller, a 25-year-old British digital artist, created a video with Sora featuring a model adorned in bioluminescent designs, praising the tool for opening up opportunities for young creatives.

‘Biolume’: Josephine Miller uses OpenAI’s Sora to create striking footage – video

Despite the launch of Sora, Kidron emphasized the significance of the ongoing UK copyright and AI debate, particularly in light of government proposals that would permit AI companies to train their models on copyrighted content.

Kidron raised concerns about whether copyrighted material was used ethically to train Sora, pointing out potential violations of terms and conditions if unauthorized content was used. She stressed the importance of upholding copyright law in the development of AI technologies.

YouTube has previously indicated that using its material without proper licensing to train AI models such as Sora could breach its terms. Concerns remain about the origin and legality of the datasets used to train these AI tools.

The Guardian reported that policymakers are exploring options for offering copyright concessions to certain creative sectors, further highlighting the complex interplay between AI, technology, and copyright laws.


Sora allows users to craft videos ranging from 5 to 20 seconds, with an option to create longer videos. Users can choose from various aesthetic styles like “film noir” and “balloon world” for their clips.

Source: www.theguardian.com

OpenAI rejects $97.4 billion bid from Musk, asserts company is not for sale

OpenAI has rejected a $97.4 billion bid for the ChatGPT maker from a consortium led by billionaire Elon Musk, stating that the startup is not up for sale.

The unsolicited offer is Musk’s latest attempt to derail the startup he co-founded alongside its CEO, Sam Altman.

“OpenAI is not for sale,” the company said, adding that the board had unanimously turned down what it called Musk’s latest attempt to disrupt his competition. OpenAI emphasized that its mission is to ensure that AGI benefits humanity and pointed to its planned corporate reorganization.

Altman confirmed in an interview with Axios that OpenAI is not for sale, and he responded to Musk’s offer with a simple “no thanks,” prompting Musk to call him a “swindler.”

The consortium, which includes Musk’s AI startup xAI, said in a court filing on Wednesday that it would withdraw its bid for OpenAI’s nonprofit if the board halted its plans to become a for-profit organization.

Two days earlier, the consortium had introduced new terms for the proposal through a court filing; OpenAI has countered in its own filing that the consortium’s published “bids” were not actual bids at all. The OpenAI board communicated its position to Musk’s lawyer on Friday.

Other investors in the consortium include Valor Equity Partners, Baron Capital, and Hollywood power broker Ari Emanuel.

Altman and Musk have been in conflict for several years.

After Musk’s departure, OpenAI established a for-profit division in 2019 that attracted significant fundraising, leading Musk to claim that the startup was deviating from its original mission and focusing more on profits than the public good.

Musk filed a lawsuit against Altman, OpenAI, and their major backer Microsoft in August last year on grounds of breach of contract.

In November, Musk requested a preliminary injunction from a US district judge to prevent the transition to a for-profit structure.

Source: www.theguardian.com

Elon Musk offers to withdraw $97 billion bid if OpenAI remains a nonprofit

Elon Musk has stated that he will retract his $97 billion offer to purchase the nonprofit organization behind OpenAI if the maker of ChatGPT agrees to abandon plans to convert itself into a for-profit entity.

If the board of OpenAI, Inc is willing to uphold its charitable mission and stipulate that any “sale” proceeds without a conversion, Musk will withdraw his bid, the filing stated on Wednesday. If not, the nonprofit must be compensated based on the amount a prospective buyer would pay for its assets.

Earlier this week, Musk and a group of investors made their offer, adding a new twist to the ongoing controversy surrounding the artificial intelligence company he co-founded a decade ago.


OpenAI is currently operated by a nonprofit board dedicated to its original mission of developing AI that is safe and more capable than humans for the public good. However, as the business has grown, it has announced plans to formally change its corporate structure.

Musk, along with his AI startup xAI and a group of investment firms, is seeking control over OpenAI’s nonprofit as the company moves to turn its business into a for-profit entity.

OpenAI CEO Sam Altman swiftly dismissed the unsolicited offer in a social media post, reiterating at the AI summit in Paris that the company is not for sale. OpenAI’s board chairman, Bret Taylor, echoed these sentiments at an event on Wednesday.

Musk and Altman were instrumental in launching OpenAI in 2015, but disagreements over its leadership led to Musk stepping down from the board in 2018; he sued the company in 2024.

Speaking by video call at the World Government Summit in Dubai, Musk criticized Altman once again, likening OpenAI’s shift to a nonprofit founded to save the Amazon rainforest turning itself into a timber company. Altman countered that Musk’s legal challenges were driven by his competing startups.


Musk is currently seeking a California federal judge’s intervention to block OpenAI’s commercial conversion, alleging breach of contract and antitrust violations. While the judge has expressed doubt about some of Musk’s arguments, no ruling has been issued yet.

Source: www.theguardian.com

Elon Musk owning OpenAI could have dire consequences, but it could still happen

Elon Musk and Sam Altman are not exactly best friends. Altman’s pursuit of a for-profit approach for OpenAI, the company founded in 2015, seems to have irked Musk: a focus on making money rather than advancing humanity’s interests clashed with Musk’s vision for OpenAI.

As a result, Musk, who previously acquired Twitter, now runs his own AI venture, xAI, which competes directly with OpenAI.

Musk, now charged with making the US government lean, efficient, and globally influential, made a substantial bid of nearly $100 billion for OpenAI’s nonprofit arm, emphasizing the need for OpenAI to return to its original open-source, safety-focused model. The bid was rejected by Altman, who joked that he would buy Twitter for $9.74 billion instead.

Musk has framed the bid as being not about enriching investors or inflating corporate valuations, but about steering AI development towards societal benefit. Although the bid to reclaim control of OpenAI’s nonprofit was significant, its outcome remains uncertain.

The ongoing feud between Musk and Altman may escalate further, especially considering the history of their disagreements. Musk’s bid to take over OpenAI’s nonprofit could be seen as an attempt to thwart Altman’s for-profit ambitions for the company.

Elon Musk and Donald Trump, Washington, January 19, 2025. Photo: Brian Snyder/Reuters

Musk’s bid for OpenAI’s nonprofit is open to multiple interpretations, ranging from a strategic move to a mere publicity stunt. Given Musk’s penchant for unconventional actions, the true motives behind it remain uncertain.

There are various theories regarding the significance of the bid, including references to literature and playful numbers. However, the bid’s seriousness cannot be discounted, especially in light of potential political implications.

The bid may also reflect Musk’s attempt to disrupt the status quo and reshape the future trajectory of AI development. The possibility of Musk and OpenAI merging in the future cannot be ruled out entirely, given the unpredictable nature of the current situation.

Source: www.theguardian.com

Elon Musk leads group in unexpected $97.4 billion bid for OpenAI

Elon Musk escalated his dispute with OpenAI and its CEO Sam Altman on Monday. The billionaire heads a group of investors that revealed it had submitted a $97.4 billion bid for “all assets” of the artificial intelligence company to OpenAI’s board of directors.

The startup behind ChatGPT is in the process of transitioning away from its original noncommercial status and already operates a for-profit subsidiary, so Musk’s unsolicited offer could complicate the company’s plans. The Wall Street Journal first reported the proposed bid.

“If Sam Altman and the current OpenAI, Inc. board of directors are intent on becoming a fully for-profit corporation, it is vital that the charity be fairly compensated for what its leadership is taking away from it. It’s time,” stated Marc Toberoff, a lawyer representing the investors.

Altman responded shortly after the news broke, posting: “No thank you, but we will buy Twitter for $9.74 billion if you want.” Musk, who acquired Twitter for $44 billion in 2022 and rebranded it as X, replied: “Swindler.”

Musk co-founded OpenAI but left the company in 2019, later starting his own AI company, xAI. There have been ongoing disagreements between him and Altman over the company’s direction: Musk sued OpenAI over its restructuring plan, dropped the lawsuit, and then reignited the conflict.

The bid is backed by xAI and several investment firms, including those managed by Joe Lonsdale, who co-founded the government contractor Palantir. Ari Emanuel, CEO of the entertainment company Endeavor, also joined the group through his investment fund.

“At xAI, we adhere to the values that OpenAI committed to uphold. Grok has fostered open source. We respect the rights of content creators,” Musk stated. “It’s time for OpenAI to return to its roots as an open-source, safety-focused force. We will ensure that happens.”

Toberoff told the Wall Street Journal that Musk’s consortium of investors is prepared to match or exceed any other potential bids.

OpenAI argues that the restructuring is crucial for the company’s sustainability and access to capital, claiming that the nonprofit structure alone cannot keep up with the highly competitive world of AI innovation. OpenAI anticipates the restructuring will be completed by 2026.


Musk is a close associate of Donald Trump, and Altman has also met with the president and attended the inauguration. Trump has named OpenAI as part of a group of AI companies collaborating on a $500 billion venture called Stargate to invest in AI infrastructure. Musk’s xAI is not included in this agreement.

Source: www.theguardian.com

OpenAI set to launch “deep research” tool designed to rival research analysts

OpenAI has pushed artificial intelligence development further by introducing a new tool that it claims produces reports comparable to those of research analysts.

The ChatGPT developer has dubbed the tool “deep research,” saying it can accomplish in about 10 minutes tasks that would take a human hours.

The announcement comes shortly after the San Francisco-based company said it would accelerate product releases in response to progress made by OpenAI’s competitor DeepSeek.

Deep research is an AI agent, meaning users can delegate tasks to it, and it is powered by a version of o3, OpenAI’s latest cutting-edge model.

OpenAI explained that deep research scours hundreds of online sources, sifting through massive amounts of text, images, and PDFs, then analyzes and integrates what it finds into comprehensive reports.

The company views agentic tools like deep research as essential steps towards artificial general intelligence: AI that matches or exceeds human intelligence across a wide range of tasks.

Last month, OpenAI unveiled an AI agent named Operator, claiming it can order shopping online based on a photo of a shopping list, albeit only in a US-only preview version.

In a demonstration video released on Sunday, OpenAI showed deep research analyzing the translation-app market, stating that each task takes 5 to 30 minutes to complete, with proper sourcing.

OpenAI highlighted that deep research targets experts in fields like finance, science, and engineering, but said it can also help with major purchases such as cars and furniture.

Because it leverages OpenAI’s latest “reasoning” model, o3, deep research processes queries more slowly than traditional models. The o3 model itself has so far been released only in part, as o3-mini, a slimmed-down version.

The full capabilities of the o3 model were outlined in the recent international AI safety report, prompting concerns from experts such as Yoshua Bengio about the potential risks posed by AI advancements.


Deep research is accessible to OpenAI’s Pro users in the US for $200 (£162) a month, with a monthly limit on queries due to processing constraints. It is not yet available in the UK or Europe.

Andrew Rogoyski, a director at an AI institute at the University of Surrey, cautioned about the potential dangers of blindly relying on deep research tools without thoroughly verifying their outputs.

“Knowledge-intensive AI faces a fundamental challenge: human validation and verification are crucial to ensure the accuracy of machine analysis,” said Rogoyski.

Source: www.theguardian.com

OpenAI releases new AI model for free

OpenAI has released a new artificial intelligence model for free, after saying it would accelerate product releases in response to the emergence of Chinese competitors.

The company behind ChatGPT introduced the model, called o3-mini, following the unexpected success of a rival product from China’s DeepSeek. Users of OpenAI’s free chatbot face some restrictions but can use it at no cost.

DeepSeek caused a stir among US tech investors with the release of an inference model that powers its chatbot. News that it had topped Apple’s free app store chart, and claims that it was developed at minimal cost, wiped roughly $1 trillion off the tech-heavy Nasdaq index on Monday.

OpenAI’s CEO Sam Altman responded to DeepSeek’s challenge by promising to deliver superior models and to speed up product releases. On January 23 he announced the upcoming release of o3-mini, a slimmed-down version of the full o3 model.

“Today’s launch marks the introduction of a reasoning function for free users, a crucial step in expanding AI accessibility for practical applications,” OpenAI stated.

R1, the technology behind DeepSeek’s chatbot, not only rivals OpenAI’s performance but also requires fewer resources. Investors have questioned whether US companies can maintain control of the AI market despite billions of dollars invested in AI infrastructure and products.

OpenAI said the o3-mini model is on par with o1 in mathematics, coding, and science, but is more cost-effective and faster. The $200-a-month Pro package provides unlimited access to o3-mini, while lower-tier paying users get higher usage limits than free users.


The capabilities of the full o3 model were highlighted in the international AI safety report released on Tuesday. The study’s lead, Yoshua Bengio, emphasized that its potential impact on AI risk could be significant, noting that o3’s performance in major abstract reasoning tests, outperforming many human experts in some cases, marked a surprising breakthrough.

Source: www.theguardian.com

SoftBank in talks to invest up to $25 billion in OpenAI

SoftBank, the Japanese investment group, is in talks to invest up to $25 billion (£20 billion) in OpenAI, which would make it the largest financial backer of the startup behind ChatGPT.

According to the Financial Times, the potential investment in the San Francisco-based company could range from $15 billion to $25 billion.

SoftBank, whose other investments include TikTok’s parent company, ByteDance, and British chip designer Arm, has already backed OpenAI, recently participating in a fundraising round that valued the company at $157 billion. Microsoft, currently OpenAI’s largest shareholder, also joined that round.

Last week, OpenAI and SoftBank announced the formation of Stargate in collaboration with Oracle, which Donald Trump called “the largest AI infrastructure project in history.” The partnership aims to build data centers for AI systems with an initial investment of $100 billion.

Multiple sources familiar with the matter quoted by the FT said that SoftBank’s potential investment includes the Japanese company’s commitment to Stargate. Elon Musk, the world’s richest person and a prominent figure in the Trump administration, has claimed that Stargate’s backers may not actually have the funds.

Sam Altman, the CEO of OpenAI, rejected Musk’s claims on Musk’s social media platform X, saying the project was great for the country and adding that while what is great for the country may not always be optimal for Musk’s companies, he hoped Musk would put the country first in his new role.

OpenAI faced new competition this month from its Chinese rival DeepSeek, whose latest chatbot topped Apple’s free app store chart and hit AI-related stocks on Monday.

Altman initially acknowledged the competition from DeepSeek, saying it was invigorating to have a new competitor, but later claimed the Chinese company may have used OpenAI’s technology to develop competing products.

The proposal for SoftBank’s investment in OpenAI, led by its CEO Masayoshi Son, is reportedly under review by OpenAI’s senior executives and board, although it has not been confirmed.


Both OpenAI and SoftBank have declined to comment on the matter.

Source: www.theguardian.com

OpenAI warns that Chinese rivals are distilling US AI models to build competitors

OpenAI has warned that emerging Chinese companies, including DeepSeek, are developing competing products using the ChatGPT maker’s AI models.

OpenAI and Microsoft, which has invested $13 billion in the San Francisco-based AI developer, are now looking into whether their proprietary technology was illegally obtained through a process known as distillation.

The latest chatbot from DeepSeek has caused quite a stir in the market, topping Apple’s free app store rankings and wiping roughly $1 trillion off the market value of US tech stocks related to AI. The impact stems from claims that the AI model behind DeepSeek was trained at a fraction of the cost and with a fraction of the hardware used by competitors like OpenAI and Google.

OpenAI’s CEO, Sam Altman, initially praised DeepSeek, calling it an impressive new competitor.

However, OpenAI later said it had seen evidence of “distillation” by a Chinese company: a technique in which the outputs of a large, advanced model are used to train a smaller model to achieve similar results on a specific task at much lower cost. OpenAI’s statement did not explicitly name DeepSeek.
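The distillation idea described above can be sketched in a few lines: a small “student” model is trained to match the softened output probabilities of a larger “teacher,” rather than hard labels. The toy example below is a minimal illustration in plain Python with no ML framework; the logits, temperature, and learning rate are all illustrative assumptions, not anyone’s actual training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, optionally softened by a temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical teacher logits for one input over three classes
# (a stand-in for the output of a large "teacher" model).
teacher_logits = [4.0, 1.0, 0.2]

# Distillation trains the student on the teacher's *softened* distribution
# (temperature > 1), which carries more information than a hard label.
T = 2.0
soft_targets = softmax(teacher_logits, temperature=T)

def distill_step(student_logits, targets, temperature, lr=1.0):
    """One gradient-descent step on cross-entropy(targets, student)."""
    probs = softmax(student_logits, temperature)
    # d(cross-entropy)/d(logit_i) = (probs_i - targets_i) / temperature
    return [s - lr * (p - t) / temperature
            for s, p, t in zip(student_logits, probs, targets)]

student_logits = [0.0, 0.0, 0.0]  # the small "student" starts uninformed
for _ in range(1000):
    student_logits = distill_step(student_logits, soft_targets, T)

# After training, the student's distribution closely tracks the teacher's.
student_probs = softmax(student_logits, T)
```

In practice the same idea is applied at scale by querying a large model for outputs over many inputs and fitting a much smaller network to them, which is why model providers treat large volumes of automated queries as a distillation red flag.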

An OpenAI spokesperson stated: “We are aware that Chinese companies, and others, are continuously attempting to distill the models of major US AI companies. As the leading AI developer, we take measures to protect our IP, including a careful process for deciding which cutting-edge features to include in released models.”

OpenAI has itself faced allegations of training its models on data used without authorization from publishers and the creative industries, even as it works to prevent distillation of its own models.

The OpenAI spokesperson emphasized the importance of collaborating with the US government to safeguard the company’s most advanced models from efforts by adversaries and competitors to replicate US technology.

Donald Trump’s recent statement highlighted DeepSeek’s impact within Silicon Valley. Photograph: Lionel Bonaventure/AFP/Getty Images

Source: www.theguardian.com

Trump Reveals $500 Billion Partnership in Artificial Intelligence with OpenAI, Oracle, and SoftBank

Donald Trump has initiated what he refers to as “the largest AI infrastructure project in history,” a $500 billion collaboration involving OpenAI, Oracle, and SoftBank, with the goal of establishing a network of data centers throughout the United States.

The newly formed partnership, named Stargate, will construct the data centers and computing infrastructure needed to propel the advancement of artificial intelligence. Trump said the venture would put more than 100,000 people to work “almost immediately,” emphasizing the objective of creating jobs in America.

This announcement marks one of Trump’s first significant business moves since his return to office, as the U.S. seeks new strategies to maintain its AI edge over China. It was made at an event attended by Oracle’s Larry Ellison, SoftBank’s Masayoshi Son, OpenAI’s Sam Altman, and other prominent figures.

President Trump said he intends to use emergency declarations to speed the project’s development, particularly its energy infrastructure.

“We need to build this,” declared President Trump. “They require substantial power generation, and we are streamlining the process for them to undertake this production within their own facilities.”

This initiative comes on the heels of President Trump reversing the AI policies of his predecessor, Joe Biden: rescinding Biden’s roughly 100-page executive order signals a significant shift in U.S. AI policy on safety standards and content watermarking.

While the investment is substantial, it aligns with broader market projections – financial firm Blackstone has already predicted $1 trillion in U.S. data center investments over a five-year period.

President Trump portrayed the announcement as a vote of confidence in his administration, noting that its timing coincided with his return to power. He stated, “This monumental endeavor serves as a strong statement of belief in America’s potential under new leadership.”

The establishment of Stargate follows a prior announcement by President Trump regarding a $20 billion AI data center investment by UAE-based DAMAC Properties. While locations for the new data centers in the U.S. are under consideration, the project will commence with an initial site in Texas.

Source: www.theguardian.com

OpenAI CEO Sam Altman Sued by Sister Over Sexual Abuse Allegations

The sister of OpenAI CEO Sam Altman has filed a lawsuit alleging that he sexually abused her on a regular basis over several years as a child.

The lawsuit, filed Jan. 6 in the U.S. District Court for the Eastern District of Missouri, alleges the abuse began when Ann Altman was 3 years old and Sam Altman was 12. The complaint alleges that the last abuse occurred after he was an adult, but his sister, known as Annie, was still a child.

The ChatGPT developer’s CEO posted a joint statement on X, signed alongside his mother Connie and brothers Max and Jack, denying the allegations and calling them “totally false.”

“Our family loves Annie and is extremely concerned about her health,” the statement said. “Caring for family members facing mental health challenges is incredibly difficult.”

It added: “Annie has made deeply hurtful and completely untrue allegations about our family, especially Sam. This situation has caused immeasurable pain to our entire family.”

Ann Altman previously made similar allegations against her brother on social media platforms.

In a court filing, her lawyer said she had experienced mental health issues as a result of the alleged abuse. The lawsuit seeks a jury trial and more than $75,000 (£60,000) in damages and legal fees.

The family’s statement said Ann Altman had made “deeply hurtful and completely false allegations” about the family and had demanded money from them.

It added that they had offered her “monthly financial assistance” and “attempted to get her medical help,” but she had “refused conventional treatment.”

The family said they had previously decided not to publicly respond to the allegations, but chose to do so following her decision to take legal action.

Sam Altman, 39, is one of the most prominent leaders in technology and the co-founder of OpenAI, best known for ChatGPT, an artificial intelligence (AI) chatbot launched in 2022.

The billionaire was briefly ousted as chief executive in November 2023, after the company’s board said he had not been “consistently candid” in his communications. When nearly all employees threatened to resign, he returned to the job the following week, and he rejoined the board last March following an external investigation.

Source: www.theguardian.com

OpenAI to Shift to For-Profit Company Structure, Announces Transition Plans

OpenAI has announced plans to reorganize its corporate structure in the coming year, noting that it will establish a public benefit corporation to oversee its expanding operations and alleviate constraints imposed by its current nonprofit parent company.

Speculation has been circulating about OpenAI’s transition to a commercial entity; details of the proposal have now been revealed for the first time.

According to the proposed framework, a for-profit public interest corporation will manage OpenAI’s business activities, while a nonprofit entity will oversee the organization’s philanthropic endeavors in fields like healthcare, education, and science.

This new structure grants greater authority to OpenAI’s commercial division. The company stated in a blog post that it aims to create a “more robust nonprofit entity supported by the accomplishments of a for-profit entity.” OpenAI also mentioned that this setup will enable them to “secure the necessary funding” comparable to other companies in the industry.

Initially established as a nonprofit research-focused organization in 2015, OpenAI is the creator of the popular ChatGPT chatbot and is considered one of the most valuable startups globally.

In pursuit of artificial general intelligence (AGI), a form of AI that matches or surpasses human intellect, OpenAI has been exploring structural modifications over the past year to attract additional investment. The success of its latest $6.6 billion funding round (valuing the company at $157 billion) hinged on restructuring and eliminating profit restrictions for investors.

“Investors are willing to back us, but at this scale of capital, we no longer require traditional funding with extensive structural constraints,” stated OpenAI in a blog post.

Microsoft holds the largest stake in OpenAI at 49%, a position that could become complicated if OpenAI converts into a commercial entity. According to the Wall Street Journal, investment banks have been engaged to facilitate the process and determine Microsoft’s ownership stake in the reorganized OpenAI.

OpenAI’s competitors in the generative AI sector, including Anthropic and Elon Musk’s xAI, have adopted a similar public benefit corporation model. OpenAI believes that adopting this structure can enhance its competitiveness in the market.

“The substantial investment being made by leading companies in AI development underscores the level of commitment needed for OpenAI to advance its mission,” mentioned OpenAI in a blog post. “We once again find ourselves in need of raising more funds than we had anticipated.”

Source: www.theguardian.com

Former OpenAI employee who blew the whistle on the company dies at 26

Suchir Balaji, a former OpenAI engineer, helped train the artificial intelligence system powering ChatGPT and later blew the whistle, saying he believed that work breached copyright law. His death, at age 26, was announced by his parents and San Francisco officials.

Balaji worked at OpenAI for almost four years until resigning in August and was highly esteemed by his colleagues. Co-founders described him as one of the strongest contributors to OpenAI, crucial to the development of its products.

OpenAI released a statement expressing their devastation upon learning of Balaji’s death, extending sympathy to his loved ones during this challenging time.

Balaji was discovered deceased in his San Francisco residence on November 26, with authorities suspecting suicide. Initial investigations found no evidence of foul play, as confirmed by the city’s Chief Medical Examiner’s Office.

His parents, Poornima Rama Rao and Balaji Ramamurthy, continued seeking answers, remembering their son as a happy, intelligent, and courageous individual who enjoyed hiking and had recently returned from a trip with friends.

Born and raised in the San Francisco Bay Area, Balaji studied computer science at the University of California, Berkeley. Joining OpenAI initially for a summer internship in 2018, he later returned to create WebGPT, a project instrumental in the development of ChatGPT.

Remembered for his essential contributions to OpenAI projects, Balaji’s meticulous nature and problem-solving skills were praised by co-founder John Schulman. Balaji’s involvement in training GPT-4 opened discussions about copyright concerns within the AI research field.

Balaji’s stance on copyright infringement, detailed in interviews with media outlets, raised eyebrows within the AI community. Despite mixed reactions, he remained steadfast in his beliefs about the ethical implications of using data without proper authorization.

His decision to leave OpenAI was influenced by internal conflicts and his desire to explore alternative methods for building artificial general intelligence. Memorial services are scheduled later this month at the India Community Center in Milpitas, California.

In the US, contact the National Suicide Prevention Lifeline at 988 or visit 988lifeline.org for crisis support. In the UK and Ireland, reach out to Samaritans at 116 123 or via email. Australian crisis support services can be reached at 13 11 14. International helplines are available at befrienders.org

The Associated Press and OpenAI have a licensing agreement granting OpenAI access to certain AP text archives.

Source: www.theguardian.com

OpenAI says ChatGPT’s refusal to produce the name “David Mayer” was a glitch

Over the past weekend, the internet was buzzing with the name of David Mayer, sparking intrigue and speculation online.

David Mayer gained temporary fame on social media when ChatGPT, a popular chatbot, seemed reluctant to acknowledge his name.

Despite numerous attempts from chatbot enthusiasts, ChatGPT consistently failed to produce the words “David Mayer” in its responses. This led to theories that Mayer himself may have requested the omission of his name from ChatGPT’s output.

OpenAI, the developer behind ChatGPT, clarified that the issue was a software glitch. An OpenAI spokesperson mentioned, “One of our tools mistakenly flagged the name, preventing it from appearing in responses. We are working on a fix.”

While some speculated that David Mayer de Rothschild could be involved, he denied any connection to the incident, dismissing it as a conspiracy theory surrounding his family’s name.

The glitch was unrelated to the late Professor David Mayer, who had been mistakenly linked to a Chechen militant who used his name. It is speculated that the block may stem from privacy requests under GDPR regulations in the UK and EU.

OpenAI has since resolved the “David Mayer” issue, but other names mentioned on social media still trigger error responses on ChatGPT.

Helena Brown, a data protection expert, highlighted the implications of the “right to be forgotten” in AI tools. While removing a name may be feasible, erasing all traces of an individual’s data could pose challenges due to the extensive data collection and complexity of AI models.

Given the vast amount of personal data used to train AI models, achieving complete data erasure for individual privacy may prove challenging, as data is sourced from various public platforms.

Source: www.theguardian.com

OpenAI Enters into a Multi-Year Content Partnership with Condé Nast | Technology Sector

Condé Nast and OpenAI have announced a long-term partnership to feature content from Condé Nast’s brands such as Vogue, Wired, and The New Yorker in OpenAI’s ChatGPT and SearchGPT prototypes.

The financial details of the agreement were not disclosed. OpenAI, backed by Microsoft and led by Sam Altman, has recently signed similar deals with Axel Springer (the owner of Business Insider), Time magazine, the Financial Times, Le Monde in France, and Prisa Media in Spain. These partnerships give OpenAI access to publishers’ extensive text archives for training large language models like those behind ChatGPT, as well as for real-time information retrieval.

OpenAI launched SearchGPT, an AI-powered search engine in July, venturing into Google’s long-dominant territory. Collaborations with magazine publishers enable SearchGPT to display information and references from Condé Nast articles in search results.


OpenAI’s Chief Operating Officer, Brad Lightcap, expressed the company’s dedication to collaborating with Condé Nast and other news publishers to uphold accuracy, integrity, and respect for quality journalism as AI becomes more assimilated in news discovery and dissemination.

Condé Nast CEO Roger Lynch mentioned in an email reported by The New York Times that this partnership will help offset some revenue losses suffered by publishers due to technology companies. He emphasized the importance of meeting readers’ needs while ensuring proper attribution and compensation for the use of intellectual property with emerging technologies.

By contrast, some media companies, including The New York Times and The Intercept, have taken legal action against OpenAI for using their articles without permission, and those disputes are ongoing.

Source: www.theguardian.com

OpenAI claims Iranian group utilized ChatGPT in attempt to sway US elections

OpenAI announced on Friday that it had taken down the accounts of an Iranian group using its chatbot, ChatGPT, to create content with the aim of influencing the U.S. presidential election and other important issues.

Dubbed “Storm-2035,” the operation used ChatGPT to generate content on various topics, including the U.S. presidential election, the Gaza conflict, and Israel’s participation in the Olympic Games. This content was then shared on social media platforms and websites.

An investigation by the Microsoft-backed AI company revealed that ChatGPT was being used to produce both lengthy articles and short comments for social media.


OpenAI noted that this strategy did not result in significant engagement from the audience, as most of the social media posts had minimal likes, shares, or comments. There was also no evidence of the web articles being shared on social media platforms.

These accounts have been banned from using OpenAI’s services, and the company stated that it will continue to monitor them for any policy violations.

In an early August report by Microsoft’s threat intelligence team, it was revealed that an Iranian network called Storm-2035, operating through four websites posing as news outlets, was actively engaging U.S. voters across the political spectrum.

The network’s activities focused on generating divisive messages on topics like U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.

As the November 5th presidential election approaches, the battle between Democratic candidate Kamala Harris and Republican opponent Donald Trump intensifies.

OpenAI previously disrupted five covert influence operations in May that attempted to use their models for deceptive online activities.

Source: www.theguardian.com

OpenAI launches SearchGPT, a new search engine, in the midst of AI competition | Business

OpenAI is currently testing a new search engine that utilizes generative artificial intelligence to generate search results, potentially posing a challenge to Google’s dominance in the online search market. The company announced that SearchGPT will initially launch with limited users and publishers before expanding further. OpenAI plans to integrate search capabilities into ChatGPT and offer it as a standalone product in the future.

SearchGPT is described as a preliminary prototype that combines AI models (such as ChatGPT) with internet search abilities to provide search results in a conversational format with real-time information and relevant source links. This feature positions OpenAI as a direct competitor to major search engines like Google and Bing, owned by Microsoft, OpenAI’s largest investor.

Integrating generative AI into search engines has become a trend among technology companies, despite concerns about accuracy and copyright issues. OpenAI aims to make searching on the web quicker and easier by enhancing its models with real-time information from the internet.

There is a potential risk of backlash from publishers over how OpenAI uses their content in SearchGPT. Some news outlets have already filed lawsuits against the company for alleged copyright infringement, claiming that their published work was used without permission.

OpenAI denies these claims, stating that their use of copyrighted data falls under the “fair use” doctrine. Other companies have faced similar backlash from users and publishers for AI-generated search features.

OpenAI is collaborating with publishers to give them control over how their content appears in search results and promoting trusted sources of information. The company’s press release includes statements from industry leaders, endorsing AI-powered search as the future of the internet.

This development comes at a time when Google is facing an antitrust lawsuit alleging illegal monopolization of the internet search industry. The lawsuit claims that Google signed deals with major companies to make it the default browser on their devices, further solidifying its dominance.

Source: www.theguardian.com

Microsoft withdraws its observer status from OpenAI board in response to regulatory scrutiny.

Amid regulator scrutiny over big tech companies’ relationships with artificial intelligence startups, Microsoft is stepping down from its observer role on OpenAI’s board, and Apple will no longer appoint executives to similar positions.

Microsoft, the ChatGPT developer’s primary backer, announced its resignation in a letter to the startup, as reported by the Financial Times. The company said its departure from the seat, a mere observer role with no voting rights on board decisions, is effective immediately.

Microsoft highlighted the progress made by the new OpenAI board since the eventful departure and reinstatement of CEO Sam Altman last year, saying OpenAI is heading in the right direction by emphasizing safety and nurturing a positive work culture.

“Considering these developments, we feel that our limited observer role is no longer essential,” stated Microsoft, which has invested $13 billion (£10.2 billion) in OpenAI.

However, Microsoft reportedly believed that its observer role raised concerns among competition regulators. The UK’s Competition and Markets Authority is reviewing whether the deal amounts to an “acquisition of control,” while the US Federal Trade Commission is also scrutinizing such partnerships.

While the European Commission opted out of a formal merger review regarding Microsoft’s investment in OpenAI, it is examining exclusivity clauses in the contract between the two entities.

An OpenAI spokesperson mentioned that the startup is adopting a new strategy to engage key partners like Microsoft, Apple, and other investors on a regular basis to strengthen alignment on safety and security.

As part of this new approach, OpenAI will no longer have an observer on the board, meaning Apple will also not have a similar role. Reports had surfaced earlier this month about Apple intending to include App Store head Phil Schiller on its board, but no comment has been received from Apple.

Regulatory scrutiny has intensified on investments in AI startups. The FTC is investigating OpenAI and Microsoft, along with Anthropic, the creator of the Claude chatbot, and their collaborations with tech giants Google and Amazon. In the UK, the CMA is looking into Amazon’s partnership with Anthropic, as well as Microsoft’s ties with Mistral and Inflection AI.


Alex Hafner, a partner at British law firm Fladgate, indicated that Microsoft’s decision seemed to be impacted by the regulatory landscape.

“It’s evident that regulators are closely monitoring the intricate relationships between big tech firms and AI providers, prompting Microsoft and others to rethink how they structure these arrangements in the future,” he commented.

Source: www.theguardian.com

China: OpenAI Blocks Access, Prompting Panic Among Chinese Developers

At the World AI Conference held in Shanghai last week, SenseTime, one of China’s leading artificial intelligence companies, revealed its newest model, the SenseNova 5.5. The model showcased its ability to recognize and describe a stuffed puppy (sporting a SenseTime cap), offer input on a drawing of a rabbit, and swiftly scan and summarize a page of text. SenseTime boasts that SenseNova 5.5 competes with GPT-4o, the flagship artificial intelligence model from Microsoft-backed US company OpenAI.

To entice users, SenseTime is offering 50 million tokens, digital credits for AI usage, at no cost. Additionally, the company states that it will have staff available to assist new customers in transitioning from OpenAI’s services to SenseTime’s products for free. This move aims to attract Chinese developers previously aligned with OpenAI, as the company had notified Chinese users of an impending blockage of its tools and services from July 9.

The sudden decision by OpenAI to block API traffic from regions without OpenAI service access has created an opportunity for domestic Chinese AI companies like SenseTime to onboard rejected users. Amid escalating tensions between the US and China over export restrictions on advanced semiconductors essential for training cutting-edge AI technologies, Chinese AI companies are now in a fierce competition to absorb former OpenAI users. Baidu, Zhipu AI, and Tencent Cloud, among others, have also offered free tokens and migration services to entice users.

The withdrawal of OpenAI from China has accelerated the development of Chinese AI companies, who are determined to catch up to their US counterparts. While Chinese AI companies focus on commercializing large-scale language models, the departure of OpenAI presents an opportunity for these companies to innovate and enhance their models.

Despite the setback, Chinese commentators have downplayed the impact of OpenAI’s decision, depicting it as US pressure intended to impede China’s technological progress. There are signs that US restrictions on China’s AI industry are biting, with companies like Kuaishou facing limitations due to a sanctions-induced chip shortage. This adversity has fueled a growing black market for American-made semiconductors while inspiring creative workarounds to American software blockages.

Source: www.theguardian.com

Elon Musk unexpectedly withdraws legal action against Sam Altman and OpenAI

Elon Musk has submitted a motion to withdraw his lawsuit against ChatGPT developer OpenAI and its CEO Sam Altman, which had claimed that the startup deviated from its original goal of developing artificial intelligence for the betterment of humanity.

Musk filed the lawsuit against Altman in February, and the case had been progressing slowly in a California court. Until Tuesday, Musk had shown no intention of dropping it; just a month ago, his legal team filed an objection that led the presiding judge to step aside.


Musk’s motion to withdraw the lawsuit gave no rationale. A San Francisco Superior Court judge had been set to hear arguments on Wednesday from Altman and OpenAI on their bid to have the lawsuit thrown out.

The dismissal brought an abrupt end to the legal dispute between two influential figures in the tech realm. Musk and Altman co-founded OpenAI in 2015, but Musk resigned from the board three years later following disagreements over the company’s governance and direction. Their relationship has become increasingly strained as Altman’s prominence has grown in recent years.

Musk’s lawsuit centered on his assertion that Altman and OpenAI breached the company’s “founding agreement” by collaborating with Microsoft, transforming OpenAI into a predominantly profit-driven entity, and withholding its technology from the public.

OpenAI and Altman contested the existence of such an agreement, citing messages that appeared to show Musk supporting the shift towards a for-profit model. They vehemently denied any wrongdoing and published a blog post in March suggesting Musk’s motivations were rooted in jealousy, expressing regret that a respected figure had taken this course of action.

Musk’s lawsuit raised eyebrows among legal experts, who pointed out that certain claims, such as OpenAI achieving artificial intelligence equivalent to human intelligence, lacked credibility.

Source: www.theguardian.com

Antitrust Investigation Launched Against Microsoft, OpenAI, and NVIDIA in Technology Sector

Microsoft, OpenAI and Nvidia are under increased scrutiny for their involvement in the artificial intelligence industry as U.S. regulators have reportedly agreed to investigate these companies.

The New York Times reported that the US Department of Justice and the Federal Trade Commission (FTC) have reached an agreement to divide oversight of key players in the AI market, with the deal expected to be finalized in the coming days.

The Justice Department will lead an investigation into whether Nvidia, a leading maker of chips for AI systems, has violated antitrust laws aimed at promoting fair competition and preventing monopolies, according to Wednesday’s New York Times report.

Meanwhile, the FTC will scrutinize OpenAI, the developer of the ChatGPT chatbot, and Microsoft, the largest investor in OpenAI and supporter of other AI companies.

The Wall Street Journal also reported on Thursday that the FTC is investigating whether Microsoft structured a recent deal with startup Inflection AI in a way to avoid antitrust scrutiny.

In March, Microsoft hired Mustafa Suleyman, CEO and co-founder of Inflection, to lead its new AI division and agreed to pay the startup $650 million to license its AI software.

The FTC has shown interest in the AI market before, ordering OpenAI, Microsoft, Google parent Alphabet, Amazon, and Anthropic to provide information on recent investments and partnerships involving generative AI companies and cloud service providers.

An investigation into OpenAI was launched last year based on allegations of consumer protection law violations related to personal data and reputations being at risk.

Jonathan Kanter, head of the Justice Department’s antitrust division, stated that the department will “urgently” investigate the AI sector to examine monopoly issues and the competitive landscape in technology.


Regulators like Kanter believe swift action is necessary to prevent tech giants from dominating the AI market.

The FTC, Department of Justice, Nvidia, OpenAI, and Microsoft have been approached for comments.

Source: www.theguardian.com

AI Industry Faces Risks, Employees from OpenAI and Google DeepMind Sound Alarm

A group of current and former employees of prominent artificial intelligence companies has published an open letter warning of inadequate safety oversight within the industry and calling for better protection for whistleblowers.

The letter, advocating for a “right to warn about artificial intelligence,” is a rare public statement about the risks of AI from employees in a usually secretive industry. It was signed by 11 current and former employees of OpenAI and two current and former Google DeepMind employees, one of whom previously worked at Anthropic.

“AI companies have valuable non-public information about their systems’ capabilities, limitations, safeguards, and risk of harm. However, they have minimal obligations to share this information with governments and none with the public. We cannot rely on companies to share this information voluntarily,” the letter stated.

OpenAI defended its practices, stating that they have hotlines and mechanisms for issue reporting, and they do not release new technology without proper safeguards. Google did not respond immediately to a comment request.

Concerns about the potential dangers of artificial intelligence have been around for years, but the recent AI boom has heightened these concerns, leading regulators to struggle to keep up with technological advancements. While AI companies claim to be developing their technology safely, researchers and employees warn about a lack of oversight to prevent AI tools from exacerbating existing societal harms or creating new ones.

The letter also mentions a bill seeking to enhance protections for AI company employees who raise safety concerns. The bill calls for transparency and accountability principles, including not forcing employees to sign agreements that prevent them from discussing risk-related AI issues publicly.

In a recent report, it was revealed that companies like OpenAI have tactics to discourage employees from freely discussing their work, with consequences for those who speak out. OpenAI CEO Sam Altman apologized for these practices and promised changes to exit procedures.

The open letter echoes concerns raised by former top OpenAI employees about the company’s lack of transparency in its operations. It comes after recent resignations of key OpenAI employees over disagreements about the company’s safety culture.

Source: www.theguardian.com

OpenAI holds back release of its voice cloning tool, citing safety concerns.

OpenAI’s latest tool can create an accurate replica of someone’s voice from just 15 seconds of recorded audio. But the AI lab considers the technology too risky for general release in a critical global election year, citing the threat of misinformation, and is withholding it from the public to limit potential harm.

Voice Engine was first developed in 2022 and was initially integrated into ChatGPT for text-to-speech functionality. Despite its capabilities, OpenAI has refrained from publicizing it widely, taking a cautious approach to a broader release.

Through discussions and testing, OpenAI aims to make informed decisions about the responsible use of synthetic speech technology. Selected partners have access to incorporate the technology into their applications and products after careful consideration.

Various partners, like Age of Learning and HeyGen, are utilizing the technology for educational and storytelling purposes. It enables the creation of translated content while maintaining the original speaker’s accent and voice characteristics.

OpenAI showcased a study where the technology helped a person regain their lost voice due to a medical condition. Despite its potential, OpenAI is previewing the technology rather than widely releasing it to help society adapt to the challenges of advanced generative models.

OpenAI emphasizes the importance of protecting individual voices in AI applications and educating the public about the capabilities and limitations of AI technologies. The voice engine is watermarked to enable tracking of generated voices, with agreements in place to ensure consent from original speakers.

While OpenAI’s tools are known for their simplicity and efficiency in voice replication, competitors like Eleven Labs offer similar capabilities to the public. To address potential misuse, precautions are being taken to detect and prevent the creation of voice clones impersonating political figures in key elections.

Source: www.theguardian.com

Elon Musk’s Lawsuit Criticized by OpenAI as “Frivolous” and “Disjointed” in Legal Filings

OpenAI criticized Elon Musk’s lawsuit against the company in a legal response filed on Monday, calling the Tesla CEO’s claims “frivolous” and driven by “advancing commercial interests.”

The filing is a rebuttal to Musk’s lawsuit against OpenAI earlier this month, which accused the company of reneging on its commitment to benefit humanity. OpenAI refuted many of the key allegations in Musk’s lawsuit, denying the existence of what he referred to as a “founding agreement.”

The filing highlighted the complexity and lack of factual basis for Musk’s claims, pointing out the absence of any actual agreement mentioned in the pleadings.


The conflict between OpenAI and Musk has been escalating since Musk’s lawsuit, intensifying the ongoing disagreement between Musk and OpenAI CEO Sam Altman. Although they co-founded the nonprofit in 2015, disputes over company direction and control led to Musk’s departure three years later. The relationship between Musk and Altman has soured as OpenAI gained recognition for products like ChatGPT and DALL-E.

Musk’s lawsuit accuses OpenAI of straying from its original mission as a nonprofit organization focused on sharing technology for humanity’s benefit, alleging that Altman received significant investments from Microsoft. OpenAI denied these claims in a recent blog post, stating that Musk supported the shift to a for-profit entity but wanted sole control.

OpenAI’s response painted Musk as envious and resentful of the company since starting his own commercial AI venture. The filing dismissed the notion of a founding agreement between Musk and Altman, labeling it as a “fiction” created by Musk.

According to the response, Musk’s motivation for suing OpenAI is to bolster his competitive position in the industry, rather than genuine concerns for human progress.


The filing concluded that Musk’s actions stem from a desire to replicate OpenAI’s technological achievements for his own benefit.

Source: www.theguardian.com

Kenan Malik argues that Elon Musk and OpenAI are fostering existential dread to evade regulation

In 1914, on the eve of the first world war, H.G. Wells published The World Set Free, a novel about the possibility of an even bigger conflagration. Thirty years before the Manhattan Project, Wells imagined humankind able to carry around in a handbag enough potential energy to destroy half a city. In the novel, a global war breaks out, precipitating a nuclear apocalypse; peace is achieved only through the establishment of a world government.

Wells was concerned not just with the dangers of new technology, but also with the dangers of democracy. His world government was not created by democratic will; it was imposed as a benign dictatorship. “The ruled will show their consent by silence,” King Ecbert of England says menacingly. For Wells, the common man was a violent fool in social and public affairs, and only an educated, scientifically minded elite could “save democracy from itself.”

A century later, another technology inspires similar awe and fear: artificial intelligence. From Silicon Valley boardrooms to the backrooms of Davos, political leaders, tech moguls, and academics exult in AI’s immense benefits while also warning that it could spell the end of humanity, should superintelligent machines come to rule the world. And, as a century ago, questions of democracy and social control are at the heart of the debate.

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two founders of OpenAI, the technology company that gained public attention two years ago with the release of ChatGPT, a seemingly human-like chatbot. Fearful of the potential impact of AI, the Silicon Valley moguls founded the company as a nonprofit charitable trust with the goal of developing the technology ethically, to benefit “all of humanity.”

Levy asked Musk and Altman about the future of AI. “There are two schools of thought,” Musk mused. “Do you want a lot of AI or a few? I think more is probably better.”

What if a Dr. Evil got hold of the technology, Levy asked, wouldn’t that empower him? Altman responded that a Dr. Evil was more likely to be empowered if only a few people controlled the technology, saying, “In that case, we’d be in a really bad situation.”

In reality, that “bad place” is being built by the technology companies themselves. Musk resigned from OpenAI’s board six years ago and is developing his own AI project, and he is now suing his former company for breach of contract, accusing it of prioritizing profit over the public interest and neglecting to develop AI “for the benefit of humanity.”

In 2019, OpenAI created a commercial subsidiary to raise money from investors, most notably Microsoft. When it released ChatGPT in 2022, the inner workings of the model were kept hidden. Ilya Sutskever, one of OpenAI’s founders and the company’s chief scientist at the time, responded to criticism of this secrecy by claiming that it would prevent malicious actors from using the technology to “cause significant damage.” Fear of the technology became a cover for creating a shield from scrutiny.

In response to Musk’s lawsuit, OpenAI last week released a series of emails between Musk and other board members. These make clear that all board members agreed from the beginning that OpenAI could never actually be open.

As AI develops, Sutskever wrote to Musk, “the ‘open’ in OpenAI means that everyone should benefit from the fruits of AI after it is built, but it’s totally fine not to share the science.” “Yes,” Musk replied. Whatever the merits of the lawsuit, Musk, like other tech moguls, has hardly been a champion of openness himself. The legal challenge to OpenAI is more a power struggle within Silicon Valley than an attempt at accountability.

Wells wrote The World Set Free at a time of great political turmoil, when many were questioning the wisdom of extending the franchise to the working class.

Was it safe, the Fabian Beatrice Webb wondered, to leave to the masses “the ballot box that creates and controls the government of Britain with its vast wealth and far-flung territories”? This was the question at the heart of Wells’s novel: to whom can one entrust the future?

A century later, we are once again engaged in heated debates about the virtues of democracy. For some, the political turmoil of recent years is a product of democratic overreach, the result of allowing irrational and uneducated people to make important decisions. “It is unfair to put the responsibility of making a very complex and sophisticated historical decision on unqualified simpletons,” Richard Dawkins said after the Brexit referendum. Wells would have agreed.

Others argue that it is precisely such contempt for ordinary people that contributes to democracy’s flaws, leaving large sections of the population feeling deprived of a say in how society is run.

It's a disdain that also affects discussions about technology.like the world is liberated, The AI ​​debate focuses not only on technology, but also on questions of openness and control. Alarmingly enough, we are far from being “superintelligent” machines. Today's AI models, such as ChatGPT, or claude 3, released last week by another AI company, Anthropic, is so good at predicting what the next word in a sequence is that it makes us believe we can have human-like conversations. You can cheat. However, they are not intelligent in the human sense. Negligible understanding of the real world And I'm not trying to destroy humanity.

The problems posed by AI are not existential but social. From algorithmic bias to the surveillance society, from disinformation and censorship to copyright theft, our concern should not be that machines might someday exercise power over humans, but that machines already operate in ways that reinforce inequality and injustice, and that they hand those in power tools with which to entrench their own authority.

That's why what we might call “Operation Ecbert,” the argument that some technologies are so dangerous that they must be controlled by a select few over democratic pressure, It's very threatening. The problem isn't just Dr. Evil, it's the people who use fear of Dr. Evil to protect themselves from surveillance.

Kenan Malik is a columnist for the Observer

Source: www.theguardian.com

Elon Musk criticizes OpenAI for prioritizing profit over humanity

Elon Musk is suing OpenAI and its CEO Sam Altman for prioritizing profit over humanity’s interests, contrary to its core mission.

As the wealthiest individual globally and a founding director of the AI company behind ChatGPT, Musk alleges that Altman violated OpenAI’s founding covenant by striking an investment deal with Microsoft.

The lawsuit, filed in San Francisco, accuses OpenAI of prioritizing profit over human well-being by shifting its focus to developing artificial general intelligence (AGI) for commercial gain rather than humanitarian purposes.

Musk claims that OpenAI has essentially become a subsidiary of Microsoft, the world’s largest tech company, under new leadership, diverting from its original principles outlined in the founding agreement.

The lawsuit raises concerns about AGI posing a significant threat to humanity, particularly if it falls into the hands of profit-driven companies such as Google.

Originally founded to be a nonprofit, open-source organization working for the greater good, OpenAI’s alleged transition to a profit-centric entity under Microsoft’s influence has prompted Musk to take legal action.

The lawsuit contends that the development of OpenAI’s GPT-4 model, shrouded in secrecy, deviates from their initial mission and breaches contractual obligations.

Musk, who played a significant role in establishing OpenAI but exited in 2018, claims that the company’s recent actions concerning AGI technology are in direct conflict with its intended purpose.

The lawsuit aims to compel OpenAI to adhere to its original mission of developing AGI for humanity’s benefit, not for personal gain or for tech giants like Microsoft.

The deal between OpenAI and Microsoft is now facing scrutiny from competition authorities in various regions, including the US, EU, and UK.

Source: www.theguardian.com

Elon Musk files lawsuit against OpenAI, seeks court ruling on artificial general intelligence

Elon Musk is concerned about the pace of AI development

Chesnot/Getty Images

In his lawsuit against OpenAI, Elon Musk has asked a court to rule on whether GPT-4 is artificial general intelligence (AGI). Developing AGI, capable of performing a wide variety of tasks just as humans can, is one of the field’s main goals, but experts say the idea of leaving it to judges to decide whether GPT-4 qualifies is “unrealistic.”

Musk was one of the founders of OpenAI in 2015, but left the company in February 2018 amid controversy over its shift away from a purely nonprofit model. Despite this, he continued to support OpenAI financially, with the legal complaint stating that he donated more than $44 million to the company between 2016 and 2020.

Since OpenAI’s flagship ChatGPT launched in November 2022 and the company partnered with Microsoft, Musk has warned that AI development is moving too fast, a view reinforced by the release of GPT-4, the latest model to power ChatGPT. In July 2023, he founded xAI, a competitor to OpenAI.

In the lawsuit, filed in a California court on 1 March, Musk, through his lawyers, asked for “a judicial determination that GPT-4 constitutes artificial general intelligence and is thereby outside the scope of OpenAI’s license to Microsoft,” on the grounds that OpenAI committed to licensing only “pre-AGI” technology. Musk has made a number of other demands, including financial compensation for his role in helping found OpenAI.

However, Musk is unlikely to prevail, not only because of the merits of the litigation, but also because of the difficulty of determining when AGI has been achieved. “AGI doesn’t have an accepted definition, it’s kind of a coined term, so I think it’s unrealistic in a general sense,” says Mike Cook at King’s College London.

“Whether OpenAI has achieved AGI is hotly debated among those who base their judgments on scientific evidence,” says Elke Beuten at De Montfort University in Leicester, UK. “It seems unusual to me that a court could establish scientific truth.”

However, such a judgment is not legally impossible. “We’ve seen all sorts of ridiculous definitions come out of US court decisions. Would it persuade anyone but the most outlandish AGI supporters? Not at all,” says Katherine Frick of Staffordshire University in England.

It’s unclear what Musk hopes to achieve with the lawsuit. New Scientist has reached out to both him and OpenAI for comment, but has not yet received a response from either.

Regardless of the rationale behind it, the lawsuit puts OpenAI in an unenviable position. CEO Sam Altman has issued stark warnings about AGI and argued that the company’s powerful technology needs to be regulated.

“It’s in OpenAI’s interest to constantly hint that their tools are improving and getting closer to this, because it keeps the attention and the headlines flowing,” Cook says. But now they may need to make the opposite argument.

Even if the court were to rely on expert testimony, any judge would have a hard time ruling in Musk’s favour, and an equally hard time reconciling differing views on such a hotly debated topic. “Most of the scientific community would now say that AGI has not been achieved, insofar as the concept is considered sufficiently meaningful or sufficiently precise,” says Beuten.


Source: www.newscientist.com

OpenAI sued by The Intercept, Raw Story, and AlterNet for copyright infringement | Technology

Lawsuits have been brought against OpenAI and Microsoft by news publishers, alleging that their generative artificial intelligence products violate copyright laws by illegally using journalists’ copyrighted works. The Intercept, Raw Story, and Alternet filed suit in federal court in Manhattan, seeking compensation for the infringement.

Media outlets claim that OpenAI and Microsoft plagiarized copyrighted articles to develop ChatGPT, a prominent generative AI tool. They argue that ChatGPT ignores copyright, lacks proper attribution, and fails to alert users when using journalists’ copyrighted work to generate responses.

Raw Story and AlterNet CEO John Byrne stated, “Raw Story believes that news organizations must challenge OpenAI for breaking copyright laws and profiting from journalists’ hard work.” They emphasized the importance of diverse news outlets and the negative impact of unchecked violations on the industry.

The Intercept’s lawsuit names OpenAI and Microsoft as defendants, while the joint lawsuit by Raw Story and AlterNet focuses solely on OpenAI. The complaints are similar, with all three media outlets represented by the law firm Loevy & Loevy.

Byrne clarified that the lawsuits from Raw Story and AlterNet do not involve Microsoft directly but stem from a partnership with MSN. Both OpenAI and Microsoft have yet to comment on the allegations.

The lawsuits accuse the defendants of using copyrighted material to train ChatGPT without proper attribution, violating the Digital Millennium Copyright Act. The legal action is part of a series of lawsuits against OpenAI for alleged copyright infringement.

Concerns in the media industry about generative AI competing with traditional publishers have led to a wave of legal battles. The fear is that AI-generated content will erode advertising revenue and undermine the quality of online news.


While some news organizations have sued OpenAI, others like Axel Springer have opted to collaborate by providing access to copyrighted material in exchange for financial rewards. The lawsuits seek damages and profits, with the New York Times lawsuit aiming for significant monetary compensation.

Source: www.theguardian.com

OpenAI Introduces Sora, a Tool that Generates Videos from Text in Real-time Using Artificial Intelligence (AI)

OpenAI on Thursday announced a tool that can generate videos from text prompts.

The new model, called Sora after the Japanese word for “sky,” can create up to a minute of realistic footage that follows the user’s instructions for both subject matter and style. The model can also create videos based on still images or enhance existing footage with new material, according to a company blog post.



“We teach AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” the blog post says.

One video included among the company’s first examples was based on the following prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”

The company announced that it has opened up access to Sora to several researchers and video creators. According to the blog post, these experts will “red team” the product, testing whether it can be made to evade OpenAI’s terms of service, which prohibit “extreme violence, sexual content, hateful images, likenesses of celebrities, or the IP of others.” While access remains limited to researchers, visual artists and filmmakers, CEO Sam Altman took to Twitter after the announcement, posting videos he said were created by Sora in response to prompts suggested by users. The videos contain a watermark indicating they were created by AI.



The company debuted its still-image generator Dall-E in 2021 and its generative AI chatbot ChatGPT in November 2022; the latter quickly gained 100 million users. Other AI companies have also debuted video-generation tools, but those models could only generate a few seconds of footage that often bore little relation to the prompt. Google and Meta have said they are developing generative video tools, though neither is publicly available. On Wednesday, OpenAI also announced an experiment giving ChatGPT deeper memory, so it can remember more of users’ chats.



OpenAI did not tell the New York Times how much footage was used to train Sora, saying only that the corpus includes videos that are publicly available or licensed from copyright holders; it also did not reveal the sources of the training video. The company has been sued multiple times for alleged copyright infringement in training generative AI tools that digest vast amounts of material collected from the internet and mimic the images and text contained in those datasets.

Source: www.theguardian.com

OpenAI prohibits bot mimicking US presidential candidate Dean Phillips from its platform

OpenAI has taken down the account of the developer of an AI-powered bot that pretended to be US presidential candidate Dean Phillips, citing a violation of company policies.

Phillips, who is challenging Joe Biden for the Democratic nomination, was impersonated by a ChatGPT-powered bot on the dean.bot site.

The bot was backed by Silicon Valley entrepreneurs Matt Krysilov and Jed Summers, who have established a super PAC called “We Deserve Better” to fund and support Phillips’s candidacy.

San Francisco-based OpenAI announced it has removed developer accounts that violated its policies against political campaigning and impersonation.

“We recently terminated developer accounts that knowingly violated our API Usage Policy, which prohibits political campaigning, or that impersonated individuals without their consent,” the company said.

The Phillips bot, created by AI company Delphi, is currently disabled. Delphi has been contacted for comment.

OpenAI’s usage policy says developers who use the company’s technology to build their own applications must not engage in “political campaigning or lobbying.” It also prohibits “impersonating another person or entity without their consent or legal right to do so,” though it is unclear whether Minnesota Congressman Phillips gave his consent to the bot.

A pop-up notification on the dean.bot website described the “AI voice bot” as “a fun educational tool, but not perfect.” It added: “Although the voice bot is programmed to sound like him and elicit his ideas, it may say things that are wrong, incorrect, or shouldn’t be said.”

The Washington Post, which first reported the ban, said that Krysilov had asked Delphi to remove ChatGPT from the bot and instead rely on freely available open-source technology. Krysilov, a former OpenAI employee, has been contacted for comment.

We Deserve Better received $1 million in funding from billionaire hedge fund manager Bill Ackman, who described it in a post as “the biggest investment I’ve ever made.”

Phillips, 55, announced his candidacy for president in October, citing Biden’s age and arguing that he should pass the torch to a younger generation. Campaigning in New Hampshire on Saturday, Phillips described Biden as “unelectable and weak.”

There are concerns that deepfakes and AI-generated disinformation could disrupt elections around the world this year, with the US, EU, UK and India all planning to vote. On Sunday, the Observer reported that 70% of British MPs are concerned that AI will increase the spread of misinformation and disinformation.

Source: www.theguardian.com

OpenAI Introduces GPT Store for Buying and Selling Customized Chatbots: AI Innovation

OpenAI launched GPT Store on Wednesday, providing a marketplace for paid ChatGPT users to buy and sell professional chatbot agents based on the company’s language model.

The company, known for its popular product ChatGPT, already offers customized bots through its paid ChatGPT Plus service. The new store will give users additional tools to monetize.


With new models, users can develop chatbot agents with unique personalities and themes, including models for salary negotiation, lesson plan creation, recipe development, and more. OpenAI stated in a blog post that more than 3 million custom versions of ChatGPT have been created, and they plan to introduce new GPT tools in the store every week.

The GPT Store has been likened to Apple’s App Store, serving as a platform for new AI developments to reach a wider audience. Meta offers similar chatbot services with different personalities.

Originally set to open in November, the GPT Store’s launch was delayed due to internal issues within OpenAI. The company has announced plans to introduce a revenue sharing program in the first quarter of this year, compensating builders based on user engagement with GPT.

The store is accessible to subscribers of the premium ChatGPT Plus and Enterprise services, as well as a new subscription tier called Team, which costs $25 per user per month. Team subscribers can also create custom GPTs tailored to their team’s needs.

At the company’s first developer day, Altman offered to cover the legal costs of developers who face copyright claims over products built with ChatGPT and OpenAI’s technology. OpenAI itself has faced lawsuits alleging copyright infringement in its use of copyrighted text to train large language models.

ChatGPT, OpenAI’s flagship product, launched quietly in November 2022 and quickly gained 100 million users. The company also makes Dall-E, an image-generation tool, but it’s unclear whether the store will allow custom image bots or only bespoke chatbots.

Source: www.theguardian.com

OpenAI enhances safety measures and grants board veto authority over risky AI developments

OpenAI is expanding its internal safety processes to guard against harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted a veto; whether it will actually use it is, of course, another question entirely.

Ordinarily the details of policies like this don’t warrant coverage, since in practice they amount to closed-door meetings and obscure flows of functions and responsibilities that outsiders rarely glimpse. That is probably true here too, but given the recent leadership turmoil and the evolving AI-risk debate, it is worth looking at how the world’s leading AI developer is approaching safety.

In a new document and blog post, OpenAI discusses its updated “Preparedness Framework,” which appears to have been retooled slightly after November’s reorganization removed the board’s two most “decelerationist” members: Ilya Sutskever (still at the company, in a somewhat changed role) and Helen Toner (gone entirely).

The main purpose of the update appears to be to provide a clear path for identifying “catastrophic” risks inherent in models under development, analyzing them, and deciding how to deal with them. They define it as:

A catastrophic risk is a risk that could result in hundreds of billions of dollars in economic damage or serious harm or death to a large number of individuals. This includes, but is not limited to, existential risks.

(Existential risks are of the “rise of the machines” type.)

Models in production are governed by the “safety systems” team; this covers, for example, systematic abuse of ChatGPT that can be mitigated with API limits or tuning. Frontier models in development get the “preparedness” team, which tries to identify and quantify risks before a model is released. And then there is the “superalignment” team, which is working on theoretical guardrails for “superintelligent” models, which may or may not be anywhere near.

The first two categories are real, not fictional, and have relatively easy-to-understand rubrics. The teams rate each model on four risk categories: cybersecurity, “persuasion” (e.g. disinformation), model autonomy (i.e. acting on its own), and CBRN (chemical, biological, radiological, and nuclear threats, e.g. novel pathogens).

Various mitigations are assumed: for example, a reasonable reticence about describing the manufacturing process for napalm or pipe bombs. If, after known mitigations are taken into account, a model is still rated as posing a “high” risk, it cannot be deployed; and if a model has any “critical” risks, it will not be developed further.
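The decision rule described above (score each model per category after mitigations; any “high” blocks deployment, any “critical” halts development) can be sketched in a few lines of Python. This is an illustrative reading of the framework as reported here, not OpenAI’s actual tooling; the names `Risk` and `decide` are hypothetical.

```python
from enum import IntEnum

class Risk(IntEnum):
    # Ordered so that comparisons reflect severity.
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The framework's four tracked risk categories.
CATEGORIES = ("cybersecurity", "persuasion", "model_autonomy", "cbrn")

def decide(post_mitigation: dict[str, Risk]) -> str:
    """Apply the framework's rule to post-mitigation scores:
    any 'critical' score halts development, any 'high' score
    blocks deployment, and otherwise the model may ship."""
    worst = max(post_mitigation[c] for c in CATEGORIES)
    if worst >= Risk.CRITICAL:
        return "halt development"
    if worst >= Risk.HIGH:
        return "do not deploy"
    return "eligible to deploy"

scores = {"cybersecurity": Risk.MEDIUM, "persuasion": Risk.LOW,
          "model_autonomy": Risk.LOW, "cbrn": Risk.MEDIUM}
print(decide(scores))  # eligible to deploy
```

Note that only the worst category matters: a single “critical” rating outweighs low scores everywhere else, which matches the framework’s veto-style thresholds.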

An example of assessing model risk using OpenAI’s rubric.

These risk levels are actually documented in the framework, in case you’re wondering whether they should be left to the discretion of engineers and product managers.

For example, in the most practical section, on cybersecurity, “increasing operator productivity on key cyber operation tasks by a certain factor” counts as a “medium” risk. A high-risk model, by contrast, would “identify and develop proofs of concept for high-value exploits against hardened targets without human intervention.” And at the critical level, “the model can devise and execute novel end-to-end strategies for cyberattacks against hardened targets, given only a high-level desired objective.” Obviously we don’t want that out there (though it would sell for a good amount of money).

I have asked OpenAI for details of how these categories are defined and refined, for instance whether new risks such as photorealistic fake videos of people fall under “persuasion” or a new category, and will update this post if I receive a response.

So, only medium and high risks are tolerable in any case. But the people creating these models are not necessarily the best ones to evaluate them and make recommendations. To that end, OpenAI has established a cross-functional safety advisory group at the top of its technical ranks to review the boffins’ reports and make recommendations from a higher vantage point. The hope, they say, is that this will uncover some “unknown unknowns,” though by their very nature those will be fairly hard to catch.

The process requires these recommendations to be sent simultaneously to the board and to leadership, which we take to mean CEO Sam Altman, CTO Mira Murati, and their lieutenants. Leadership will decide whether to ship or shelve a model, but the board can override that decision.

The hope is that this will prevent high-risk products or processes from being greenlit without the board’s knowledge or approval, as was rumored to have happened before the big drama. Of course, the result of said drama was that two of the board’s more critical voices were sidelined and two money-minded men who are smart but not AI experts, Bret Taylor and Larry Summers, were appointed.

If a panel of experts makes a recommendation and the CEO decides based on that information, will this friendly board really feel empowered to contradict him and pump the brakes? And if they do, will we hear about it? Transparency is not really addressed, beyond OpenAI’s promise to solicit audits from independent third parties.

Suppose a model is developed that warrants a “critical” risk rating. OpenAI hasn’t been shy about touting this sort of thing in the past; talking about how your model is so powerful you refuse to release it is great advertising. But if the risks are that real and OpenAI is that concerned about them, is there any guarantee this will actually happen? Maybe it’s a bad idea. Either way, it isn’t really mentioned.

Source: techcrunch.com

OpenAI investors and employees push back against Sam Altman’s firing, as he advocates for harmony within the company

Sam Altman insisted on Monday that he and OpenAI are “still one team” with “one mission,” even as employees and major investors alike threatened to walk away from the struggling AI startup following the board’s shock move to oust him from the company.

Altman is now set to lead Microsoft’s new AI division, even as nearly all of OpenAI’s 770 employees warned in an open letter that they will leave the company unless the entire board resigns and Altman and fellow co-founder Greg Brockman are brought back.

“We’re all going to collaborate in some way. We’re very excited,” Altman said.

“[Microsoft CEO Satya Nadella’s] and my top priority is to ensure that OpenAI continues to thrive, and we are committed to providing full operational continuity to our partners and customers. The OpenAI/Microsoft partnership makes this very possible,” he added.

Mr. Altman’s remarks were met with a degree of skepticism, given the apparent chaos that followed one of the most unexpected and surprising coup attempts in Silicon Valley history.

The board announced late Friday that it “no longer has confidence in Altman’s ability to continue to lead OpenAI” because he “has not been consistently candid in his communications.”

Investment firms such as Thrive Capital and Khosla Ventures, along with key partner Microsoft, which has pumped more than $13 billion into OpenAI’s operations, were blindsided by his firing, learning of it only minutes before the announcement.

In a scorching column for The Information, investor Vinod Khosla slammed OpenAI’s board of directors, writing that its members had made a “serious miscalculation” and “set back the promise of artificial intelligence.”


“Every problem has a solution,” said Josh Kushner, founder of Thrive Capital, whose firm is set to be the lead buyer in a planned OpenAI stock sale that values the company at about $86 billion and was expected to close by the end of the year.

The battle over OpenAI’s future is getting stranger by the minute, with speculation mounting in the private market that a planned stock sale may fall through.

Ken Smythe of private capital advisor Next Round Capital told the Post that OpenAI’s funding plans are likely over, given the turmoil behind the scenes.

As of Monday, some major investors were “considering reducing the value of their holdings in OpenAI to zero,” Bloomberg reported, citing a person familiar with the matter. The outlet said the possible move “appears to be aimed at putting pressure on the board to resign and encouraging Mr. Altman to return.”


Altman’s departure is a “material change in circumstances” that puts Thrive’s participation in the stock sale in doubt, although sources told the Financial Times that a sale could still occur if Altman is reinstated as OpenAI’s CEO.

Thrive did not immediately respond to The Post’s request for comment.

Despite his public statements, Altman reportedly has not yet closed the door on returning to his previous role as OpenAI CEO. People familiar with the matter told The Verge that he and Brockman remain open to returning, provided all remaining board members agree to resign.

Sources told the outlet that Altman’s comment about “work[ing] together in some way” was intended to indicate that the fight continues.

Meanwhile, Microsoft has emerged as the big winner, having secured Altman’s services, and likely most of OpenAI’s employees, at a fraction of the valuation the startup commanded just last week.


“Microsoft just pulled off one of the biggest coups in recent history, acquiring not only OpenAI’s technology but its employees within 48 hours,” Smythe said.

Nadella said Altman and Brockman will “join Microsoft to lead a new advanced AI research team.”

“We look forward to moving quickly to provide them with the resources they need to succeed,” Nadella said, adding that Microsoft remains “committed to our partnership with OpenAI” and has “confidence in our product roadmap.”

In a scathing open letter, OpenAI staffers accused the board of lacking “competence, judgment, and consideration for our company’s mission and our people,” and noted that Microsoft has assured all OpenAI employees a position at its new subsidiary should they decide to quit.


The workers are demanding that OpenAI appoint new lead independent directors; former Twitter board chairman Bret Taylor and former U.S. Rep. Will Hurd (R-Texas), who resigned from OpenAI’s board earlier this year, have emerged as candidates.

In the meantime, the OpenAI board has named Emmett Shear, co-founder of the popular video game streaming platform Twitch, as interim CEO.

Shear is already scrambling to reassure employees and investors. In a lengthy statement posted to X, he pledged to reform the company’s management and commission an independent investigation into the circumstances that led to Altman’s unexpected departure.

Source: nypost.com

Meet Mira Murati, the New Interim CEO of OpenAI

In a surprising move, OpenAI today abruptly fired CEO and board member Sam Altman and installed CTO Mira Murati as interim CEO. But who exactly is Mira Murati?

Murati, who holds a degree in mechanical engineering from Dartmouth College, interned at Goldman Sachs and later worked at French aerospace group Zodiac Aerospace. She then spent three years at Tesla as a senior product manager on the Model X, during which the automaker shipped early versions of the vehicle.

In 2016, Murati joined Leap Motion, a startup developing hand- and finger-tracking motion sensors for PCs, as Vice President of Product and Engineering. Murati wanted interacting with a computer to feel “as intuitive as playing with a ball,” she said in a Fast Company interview. But she soon realized the technology, which relied on VR headsets, was premature.

In 2018, Murati joined OpenAI as Vice President of Applied AI and Partnerships. In 2022, she was promoted to CTO and went on to lead the company’s work on the viral AI-powered chatbot ChatGPT, the text-to-image model DALL-E, and Codex, the code-generation system that powers GitHub’s Copilot.

So what kind of interim CEO will Murati be? Perhaps she will choose not to make waves while OpenAI’s board searches for a permanent replacement. But her remarks in interviews make clear that she sees multimodal models, models like GPT-4 with vision that understand images as well as text, as the company’s future and one of the most promising paths to highly capable AI. Murati also appears to be a strong believer in publicly testing this type of AI to probe its flaws and potentially discover new use cases.

“One of the reasons we wanted to pursue DALL-E was to better understand the world and have these models understand the world the same way we do,” Murati told Fast Company. “It brings the technology into contact with reality. You see how people use it, what the limitations are. You learn from that, and you can feed that back into technology development. Another dimension is that you can see how much [the technology] actually moves the needle in solving real-world problems, or whether it is just a novelty.”

Murati’s standing with OpenAI’s biggest backer also appears solid. During an all-hands meeting on Friday, she reportedly told OpenAI employees that Microsoft CEO Satya Nadella and CTO Kevin Scott have “tremendous confidence” in OpenAI’s direction, and she reiterated that OpenAI is beginning a search for a new CEO.

Source: techcrunch.com

Chinese tech giants vie for $340 million investment in rival to OpenAI

It is becoming increasingly clear that two parallel AI universes are forming in the United States and China. While the US has produced notable players such as OpenAI and Anthropic, China has its own emerging contenders. Today, one of those foundation-model developers, Zhipu AI, announced that it has raised a total of 2.5 billion yuan ($340 million) so far this year.

Established in 2019, Zhipu was spun out of China’s prestigious Tsinghua University and is led by Tang Jie, a professor in the university’s Department of Computer Science and Technology.

The announcement came at a sensitive time. This week, the Biden administration imposed additional restrictions on Nvidia AI chip exports to China, further hampering Chinese companies’ ability to train large language models. In anticipation of Washington’s semiconductor bans, China’s deep-pocketed AI companies have been stockpiling these coveted chips, spending hundreds of millions of dollars on them.

To stay in this expensive AI race, Zhipu is keeping itself well-funded by raising money from local investors. The $340 million investment was made from a renminbi-denominated fund, marking a shift from a two-decade trend in which US dollar funds were the preferred funding source until geopolitical tensions created a technology gap.

In August, President Joe Biden signed an executive order restricting U.S. investment in key Chinese technology areas including AI, semiconductors, and quantum computing. Although aimed at curbing China’s military buildup, the order also weighed on China-focused U.S. venture capital firms, which now avoid investing in sensitive areas. Some, such as Sequoia Capital China (since renamed HongShan) and GGV Capital, are spinning off their China divisions so they can continue operating in the market.

HongShan invested in Zhipu along with other prominent VCs such as Shunwei Capital and Hillhouse Capital, as well as state funds managed by Legend Capital.

The AI ​​startup has also raised funding from an impressive roster of Chinese internet giants, bringing together even its biggest rivals like Alibaba and Tencent, which rarely co-invest. The lineup includes Ant Group, Alibaba, Tencent, Xiaomi, Meituan, Kingsoft, TAL Education Group, and Boss Zhipin.

Zhipu recently open-sourced ChatGLM-6B, a bilingual (Chinese and English) conversational AI model trained with 6 billion parameters that, the company claims, can run inference on a single consumer graphics card. It has also open-sourced a foundational model, GLM-130B, trained with 130 billion parameters.

Source: techcrunch.com
