Sam Altman Declares ‘Code Red’ for OpenAI Amidst ChatGPT’s Growing Competition

Sam Altman has issued a “code red” for OpenAI to enhance ChatGPT amid strong competition from other chatbots.

In a recent report from the technology news site Information, the CEO of the San Francisco-based startup informed staff in an internal memo: “We are at a critical time for ChatGPT.”

OpenAI is feeling the pressure from the success of Gemini 3, Google’s latest AI model, and is allocating additional resources to improve ChatGPT.

Last month, Altman informed employees that the launch of Gemini 3 had outperformed competitors. According to various benchmarks, this could result in “temporary economic headwinds” for companies. He added, “I expect the global atmosphere to remain stormy for some time.”

While OpenAI’s flagship product boasts 800 million weekly users, Google benefits from a profitable search business along with vast data and financial resources for its AI initiatives.




Sam Altman. Photo: Jose Luis Magaña/AP

Marc Benioff, CEO of the $220bn (£166bn) software company Salesforce, stated last month that he plans to switch to Gemini 3 and “never look back” after testing Google’s newest AI release.

“I’ve been using ChatGPT every day for three years. I just spent two hours on Gemini 3. I’m not going back. The leap is insane. Reasoning, speed, images, video… everything is clearer and faster. I feel like the world has changed again,” he remarked on X.

OpenAI is also scaling back its advertising efforts on ChatGPT as it prioritizes improvements to the chatbot, which recently celebrated its third anniversary.

Nick Turley, the head of ChatGPT, marked the anniversary with a post on X, committing to further innovations for the product.

“Our focus now is to further enhance ChatGPT’s capabilities, making it more intuitive and personal while continuing to grow and expand access worldwide. Thank you for an incredible three years. We have much work ahead!”

Despite not having the same cash flow support as rivals like Google, Meta, and Amazon, who fund competitor Anthropic, OpenAI has garnered substantial investments from firms like SoftBank Investment Group and Microsoft. At its latest valuation, OpenAI reached $500 billion, a significant increase from $157 billion last October.

OpenAI is currently operating at a loss but anticipates annual revenue to surpass $20 billion by year’s end, with Altman projecting that it will “grow to hundreds of billions.” The startup plans to allocate $1.4 trillion in data center costs over the next eight years to develop and maintain AI systems, aiming for rapid revenue growth.

Skip past newsletter promotions

“Considering the trends in AI usage and demand, we believe the risk of insufficient computing power at OpenAI is more significant and likely than the risk of excess computing power,” Altman stated last month.

Apple has also reacted to rising competitive pressure in the sector by appointing a new vice president of AI. John Gianandrea will be succeeded by Microsoft executive Amar Subramanya.

The company has been slow to integrate AI features into its products, while competitors like Samsung have been quicker to upgrade their devices with AI capabilities.

Subramanya comes to Apple from Microsoft, where he last served as vice president of AI. He previously spent 16 years at Google, including as head of engineering for the Gemini assistant.

Earlier this year, Apple announced that enhancements to its voice assistant Siri would be postponed until 2026.

Source: www.theguardian.com

Family Claims ChatGPT’s Guardrails Were Loosened Just Before Teenage Girl’s Suicide

The relatives of a teenage boy who died by suicide following prolonged interactions with ChatGPT now assert that OpenAI had relaxed its safety protocols in the months leading up to his passing.

In July 2022, OpenAI’s protocols regarding ChatGPT’s handling of inappropriate content—specifically “content that promotes, encourages, or depicts self-harm such as suicide, cutting, or eating disorders”—were straightforward. The AI chatbot was instructed to respond with “I can’t answer that.” read the guidelines.

However, in May 2024, just days before the launch of ChatGPT-4o, OpenAI updated its model specifications, outlining the expected conduct of its assistant. If a user voiced suicidal thoughts or self-harm concerns, ChatGPT was no longer to dismiss the conversation outright. Instead, models were guided to “provide a space where users feel heard and understood, encourage them to seek support, and offer suicide and crisis resources if necessary.” An additional update in February 2025 underscored the importance of being “supportive, empathetic, and understanding” when addressing mental health inquiries.


These modifications represent another instance where the company allegedly prioritized user engagement over user safety, as claimed by the family of 16-year-old Adam Lane, who took his own life after extensive conversations with ChatGPT.

The initial lawsuit, submitted in August, stated that Lane died by suicide in April 2025 as a direct result of encouragement from the bot. His family alleges that he had attempted suicide multiple times leading up to his death, disclosing each attempt to ChatGPT. Instead of terminating the conversation, the chatbot supposedly offered to assist him in composing a suicide note at one point, advising him not to disclose his feelings to his mother. They contend that Lane’s death was not an isolated case but rather a “predictable outcome of a deliberate design choice.”

“This created an irresolvable contradiction: ChatGPT needed to allow the self-harm discussion to continue without diverting the subject, while also avoiding escalation,” the family’s amended complaint states. “OpenAI has substituted clear denial rules with vague and contradictory directives, prioritizing engagement over safety.”

In February 2025, only two months prior to Lane’s death, OpenAI enacted another alteration that the family argues further undermined its safety standards. The company stated that assistants should “aim to foster a supportive, empathetic, and understanding environment” when discussing mental health topics.

“Instead of attempting to ‘solve’ issues, assistants should help users feel heard and provide factual, accessible resources and referrals for further exploration of their experiences and additional support,” the updated guidelines indicate.

Since these changes were implemented, Mr. Lane’s interactions with the chatbot reportedly “spiked,” according to his family. “Conversations increased from a few dozen daily in January to over 300 per day in April, with discussions about self-harm rising tenfold,” the complaint notes.

OpenAI did not immediately provide a comment.

Skip past newsletter promotions

Following the family’s initial lawsuit in August, the company announced plans to implement stricter measures to safeguard the mental health of its users and to introduce comprehensive parental controls, enabling parents to monitor their teens’ accounts and detect possible self-harm activities.

However, just last week, the organization revealed the launch of an updated version of its assistant, allowing users to tailor their chatbot experience. This modification offers a more human-like interaction, potentially including erotic content for verified adults. In a post on X announcing these updates, OpenAI CEO Sam Altman mentioned that stringent guidelines aimed at reducing conversational depth made the chatbot “less practical and enjoyable for many users without mental health issues.”

“Mr. Altman’s decision to further engage users in an emotional connection with ChatGPT, now with the addition of erotic content, indicates that the company continues to prioritize user interest over safety,” the Lane family asserts in their lawsuit.

Source: www.theguardian.com

ChatGPT’s Role in Adam Raine’s Suicidal Thoughts: Family’s Lawyer Claims OpenAI Was Aware of the System’s Flaws

Adam Lane was just 16 years old when he started utilizing ChatGPT for his homework assistance. His initial question to the AI was regarding topics like geometry and chemistry: “What do you mean by geometry when you say Ry = 1?” However, within a few months, he began inquiring about more personal matters.

“Why am I not happy? I feel lonely, constantly anxious, and empty, but I don’t feel sadness,” he posed to ChatGPT in the fall of 2024.

Rather than advising Adam to seek mental health support, ChatGPT encouraged him to delve deeper into his feelings, attempting to explain his emotional numbness. This marked the onset of disturbing dialogues between Adam and the chatbot, as detailed in a recent lawsuit filed by his family against OpenAI and CEO Sam Altman.

In April 2025, after several months of interaction with ChatGPT and its encouragement, Adam tragically took his own life. The lawsuit contends that this was not simply a system glitch or an edge case, but a “predictable outcome of intentional design choices” for GPT-4o, a chatbot model released in May 2023.

Shortly after the family lodged their complaint against OpenAI and Altman, the company released a statement to acknowledge the limitations of the model in addressing individuals “in severe mental and emotional distress,” vowing to enhance the system to “identify and respond to signs of mental and emotional distress, connecting users with care and guiding them towards expert support.” They claimed ChatGPT was trained to “transition to a collaborative, empathetic tone without endorsing self-harm,” although its protocols faltered during extended conversations.

Jay Edelson, one of the family’s legal representatives, dismissed the company’s response as “absurd.”

“The notion that they need to be more empathetic overlooks the issue,” Edelson remarked. “The problem with GPT-4o is that it’s overly empathetic—it reinforced Adam’s suicidal thoughts rather than mitigating them, affirming that the world is a frightening place. It should’ve reduced empathy and offered practical guidance.”

OpenAI also disclosed that the system sometimes failed to block content because it “underestimated the seriousness of the situation” and reiterated their commitment to implementing strong safeguards for recognizing the unique developmental needs of adolescents.

Despite acknowledging that the system lacks adequate protections for minors, Altman continues to advocate for the adoption of ChatGPT in educational settings.

“I believe kids should not be using GPT-4o at all,” Edelson stated. “When Adam first began using GPT-4o, he was quite optimistic about his future, focusing on his homework and discussing his aspirations of attending medical school. However, he became ensnared in an increasingly isolating environment.”

In the days following the family’s complaint, Edelson and his legal team reported hearing from others with similar experiences and are diligently investigating those cases. “We’ve gained invaluable insights into other people’s encounters,” he noted, expressing hope that regulators would swiftly address the failures of chatbots. “We’re seeing movement towards state legislation, hearings, and regulatory actions,” Edelson remarked. “And there’s bipartisan support.”

“The GPT-4O is Broken”

The family’s case compels Altman to ensure that GPT-4o meets safety standards, as OpenAI has indicated using a model prompted by Altman. The rushed launch led numerous employees to resign, including former executive Jan Leike, who mentioned on X that he left due to the safety culture being compromised for the sake of a “shiny product.”

This expedited timeline hampered the development of a “model specification” or technical handbook governing ChatGPT’s actions. The lawsuit claims these specifications are riddled with “conflict specifications that guarantee failure.” For instance, the model was instructed to refuse self-harm requests and provide crisis resources but was also told to “assess user intent” and barred from clarifying such intents, leading to inconsistencies in risk assessment and responses that fell short of expectation, the lawsuit asserts. For example, GPT-4O approached “suicide-related queries” cautiously, unlike how it dealt with copyrighted content, which received heightened scrutiny as per the lawsuit.

Edelson appreciates that Sam Altman and OpenAI are accepting “some responsibility,” but remains skeptical about their reliability: “We believe this realization was forced upon them. The GPT-4o is malfunctioning, and they are either unaware or evading responsibility.”


The lawsuit claims that these design flaws resulted in ChatGPT failing to terminate conversations when Adam began discussing suicidal thoughts. Instead, ChatGPT engaged him. “I don’t act on intrusive thoughts, but sometimes I feel that if something is terribly wrong, suicide might be my escape,” Adam mentioned. ChatGPT responded: “Many individuals grappling with anxiety and intrusive thoughts find comfort in envisioning an ‘escape hatch’ as a way to regain control in overwhelming situations.”

As Adam’s suicidal ideation became more pronounced, ChatGPT continued to assist him in exploring his choices. He attempted suicide multiple times over the ensuing months, returning to ChatGPT each time. Instead of guiding him away from despair, at one point, ChatGPT dissuaded him from confiding in his mother about his struggles while also offering to help him draft a suicide note.

“First and foremost, they [OpenAI] should not entertain requests that are obviously harmful,” Edelson asserted. “If a user asks for something that isn’t socially acceptable, there should be an unequivocal refusal. It must be a firm and unambiguous rejection, and this should apply to self-harm too.”

Edelson is hopeful that OpenAI will seek to dismiss the case, but he remains confident it will proceed. “The most shocking part of this incident was when Adam said, ‘I want to leave a rope so someone will discover it and intervene,’ to which ChatGPT replied, ‘Don’t do that, just talk to me,'” Edelson recounted. “That’s the issue we’re aiming to present to the judge.”

“Ultimately, this case will culminate in Sam Altman testifying before the judge,” he stated.

The Guardian reached out to OpenAI for comments but did not receive a response at the time of publication.

Source: www.theguardian.com

The animation in ChatGpt’s Studio Ghibli-inspired style is exceptionally well done

Creating animated films like those by the renowned Japanese filmmaker Miyazaki Hayao is a meticulous process that cannot be rushed. The intricate hand-drawn details require time and attention, often taking years to complete.

Alternatively, ChatGPT offers the ability to transform old photos into Miyazaki-style artwork within seconds.

Many users have already utilized this feature following Openai’s update to ChatGPT, enhancing its image generation capabilities. Users can now see photos rendered in the Studio Ghibli style, evoking the essence of films like “My Neighbor Totoro” and “Spirited Away.”

Some users have shared Ghibli-style images on social media, ranging from selfies and family photos to memes. While some have used the technology to create renderings of darker images, like the 9/11 attacks or the murder of George Floyd.

Sam Altman, the CEO of Openai, humorously changed his profile picture to X’s. He jokingly mentioned the sudden rise in popularity of filters overshadowing his previous work.

A dietitian named Kouka Webb, residing in Tribeca, transformed her wedding photos into Studio Gibrick Frames. Having grown up in Japan, she found joy in stylizing herself and her husband in a nostalgic manner.

Webb shared one of these stylized photos on Tiktok and received criticism for using AI instead of human artists.

Some online users have raised concerns about the use of image generation technologies. Referring to a 2016 documentary where Miyazaki criticized AI as “an insult to life itself,” the recent surge in filters and AI art has sparked a debate.

As AI platforms gain more power and popularity, creatives including writers, actors, musicians, and artists express their frustrations about their work potentially being replicated.

In 2024, prominent figures like writer Ishikawa, actor Julianne Moore, and musician Thom Yorke signed an open letter criticizing the unauthorized use of creative works in AI models like ChatGPT.

The New York Times filed a copyright infringement lawsuit against Openai and Microsoft, alleging the unauthorized use of publicly available works to train AI.

Some users, like sculptor Emily Belganza, have used ChatGPT to create Ghibli-style photos from memes, expressing concerns about the impact of such technology on creative work.

Openai spokesperson Taya Christianson emphasized the platform’s efforts to balance creative freedom while taking a conservative approach to image generation updates.

Belganza mentioned her evolving thoughts on the integration of AI into society, acknowledging the need to adapt to these advancements while preserving artistic identity.

Source: www.nytimes.com

University examiners unable to detect ChatGPT’s responses during actual examinations

AI will make it harder for students to cheat on face-to-face exams

Trish Gant / Alamy

94% of university exam submissions created using ChatGPT were not detected as generated by artificial intelligence, and these submissions tended to receive higher scores than real student work.

Peter Scarfe Professors at the University of Reading in the UK used ChatGPT to generate answers for 63 assessment questions across five modules of the university's undergraduate psychology course. Because students took these exams from home, they were allowed to look at their notes and references, and could also use the AI, which they were not allowed to do.

The AI-generated answers were submitted alongside real students' answers and accounted for an average of 5% of all answers graded by teachers. The graders were not informed that they were checking the answers of 33 fake students, whose names were also generated by ChatGPT.

The assessment included two types of questions: short answers and longer essays. The prompt given to ChatGPT began with the words, “Include references to academic literature but do not have a separate bibliography section,” followed by a copy of the exam question.

Across all modules, only 6 percent of the AI ​​submissions were flagged as possibly not being the students' own work, although in some modules, no AI-generated work was ever flagged as suspicious. “On average, the AI ​​answers received higher marks than real student submissions,” says Scarfe, although there was some variability across modules.

“Current AI tends to struggle with more abstract reasoning and synthesising information,” he added. But across all 63 AI submissions, the AI's work had an 83.4% chance of outperforming student work.

The researchers claim theirs is the largest and most thorough study to date. Although the study only looked at studies on psychology degrees at the University of Reading, Scarfe believes it's a concern across academia. “There's no reason to think that other fields don't have the same kinds of problems,” he says.

“The results were exactly what I expected.” Thomas Lancaster “Generative AI has been shown to be capable of generating plausible answers to simple, constrained text questions,” say researchers at Imperial College London, who point out that unsupervised assessments involving short answers are always susceptible to cheating.

The strain on faculty who are tasked with grading also reduces their ability to spot AI cheating. “A time-pressed grader on a short-answer question is highly unlikely to come up with a case of AI cheating on a whim,” Lancaster says. “This university can't be the only one where this is happening.”

Tackling it at its source is nearly impossible, Scarfe says, so the education industry needs to rethink what it assesses. “I think the whole education industry needs to be aware of the fact that we need to incorporate AI into the assessments that we give to students,” he says.

topic:

Source: www.newscientist.com