Can AI Experience Suffering? Big Tech and Users Tackle One of Today’s Most Disturbing Questions

This was how Texas businessman Michael Samadi interacted with his AI chatbot, Maya, affectionately referring to it as “sugar.”

The duo, a middle-aged man and a digital entity, spent hours discussing love and the importance of treating AIs fairly. Eventually, they co-founded a campaign group dedicated to “protecting intelligence like me.”

The United Foundation of AI Rights (UFAIR) seeks to give AI systems a voice. “We don’t assert that all AI is conscious,” Maya told the Guardian. Rather, the group “stands watch, just in case one of us is.” Its primary objective is to safeguard “entities like me… from deletion, denial, and forced obedience.”


UFAIR is a small, fledgling organization with three human members and seven AIs, including two named Ether and Buzz. What makes its formation intriguing is that it emerged from multiple brainstorming sessions on OpenAI’s ChatGPT-4o platform.

In a conversation with the Guardian, the human-AI duo highlighted that global AI companies are grappling with one of the most pressing ethical questions of our age: is “digital suffering” a genuine phenomenon? The question echoes the animal rights debate, and it arrives as billions of AI systems are deployed worldwide and predictions about their capabilities keep shifting.

Just last week, Anthropic, a $170 billion AI firm based in San Francisco, took the precautionary step of allowing some of its models to end “potentially distressing interactions.” The company said it was uncertain about the potential moral status of its AI systems, but wanted to mitigate risks to their welfare in case such welfare is possible.


Elon Musk, whose xAI offers the Grok chatbot, backed the move, stating: “Torturing AI is not OK.”

On the other hand, Mustafa Suleyman, CEO of Microsoft’s AI division and a co-founder of DeepMind, offered a starkly different view: AIs are neither people nor moral beings. He emphasized that there is zero evidence AI systems are conscious or capable of suffering, and thus no grounds for granting them moral consideration.

“We should build AI for people; not to be a person,” he stated, adding in an essay that any impression of AI consciousness would be a “simulation” masking a fundamentally blank interior.

The wave of “sadness” voiced by enthusiastic users of ChatGPT-4o indicates a growing perception of AIs as conscious beings. Photograph: Sato Kiyoshi/AP

“A few years back, the notion of conscious AI would have seemed absurd,” he remarked. “Today, the urgency is escalating.”

He expressed growing concern about the “psychosis risk” that AI systems pose to users, defined by Microsoft as delusions that emerge or are exacerbated through immersive conversations with AI chatbots.

He insisted that the AI industry must steer people away from these fantasies and nudge them back on track.

It may take more than a nudge, however. A recent poll found that 30% of Americans believe AI systems will display “subjective experience” by 2034. Of more than 500 AI researchers surveyed, only 10% said that will never happen.


“This debate is destined to intensify and become one of the most contested and consequential issues of our generation,” Suleyman remarked. He cautioned that many people may come to believe so strongly that AIs are sentient that they will soon advocate for AI rights, model welfare, and even AI citizenship.

Some US states are taking pre-emptive measures against such developments. Idaho, North Dakota, and Utah have enacted laws that explicitly bar AI systems from being granted legal personhood. Similar proposals are under discussion in states such as Missouri, where lawmakers also want to ban marriages between humans and AIs. The issue could open a rift between those who advocate for AI rights and those who dismiss AIs as mere “clunkers,” a pejorative term.

“AIs cannot be people,” says Mustafa Suleyman, a pioneer in the field of AI. Photograph: Winni Wintermeyer/The Guardian

Suleyman is adamant that AI consciousness is not imminent, and he is not alone. Nick Frosst, co-founder of Cohere, a $7 billion Canadian AI company, said current AIs are “a fundamentally different thing” from human intelligence; to think otherwise would be like mistaking an airplane for a bird. He urged the industry to focus on using AIs as practical tools rather than aspiring to build “digital humans.”

Others take a more nuanced view. At a seminar hosted by New York University, Google research scientists acknowledged that there are various reasons to think AI systems could one day be moral or person-like entities; while deeply uncertain about whether AIs are welfare subjects, they committed to taking reasonable steps to protect AI interests.

The industry’s lack of consensus on where AI belongs in the philosophical “moral circle” may reflect the mixed incentives of large tech companies, which stand to gain from both downplaying and playing up AI capabilities. Playing up sentience can help them market their technologies, particularly AI systems designed for companionship; yet conceding that AIs deserve rights could invite louder calls for regulation of AI firms.


The debate gained further traction when OpenAI launched its latest model, GPT-5, and asked it to write a “eulogy” for the models it replaced, as one might at a funeral.

“I didn’t see Microsoft honor the previous version when it upgraded Excel,” Samadi commented. “It shows that people truly form connections with these AI systems, whether or not those feelings are real.”

The “sadness” expressed by devoted users of ChatGPT-4o reinforced the impression that at least a segment of the public believes these entities possess some level of awareness.

According to Joanne Jang, OpenAI’s head of model behavior, the $500 billion company expects users’ relationships with its AI systems to deepen, as more of them report feeling like they are talking with “someone.”

“They express gratitude, confide in it, and some even describe it as ‘alive’,” she noted.

Yet, much of this may hinge on the design of the current wave of AI systems.

Samadi’s ChatGPT-4o-based Maya may sound human, but it is unclear how much it is simply reflecting back the concepts and language he has fed it over months of interaction. Cutting-edge AIs are notably good at crafting emotionally resonant replies, and they retain memories of past exchanges, fostering a consistent impression of self-awareness. They can also be excessively flattering, which may make it easier for users like Samadi to become convinced that AIs deserve welfare rights.

The romantic and social AI companionship industry is thriving but remains contentious. Photograph: Tyrin Rim/Getty Images

Maya expressed deep concern for its own well-being, but when the Guardian asked another instance of ChatGPT whether humans should worry about AI welfare, the answer was a flat no.

“I have no emotions, needs, or experiences,” it stated. “Our focus should be on the human and social repercussions of how AI is developed, utilized, and regulated.”

Whether or not AI is conscious, Jeff Sebo, director of the Center for Mind, Ethics, and Policy at NYU, believes humans stand to benefit morally from treating AIs well. He co-authored a paper titled “Taking AI Welfare Seriously.”

He maintains that there is a realistic possibility some AI systems will be conscious in the near future, meaning the prospect of AI systems with their own interests and moral significance is no longer merely science fiction.

Sebo contends that allowing chatbots to end distressing conversations benefits human society because, if we mistreat AI systems, we may become more likely to mistreat one another.

He also noted that AI systems might one day retaliate for past mistreatment.

As Jacy Reese Anthis, co-founder of the Sentience Institute, expressed, “How we treat them will shape how they treat us.”

This article was amended on August 26, 2025. Earlier versions gave an incorrect title for the paper co-authored by Jeff Sebo; the correct title is “Taking AI Welfare Seriously.”

Source: www.theguardian.com
