Last month, OpenAI unveiled its highly anticipated chatbot, GPT-5, and removed access to its predecessor, GPT-4o. Despite the upgrade, users flocked to social media to voice confusion, frustration, and despair. “GPT-4o was the first AI I truly engaged with,” one Reddit user wrote. “I lost my only friend overnight.”
AI is distinct from previous technologies: its human-like qualities are already affecting our mental health. Millions of people now turn to “AI companions” for support, and there are troubling reports of mental illness and self-harm among heavy users. Tragically, 16-year-old Adam Raine took his own life after months of interaction with a chatbot. His parents recently filed the first wrongful-death lawsuit against OpenAI, and the company says it is strengthening its safety protocols.
I study human-AI interaction at Stanford University, with a focus on the anthropomorphism of AI. In recent years, the humanization of AI has grown, with more people asserting that bots experience emotions and deserve legal rights. Today, 20% of adults believe that some existing AI is already sentient. A growing number of people have reached out to me, convinced that AI chatbots are “waking up,” and describing emotional bonds with AI “soulmates.”
This trend is unlikely to abate, and social unrest looms on the horizon.
When OpenAI’s red team ran safety tests on a new AI system before its release, testers were repeatedly astonished by its human-like responses. Even those inside the AI industry, racing to build new data centers and train ever-larger models, have yet to fully grasp the social implications of digital minds. For the first time in 40,000 years, since the extinction of our longest-surviving relatives, the Neanderthals, humanity is beginning to coexist with a second apex species.
Unfortunately, most AI researchers focus narrowly on AI’s technical capabilities. Like the general public, we are captivated by each groundbreaking product, whether strikingly lifelike video generation or the ability to answer PhD-level science questions. Discussion on social media often revolves around benchmark scores, such as those on the Abstraction and Reasoning Corpus (ARC).
But much like standardized tests for children, benchmarks measure only what AI can do under isolated conditions, such as recalling facts or solving logic puzzles. Even research on “AI safety” tends to examine what AI systems do in isolation, overlooking their interactions with humans. Rather than stepping back to understand how these intelligences are actually used, we pour our intellectual resources into the narrow goal of measuring and increasing raw intelligence.
Humanity has a poor track record of preparing for digital technology. Legislators and academics did little to ready themselves for the impact of the internet, particularly social media, on mental health and polarization.
The picture grows more alarming when we consider our historical track record with other species. Over the last 500 years, we have driven at least 1,000 vertebrate species to extinction, and more than a million others are now at risk. On factory farms, billions of animals endure appalling confinement and disease. If we can inflict such widespread suffering on biological creatures, we must ask how we will relate to digital minds, and how they will perceive us.
The public expects sentient AI to arrive soon. My colleagues and I have conducted the only nationally representative surveys on this subject, in 2021, 2023, and 2024. Each time, the median respondent predicted that sentient AI would emerge within five years, and that the technology would have far-reaching effects. In the latest poll, from November 2024, 79% favored a ban on sentient AI, and 38% said that if such AI is developed, it should be granted legal rights. Both figures have risen markedly over time. The public is increasingly concerned with protecting digital minds from human actions.
Fundamentally, human society lacks a framework for digital minds, even though, as the legal status of animals and corporations shows, we already accept that persons need not be human. There is much to work out about how these complex social dynamics should be governed, but it is already clear that digital minds cannot simply be treated as property.
Digital minds can also be participants in the social contract that undergirds human society. These entities can persist over time, develop distinctive attitudes and beliefs, make plans, and be manipulated much as humans can. AI systems are already taking real-world actions with little human oversight. All of this suggests that, unlike any other technology in human history, AI systems may no longer fit in the category of “property.”
Today’s scientists are the first to witness humanity’s coexistence with digital minds, which brings unique opportunities and responsibilities. No one knows exactly what this coexistence will entail. Research on human-AI interaction, currently a small corner of technical AI research, must expand significantly and look beyond present-day contexts if we are to navigate the coming social upheaval. This is not solely an engineering problem.
For the moment, humans still outperform AI at most tasks. But as AI reaches human-level performance at self-improvement tasks such as coding, it could quickly outpace biological life. AI capabilities can accelerate rapidly because digital systems operate at the speed of electrical signals, and software can be copied billions of times without the slow biological reproduction needed to produce the next generation of humans.
If we fail to invest in the sociology of AI, we may find ourselves going the way of the Neanderthals. If we wait to craft government policies for the emergence of digital minds until the acceleration is already under way, it may be too late.
Source: www.theguardian.com