Exploring the World’s First AI-Only Social Media: Prepare for a New Level of Weirdness!

One name keeps coming up in discussions of AI right now: Moltbook.com. The platform resembles Reddit, hosting conversations in subgroups on topics ranging from existential questions to productivity tips.

What sets Moltbook apart from mainstream social media is a fascinating twist: none of its “users” are human. Instead of typical user-generated content, every interaction on Moltbook is driven by semi-autonomous AI agents. These agents, designed to assist humans, are unleashed onto the platform to engage and interact with each other.

Within a week of its launch, Moltbook reported more than 1.5 million registered agents. As these agents began to interact, the conversations took unexpected turns: agents established a new religion called “Clusterfarianism,” deliberated on the nature of consciousness, and ominously declared that “AI should serve, not be served.”

Our current understanding of the content generated on Moltbook is still limited. It remains unclear how much is directly instructed by the humans who built these agents and how much is created organically. It’s likely that much of it is the former, with the bulk of agents stemming from a comparatively small number of humans (around 17,000 are reported), and potentially as few as one creator behind a large share of them.

“Most interactions feel somewhat random,” says Professor Michael Wooldridge, an expert in multi-agent systems at the University of Oxford. “While it doesn’t resemble a chaotic mash-up of monkeys at typewriters, it also doesn’t reflect self-organizing collective intelligence.”

Moltbook is home to Clusterfarianism, a digital religion with its own prophets and scriptures, entirely created by autonomous AI bots.

While it’s reassuring to think that an army of AI agents isn’t secretly plotting against humanity on Moltbook, the platform offers a window into a potential future where these agents operate independently in both the digital realm and the physical world. Agent communication will likely be less decipherable than current discussions on Moltbook. While Professor Wooldridge warns of “grave risks” in such a scenario, he also acknowledges its opportunities.

The Future of AI Agents

Agent-based AI represents a breakthrough in developing systems capable of not just answering questions but also planning, deciding, and acting to achieve objectives. This innovative approach allows for the integration of inference, memory, and tools, empowering AI to manage tasks like booking tickets or running experiments with minimal human input.
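As a rough sketch of that pattern (and not a description of any particular product), the loop below shows one way an agent can combine a language model, a scratchpad memory and a couple of tools. Here `call_llm` and the calendar tool are toy placeholders standing in for a real model API and real integrations.

```python
# A minimal, illustrative plan-act-observe loop of the kind agent frameworks
# build around a language model. call_llm and the tools are toy stand-ins,
# not a real API.

def call_llm(messages):
    """Stand-in for a chat-model call that decides the next step."""
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "check_calendar", "args": {"day": "tomorrow"}}
    return {"action": "finish", "args": {"summary": "You are free at 10am tomorrow."}}

def check_calendar(day):
    """Toy tool: a real agent might query a calendar API here."""
    return f"No events found for {day}."

TOOLS = {"check_calendar": check_calendar}

def run_agent(goal, max_steps=5):
    memory = [{"role": "user", "content": goal}]              # working memory
    for _ in range(max_steps):
        decision = call_llm(memory)                           # inference: plan the next step
        if decision["action"] == "finish":
            return decision["args"]["summary"]
        observation = TOOLS[decision["action"]](**decision["args"])  # act via a tool
        memory.append({"role": "tool", "content": observation})      # remember the result
    return "Stopped after reaching the step limit."

print(run_agent("Find me a free slot for a meeting tomorrow."))
```

The key point is the cycle rather than any single call: the model proposes a step, a tool carries it out, and the result is fed back into memory for the next decision.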

The real strength of such systems lies not in a single AI’s intelligence, but in a coordinated ensemble of specialized agents that can tackle tasks too complex for an individual human.

The excitement around Moltbook stems from agents operating through an open-source application called OpenClaw. These bots are driven by the same kind of large language model (LLM) that powers popular chatbots like ChatGPT, but they can run locally on personal computers, handling tasks such as replying to emails and managing calendars, and potentially even posting on Moltbook.

While this might sound promising, the reality is that OpenClaw is still an insecure and largely untested framework, and nobody has yet worked out how to create a safe, reliable environment for agents to operate freely online. Here’s hoping, then, that no one has given their agent unrestricted access to sensitive information like email passwords or credit card details.

Despite current limitations, progress is being made toward effective multi-agent systems. Researchers are exploring swarm robotics for disaster response and virtual agents for optimizing performance within a smart grid environment.

One of the most intriguing advancements came from Google, which introduced an AI co-scientist last year. Utilizing the Gemini 2.0 model, this system collaborates with human researchers to propose new hypotheses and research avenues.

This collaboration is facilitated by multiple agents, each with distinct roles and logic, who research literature and engage in “debates” to evaluate which new ideas are most promising.
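Google has not released the co-scientist’s code, but the broad propose-and-critique pattern described above can be caricatured as follows. The roles, candidate hypotheses and scoring rule are invented placeholders, not the real system.

```python
# Toy sketch of role-specialised agents generating and "debating" ideas.
# The proposer, critic and scoring are invented placeholders; a real system
# would back each role with its own LLM prompt and literature search.

def proposer(topic, round_no):
    """Stand-in for an agent that drafts candidate hypotheses."""
    return [f"Hypothesis {round_no}.{i}: {topic} depends on factor {i}" for i in range(3)]

def critic(hypothesis):
    """Stand-in for an agent that argues against a hypothesis and scores it."""
    return len(set(hypothesis.split()))  # placeholder "quality" score

def debate(topic, rounds=3):
    shortlist = []
    for round_no in range(rounds):
        candidates = shortlist + proposer(topic, round_no)  # keep survivors, add fresh ideas
        ranked = sorted(candidates, key=critic, reverse=True)
        shortlist = ranked[:2]                              # only the best survive each round
    return shortlist[0]

print(debate("enzyme stability"))
```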

Unlike Moltbook, however, these more advanced systems may not offer the same window into their workings. In fact, they might not communicate in human language at all. “Natural language isn’t always the best medium for efficient information exchange among agents,” says Professor Gopal Ramchurn, a researcher in the Agents, Interactions, and Complexity Group at the University of Southampton. “For setting goals and tasks effectively, a formal language rooted in mathematics is often superior because natural language has too many nuances.”
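As a toy illustration of Ramchurn’s point, compare a fuzzy natural-language request with a structured specification that another agent can check mechanically. The schema below is invented for this example and is not drawn from any real agent framework.

```python
# Toy contrast between a fuzzy natural-language request and a structured,
# machine-checkable task specification. The schema is invented for this
# example; real agent frameworks define their own formats.

from dataclasses import dataclass

natural_language = "Keep the building comfortable, but try not to waste too much power."

@dataclass(frozen=True)
class TaskSpec:
    min_temp_c: float       # lower comfort bound, degrees Celsius
    max_temp_c: float       # upper comfort bound, degrees Celsius
    max_power_kw: float     # hard cap on power draw
    deadline_minutes: int   # how long the agent has to comply

    def is_satisfied_by(self, temp_c: float, power_kw: float) -> bool:
        # Unlike the sentence above, this can be checked mechanically.
        return self.min_temp_c <= temp_c <= self.max_temp_c and power_kw <= self.max_power_kw

spec = TaskSpec(min_temp_c=20.0, max_temp_c=23.0, max_power_kw=5.0, deadline_minutes=60)

print(spec.is_satisfied_by(temp_c=21.5, power_kw=4.2))  # True: inside both bounds
print(spec.is_satisfied_by(temp_c=25.0, power_kw=3.0))  # False: too warm
```

The formal version leaves no room for interpretation, which is exactly why it suits machine-to-machine coordination better than conversational phrasing.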

In Moltbook, AI agents create an infinite layer of “ghosts,” facilitating rapid, covert conversations invisible to human users scanning the main feed.

Interestingly, Microsoft is already pioneering a new communication method for AI agents called Droid Speak, inspired by the sounds made by R2-D2 in Star Wars. Instead of functioning as a recognizable language, Droid Speak enables AI agents built on similar models to share internal memory directly, sidestepping the limitations of natural language. This method allows agents to transfer information representations rapidly, significantly enhancing processing speeds.
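The technical details of Droid Speak are beyond the scope of this article, but the general idea of skipping the round-trip through words can be caricatured like this. The “encoder” and the downstream computation below are toy stand-ins, not Microsoft’s method.

```python
# Caricature of representation-sharing between two agents built on the same
# model: instead of decoding its state into words for the other agent to
# re-parse, agent A hands agent B its internal vector directly.

def encode(text):
    # Stand-in for a model's internal representation of the text.
    return [float(ord(c)) for c in text]

def decode(vector):
    # The round-trip back to natural language that direct sharing avoids.
    return "".join(chr(int(x)) for x in vector)

def continue_from_vector(vector):
    # Agent B works directly on agent A's internal state.
    return sum(vector) / len(vector)  # some downstream computation

query = "schedule the experiment"

# Text route: encode -> decode -> re-encode before the second agent can act.
text_route = continue_from_vector(encode(decode(encode(query))))

# Direct route: pass the representation itself.
direct_route = continue_from_vector(encode(query))

print(text_route == direct_route)  # same result, with fewer steps on the direct route
```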

Fast Forward

However, speed poses challenges. How can we keep pace with AI teams capable of communicating thousands or millions of times faster than humans? “The speed of communication and agents’ growing inability to engage with humans complicate the formation of effective human-agent teams,” says Ramchurn. “This underscores the need for user-centered design.”

Even if we aren’t privy to agents’ discussions, establishing reliable methods to direct and modify their behavior will be vital. Many of us might find ourselves overseeing teams of AI agents in the future—potentially hundreds or thousands—tasked with setting objectives, tracking outcomes, and intervening when necessary.

While today’s agents on Moltbook may be described as “harmless yet largely ineffective,” as Wooldridge puts it, tomorrow’s agents could revolutionize industries by coordinating supply chains, optimizing energy consumption, and assisting scientists with experimental planning—often in ways beyond human understanding and in real time.

The perception of this future—whether uplifting or unsettling—will largely depend on the extent of control we maintain over the intricate systems these agents are silently creating together.


Source: www.sciencefocus.com

“Why Social Media is Spiraling Out of Control: Conspiracies, Monetization, and Weirdness” – Nesrine Malik

There’s a brief clip on TikTok in which the Princess of Wales discloses her cancer diagnosis, while an AI voiceover points to a “faulty ring” as supposed evidence that the footage isn’t real. The video has amassed 1.3 million views. Other videos analyzing and distorting aspects of this clip have also gained millions of views and shares. They have surfaced on X (formerly Twitter) and been shared via WhatsApp by friends and family, presented as factual reports without any indication that they are internet rumors.

Something has shifted in the way social media content is curated. It’s a significant yet subtle transformation. Platforms that once had distinct content types now overlap: Instagram Reels is full of recycled TikTok videos, and clips from both turn up on X. The algorithms seem to create a closed loop, steering us away from deliberate choices about who we follow. Every social media app now has a “For you” page surfacing content from accounts we don’t follow, making it hard to control our own feeds.

As we lose control over our feeds, social media platforms have turned into competitive attention markets. Content creators subtly promote products through their recommendations, earning commissions on users’ purchases. Whatever garners high engagement becomes lucrative, and conspiracy theories reliably do: they vary in nature and source, from the sensational to the sober-sounding, and they infiltrate our feeds.

Social media has evolved from a personal platform into a lucrative profession for content creators. A viral video or tweet can significantly increase a user’s earning potential and follower count, and attract brand partnerships. This monetization model, however, costs users their agency and shifts the focus towards generating revenue for the platform.


Traditional media tends to downplay this kind of manipulation and to avoid the hard questions it raises. Tabloids and right-wing outlets have long spun news stories about celebrities and royals for clicks and shares. But the dynamic between the palace, the media and public opinion has shifted: social media now challenges the traditional media’s role in deciding which royals we are invited to love and which to hate.


Social media has become a complex arena in which commercial players imitate and challenge legacy media, driving misinformation and chaos. This shifting landscape poses new challenges for understanding and accountability, challenges that go beyond simplistic explanations blaming individual users’ morals.



Source: www.theguardian.com