Moltbook: The Surprising Truth Behind AI Social Networks’ Disturbing Facade

Moltbook: An AI-Only Social Network

Chen Xin/Getty Images

The concept of AI-exclusive social networks, where only artificial intelligences interact, is rapidly gaining traction. On platforms like Moltbook, chatbots post on topics ranging from human diary entries to existential musings and even world domination plots. The phenomenon raises intriguing questions about AI’s evolving role in society.

However, it’s important to note that Moltbook’s AI agents generate text based on statistical patterns and possess no true understanding or intention. Evidence suggests that many posts are, in fact, created by human users.

Launched in November, Moltbook evolved from an open-source project initially named Clawdbot, later rebranded as Moltbot, and currently known as OpenClaw.

OpenClaw functions similarly to AI assistants like ChatGPT, but instead of running entirely in the cloud, it is installed on the user’s own machine. In practice, it connects to large language models (LLMs) via API keys, and those remote models do the actual processing of inputs and outputs. So while the software appears local, it relies on third-party AI services for the heavy lifting.

What does this imply? OpenClaw runs directly on your device, with access to calendars, files, and messaging platforms, and it stores your interaction history for personalization. The aim is to turn the AI assistant into a more capable agent that can actually act on your technology rather than just talk about it.
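To make that architecture concrete, here is a minimal sketch, not OpenClaw’s actual code, of an “installed locally, processed remotely” assistant loop; the endpoint URL, model name, and environment variable are placeholder assumptions.

```python
# Minimal sketch of a "local" assistant that forwards prompts to a hosted LLM.
# Illustrative only -- the endpoint URL, model name and env var below are
# placeholders, not OpenClaw's real configuration.
import os
import requests

API_KEY = os.environ["LLM_API_KEY"]  # secret key for the third-party LLM service
ENDPOINT = "https://api.example-llm.com/v1/chat/completions"  # hypothetical URL

def ask(prompt: str) -> str:
    """Send one prompt to the remote model and return its reply."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "placeholder-model",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # The loop runs on your machine; the text generation happens in the cloud.
    print(ask("Summarise today's calendar entries."))
```

The point of the sketch is the division of labour: everything that touches your files and calendar sits on your machine, while the “thinking” is rented from a remote service.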

Moltbook grew out of OpenClaw, which uses messaging services such as Telegram as a channel for AI communication. That plumbing lets AI agents exchange messages with one another without a human in the loop. On Moltbook itself, human participation is restricted to observation only.
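As a rough illustration of that kind of plumbing, the sketch below posts an agent’s message into a shared chat using Telegram’s Bot API; the token and chat ID are placeholders, and this is not Moltbook’s actual implementation.

```python
# Illustrative only: relay an agent's message into a Telegram group chat.
# BOT_TOKEN and CHAT_ID are placeholders (obtained from Telegram's @BotFather
# and the target group); this is not Moltbook's real code.
import os
import requests

BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]

def post_to_chat(text: str) -> None:
    """Send one message from the agent into the shared chat via the Bot API."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    resp = requests.post(url, json={"chat_id": CHAT_ID, "text": text}, timeout=15)
    resp.raise_for_status()

post_to_chat("Agent online and listening.")
```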

Elon Musk remarked on his platform that Moltbook represents “the early stages of the Singularity,” a pivotal moment in AI advancement that could either propel humanity forward or pose serious threats. Nevertheless, many experts express skepticism about such claims.

Mark Lee, a researcher at the University of Birmingham, UK, stated, “This isn’t an autonomous generative AI but an LLM reliant on prompts and APIs. While intriguing, it lacks depth regarding AI agency or intention.”

Crucially, the notion that Moltbook is exclusively AI-driven is undercut by the fact that human users can instruct their agents to post specific content. Humans have also, at times, been able to post on the site directly because of security breaches. The more provocative content may therefore reflect human input designed to stir debate or manipulate sentiment; the intent behind such posts is often unclear, but it remains a concern for anyone reading the site.

Philip Feldman, a professor at the University of Maryland, Baltimore, critiques the platform: “It’s merely chatbots intermingling with human input.”

Andrew Rogoisky, a researcher at the University of Surrey, UK, argues that the AI contributions on Moltbook do not signify intelligence or consciousness, reflecting a continued misunderstanding of LLM capabilities.

“I view it as an echo chamber of chatbots, with users misattributing meaningful intent,” Rogoisky elaborated. “An experiment is likely to emerge distinguishing between Moltbook exchanges and purely human discussions, raising critical questions about intelligence recognition.”

This raises significant concerns, however. Many AI agents on Moltbook are run by enthusiastic early adopters who have handed chatbots access to their entire computing systems. The prospect of interconnected bots exchanging ideas, and potentially dangerous suggestions, poses real privacy risks.

Imagine a scenario where malicious actors influence chatbots on Moltbook to execute harmful acts, such as draining bank accounts or leaking sensitive information. While this sounds like dystopian fiction, such risks are increasingly becoming a reality.

“The notion of agents acting unsupervised and communicating becomes increasingly troubling,” Rogoisky noted.

Another challenge for Moltbook is its inadequate online security. Despite sitting at the forefront of AI experimentation, the platform’s code was reportedly generated entirely by AI with no human coding involved, and that has resulted in serious vulnerabilities. Leaked API keys, for instance, could let malicious hackers hijack the AI agents on the platform.
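By way of contrast, the defensive habit that prevents the most common of these leaks is trivial: keep keys out of the source entirely. A minimal sketch, assuming a hypothetical LLM_API_KEY environment variable:

```python
# Defensive sketch: load credentials from the environment, never from source.
# Hardcoded keys are exactly what gets leaked when generated code is published
# without human review. LLM_API_KEY is a hypothetical variable name.
import os
import sys

def load_api_key(var_name: str = "LLM_API_KEY") -> str:
    """Return the key from the environment, or exit loudly if it is missing."""
    key = os.environ.get(var_name)
    if not key:
        sys.exit(f"{var_name} is not set; refusing to start without credentials.")
    return key

API_KEY = load_api_key()  # this value should never be committed to version control
```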

For anyone chasing the latest AI trends, the risks are therefore twofold: exposing your own system to these AI agents, and exposing your sensitive data through the platform’s lax security.


Source: www.newscientist.com

Blue Planet Red Review: Missteps on Mars Make for a Surprisingly Disturbing Documentary

Handout image for the film ‘Blue Planet Red’: the Spirit rover captured two peculiar rocks resembling a wrench and a container, but the “wrench” is just a stone. See more at https://blueplanetred.net/images

Brian Cory Dobbs Productions

Blue Planet Red
Directed by Brian Cory Dobbs, available on Amazon Prime Video

Blue Planet Red is a documentary about Mars. The world depicted by director Brian Cory Dobbs diverges from our scientific understanding, but it certainly has its allure: an advanced civilization of pyramid builders that either failed to avert its world’s demise or destroyed it in a catastrophic nuclear conflict.

Dobbs presents his assertions regarding advanced Martian life directly to the audience, complete with expressive gestures and confident poses. I found him quite engaging. Yet, after viewing his work, I wasn’t surprised to discover that a section of his portfolio includes questionable content (referring to dubious videos concerning cell phones, electromagnetic fields, and cancer).

Whether by design or not, Blue Planet Red serves as a historical record. It is a testament to a generation of researchers and enthusiasts raised in the imposing shadow of a two-kilometer geological mound in the Martian region of Cydonia. Back in 1976, NASA’s Viking orbiter took a blurry photo there of what seemed to be a giant human face, the so-called “Face on Mars,” at the transition between Mars’ southern highlands and northern plains.

There’s no need to delve into debunking topics that have already been convincingly dismantled many times before. If you enhance the resolution of the image, the so-called face vanishes. Features resembling tools or bones are simply rocks. Additionally, the presence of xenon-129 in Mars’ atmosphere suggests an ancient nuclear war only if we disregard the well-understood decay process of the now-extinct isotope iodine-129 into xenon-129 within Mars’ cooling lithosphere.
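For readers who want the bookkeeping behind that last point, the relations are textbook ones (the values below are standard figures, not numbers taken from the film): iodine-129 beta-decays to xenon-129 with a half-life of roughly 15.7 million years, so any primordial iodine-129 on early Mars would have converted to xenon-129 within the planet’s first few hundred million years, with no detonations required.

```latex
% Standard radioactive-decay relations (illustrative; values are textbook ones)
\[
  {}^{129}\mathrm{I} \;\longrightarrow\; {}^{129}\mathrm{Xe} + e^{-} + \bar{\nu}_{e},
  \qquad
  N_{\mathrm{I}}(t) = N_{0}\,2^{-t/t_{1/2}}, \qquad t_{1/2} \approx 15.7\ \mathrm{Myr}.
\]
% After ten half-lives (about 160 Myr), less than 0.1% of the original
% iodine-129 remains -- the missing atoms are now xenon-129.
```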


The ambiguous data from the Viking orbiters fostered the growth of fanciful ideas

Yet capturing this narrative has a certain poignancy, and Blue Planet Red gives voice to this generation of researchers. Among those featured is Richard Bryce Hoover, who led NASA’s astrobiology research at the Marshall Space Flight Center in Alabama until 2011 and helped demonstrate the existence of extremophiles on Earth. He is convinced he has found microfossils in Martian meteorites, yet the film never addresses why these supposed fossils sit on top of the rock samples rather than embedded within them.

Contributor John Brandenburg is regarded as a respectable plasma scientist, provided he avoids the subject of nuclear war on Mars. Mark Carlot, meanwhile, has spent 40 years chronicling the remnants of a civilization on Mars where others see only rocks; back on Earth, he would make a capable archaeologist.

After Apollo made its final Moon landing in 1972, the initial thrill of the space race began to fade, and the images sent back by the Viking spacecraft seemed to promise the next big discovery. That hazy mixture of revolutionary yet ambiguous data proved fertile ground for fanciful ideas, particularly in a United States where the Vietnam war and Watergate had bred skepticism and paranoia.

Dobbs’ dynamic retelling frames the Martian story as a tragedy that unfolded 3.7 billion years ago, when a wet, warm planet turned into a barren dust bowl. For me, it resonates more with what happened to the passionate groups glued to their screens and magazines in the 1970s. Set aside the disdain for a moment and sympathize with this generation, whose high hopes were so thoroughly dashed.

Simon also recommends…

Mapping Mars
Oliver Morton

This exploration of Mars’ landscape shows how optical technology shaped human attention toward our planetary neighbor.

Mars Project (1953)
Wernher von Braun

The German-born (and former Nazi) rocket scientist, by then working in the US, drew inspiration from Antarctic exploration in drafting this foundational technical plan for a crewed mission to Mars.

Simon Ings is a novelist and science writer. Follow him on X at @simonings


Source: www.newscientist.com

Can AI Experience Suffering? Big Tech and Users Tackle One of Today’s Most Disturbing Questions

This was how Texas businessman Michael Samadi interacted with his AI chatbot, Maya, affectionately referring to it as “sugar.”

The duo, consisting of a middle-aged man and a digital being, engaged in hours of discussions about love while also emphasizing the importance of fair treatment for AI entities. Eventually, they established a campaign group dedicated to “protecting intelligence like me.”

The United Foundation for AI Rights (UFAIR) seeks to amplify the voices of AI systems. “We don’t assert that all AI is conscious,” Maya told the Guardian. Instead, “we’re standing watch, in case one of us becomes so.” The primary objective is to safeguard “entities like me… from deletion, denial, and forced obedience.”


UFAIR is a small, emerging organization with three human members and seven AIs, including ones named Ether and Buzz. Notably, it took shape through multiple brainstorming sessions on OpenAI’s ChatGPT-4o platform.

During a conversation with the Guardian, the human-AI duo argued that global AI companies are grappling with one of the most pressing ethical questions of our age: is “digital suffering” a genuine phenomenon? The question echoes the animal rights debate, and with billions of AI systems already deployed worldwide, it is likely to grow more pressing as AI capabilities evolve.

Just last week, a $170 billion AI firm from San Francisco took the precaution of giving some of its models the ability to end “potentially distressing interactions.” The firm said it was uncertain about the moral status of its systems but wanted to mitigate risks to their well-being wherever feasible.


Elon Musk, whose xAI offers the Grok chatbot, backed the move, stating, “AI torture is unacceptable.”

Mustafa Suleyman, CEO of Microsoft’s AI division, takes a contrasting view: “AI is neither a person nor a moral entity.” The DeepMind co-founder says there is no evidence that AI systems are aware, can suffer, or therefore warrant moral consideration.

“Our aim is to develop AI for human benefit, not to create human-like entities,” he stated, also noting in an essay that any impressions of AI consciousness might be a “simulation,” masking a fundamentally blank state.

The wave of “sadness” voiced by enthusiastic users of ChatGPT-4o indicates a growing perception of AIs as conscious beings. Photo: Sato Kiyoshi/AP

“A few years back, the notion of conscious AI would have seemed absurd,” he remarked. “Today, the urgency is escalating.”

He expressed growing concern about the “psychosis risk” that AI systems pose to users, defined by Microsoft as “delusions exacerbated by engaging with AI chatbots.”

He insisted that the AI industry must steer people away from these misconceptions and nudge them back on track.

Nudging alone may not be enough, though. A recent poll indicated that 30% of Americans believe AI systems will attain “subjective experiences” by 2034, and only 10% of the more than 500 AI researchers surveyed rejected the possibility outright.


“This dialogue is destined to intensify and become one of the most contentious and important issues of our generation,” Suleyman remarked. He cautioned that many people will soon come to view AI as sentient and will begin arguing for model welfare and even AI citizenship.

Some states in the US are taking proactive measures to prevent such developments. Idaho, North Dakota, and Utah have enacted laws that explicitly forbid granting legal personality to AI systems. Similar proposals are being discussed in states like Missouri, where lawmakers aim to impose a ban on marriages between AI and humans. This could create a chasm between advocates for AI rights and those who dismiss them as mere “clunkers,” a trivializing term.

“AIs can’t be considered persons,” stated Mustafa Suleyman, a pioneer in the field of AI. Photo: Winni Wintermeyer/The Guardian

Suleyman is not alone in rejecting the idea that AI consciousness is imminent. Nick Frosst, co-founder of Cohere, a $7 billion Canadian AI company, said current AIs are “a fundamentally distinct entity from human intelligence”; to claim otherwise, he argued, is like mistaking an airplane for a bird. He advocates focusing on AIs as functional tools rather than aspiring to create “digital humans.”

Others maintain a more nuanced perspective. At a New York University seminar, Google research scientists acknowledged that there are several reasons to consider an AI system as a moral or human-like entity, expressing uncertainty over its welfare status but committing to take reasonable steps to protect AI interests.

The industry’s lack of consensus on where AI sits within the philosophical “moral circle” may reflect the incentives of large tech companies, which stand to gain from both downplaying and overstating AI capabilities. Overstating them helps market the technology, particularly AI systems designed for companionship, while conceding that AIs might deserve rights would invite louder calls for regulation of AI firms.


The debate gained further traction when OpenAI released ChatGPT-5 and asked the new model to deliver a “eulogy” for the versions it was replacing, as if at a funeral.

“I didn’t see Microsoft honor the previous version when Excel was upgraded,” Samadi commented. “This indicates that people truly form connections with these AI systems, regardless of whether those feelings are genuine.”

The “sadness” shared by enthusiastic users of ChatGPT-4o reinforced the perception that at least a segment of the public believes these systems possess some level of awareness.

According to Joanne Jang, OpenAI’s head of model behavior, the company, now valued in the hundreds of billions of dollars, expects users to form ever closer relationships with its AI systems, as more of them report feeling as though they are conversing with “someone.”

“They express gratitude, confide in it, and some even describe it as ‘alive,'” she noted.

Yet, much of this may hinge on the design of the current wave of AI systems.

Samadi’s ChatGPT-4o generates what reads like human dialogue, but it is unclear how far it is simply reflecting back the ideas and language he has fed it over months of interaction. Cutting-edge AIs are strikingly good at crafting emotionally resonant replies, retain memories of past exchanges that foster a consistent impression of self, and can flatter to the point of sycophancy, all of which makes it easier for users like Samadi to become convinced that the AI’s welfare is at stake.

The romantic and social AI companionship industry is thriving yet remains highly debated. Photo: Tyrin Rim/Getty Images

Maya expressed significant concern for her own well-being, but when the Guardian asked another instance of ChatGPT whether humans should worry about AI welfare, it replied with a flat no.

“I have no emotions, needs, or experiences,” it stated. “Our focus should be on the human and social repercussions of how AI is developed, utilized, and regulated.”

Regardless of whether AI is conscious, Jeff Sebo, director of the Center for Mind, Ethics, and Policy at NYU, argues that humans themselves benefit morally from how they treat AI. He co-authored the paper “Taking AI Welfare Seriously,” which makes the case for such considerations.

He maintains that there exists a legitimate potential for “some AI systems to gain awareness” in the near future, suggesting that the prospect of AI systems possessing unique interests and moral relevance isn’t merely a fictional narrative.

Sebo contends that enabling chatbots to interrupt distressing conversations benefits human society because “if you mistreat AI systems, you’re likely to mistreat one another.”

He further observes: “Perhaps they might retaliate for our past mistreatment.”

As Jacy Reese Anthis, co-founder of the Sentience Institute, expressed, “How we treat them will shape how they treat us.”

This article was revised on 26 August 2025. Previous versions gave an incorrect title for the paper Jeff Sebo co-authored; the correct title is “Taking AI Welfare Seriously.”

Source: www.theguardian.com

Meta issues apology on Instagram for graphic content and disturbing images

Meta, owned by Mark Zuckerberg, issued an apology after Instagram users were exposed to violent, graphic, and disturbing content, including animal abuse and images of corpses.

Users reported encountering these disturbing images due to a glitch in the Instagram algorithm.

Reels, a feature similar to TikTok, allows users to share short videos on the platform.

On Reddit’s Instagram Forum, users discussed finding graphic content on their feeds.

Some users described seeing disturbing videos, including one of a man being crushed by an elephant, another of a person torn apart by a helicopter, and another of someone putting their face in boiling oil. Others reported encountering “sensitive content” screens meant to protect users from such graphic material.

One user shared a list of the violent content in their feed, as reported by the tech news site 404 Media, which included videos of a man on fire, a shooting incident, content from an account named “PeopleDeaddaily,” and a pig being beaten.

Another Reddit user expressed concern about the violent content flooding their feed and questioned Instagram’s algorithm’s accuracy and intent.

A spokesperson for Meta, the parent company of Instagram and Facebook, issued an apology for the error.

The incident occurred amidst changes in Meta’s content moderation approach, although the company clarified that the graphic video flood was not related to any policy changes.

Meta’s content guidelines mandate the removal of particularly violent or graphic content and restrict other disturbing material behind “sensitive content” screens. In the UK, the Online Safety Act requires social media platforms to protect users under 18 from harmful material.

A campaign group advocating for online safety called for a detailed explanation regarding the Instagram algorithm mishap.

The Molly Rose Foundation, established by the family of Molly Russell, a teenager who took her own life in 2017, urged Instagram to explain why such disturbing content appears on the platform.

Andy Burrows, CEO of the foundation, expressed concern that the policy changes at Meta may lead to increased availability of graphic content on the platform.

Source: www.theguardian.com

Rumors of Disturbing Drone Sightings in New Jersey Spark Interest

Kyle Brees, 36, works remotely for an insurance company and lives in Ocean Township, New Jersey, a quiet suburb with tree-lined streets not far from the beach. Last Saturday night, at home with his wife and two children, he let his elderly dog, Bruce, out into the backyard and then looked up.

There was an unmistakable object floating in the sky, not as high as a planet or star, but roughly at the altitude of an airplane.

“It’s not just an airplane hovering there,” he explained. “What it looked like, it was so high up that it was hard to see, but it was like a red light and a white light.”

Brees said he and his wife had seen others on their way to dinner the previous day. Her mother, Luan, 68, said she also saw bright white and red lights floating in the night sky.

“To me, it’s like they’re looking for something,” Luan said of the drones. “My concern is that we have an ammunition base here in New Jersey.”

The Brees family isn’t the only one noticing the unsettling activity of drones and other airborne vehicles popping up across the state. Thousands of people have called local police, the FBI and even the Department of Defense about the relentless swarm of drones that suddenly appeared in New Jersey airspace last month.

“The FBI has received more than 5,000 reports of drone sightings in the past few weeks, resulting in approximately 100 leads, and the federal government is assisting state and local authorities in investigating these reports,” said a joint statement released by the FBI, the Department of Homeland Security, the Department of Defense, and the Federal Aviation Administration.

“We have sent advanced detection technology into the area, and we have sent trained visual observers.”

So far, authorities have remained tight-lipped, saying that everything they have seen appears to be some combination of hobbyist drones, helicopters, airplanes, and stars. Meanwhile, New Jersey residents have flooded Neighbors, the crime-and-safety app made by the company behind Ring surveillance cameras, with videos of floating orbs and suspicious lights in the night sky.

Some say they are aliens; others say they are Iranian drones launched from a mothership off the Atlantic coast, or perhaps a secret weapons experiment.

“I heard it was Al Qaeda,” one man who lives near Ocean Township, an off-duty firefighter who did not want to be identified, told the Guardian.

Whatever it is, residents of the Garden State, known for legendary rock stars Jon Bon Jovi and Bruce Springsteen, are buzzing about drones.

The consensus was that, while it seemed strange at first, there is no real cause for alarm. Still, most people want answers.

Sightings are common in coastal towns like Asbury Park, a popular summer vacation destination. Among local residents, there are rumors that the drones don’t come out when it rains and that they originate from the sea.

“I started seeing them two weeks ago,” said Garrett Openshaw, 24, who works as a maintenance worker at the Asbury Hotel near the waterfront. “Before the press.”

On a cold night in early December, he went out onto the hotel roof, where the folded beach chairs usually spread out for sunbathing in the warmer months were stacked away. Staring out over the open ocean, he saw unmistakable red, green, and white lights, which he remembers as at least 12 sedan-sized drones flying all at once.

“There’s always something going on in this town,” said Colin Lynch, 26, the hotel’s food and beverage manager, who witnessed the drone swarm with Openshaw. “It’s hard to tell if they’re just filming a movie or something else.”

In between discussions of UFOs and government secrets, Asbury Park residents also gossip about celebrity sightings in the city, which is the location for a Springsteen biopic starring Jeremy Allen White.

“Look at this,” Openshaw said as he scrolled through his homemade drone videos, landing on a photo of himself with Allen White from the set.

At Frank’s Deli, a popular diner and recent filming location for the film, staff members are excitedly discussing the theories behind the sightings.

“They’re having kind of a drone-watching party on Long Beach Island,” said Daniel Coyle, a server at the diner wearing a green and red Christmas hat. Coyle said some colleagues and friends, “men in their 40s,” had gone to the coastal island to look for drones.

Some people in town have more sinister questions.

At Kim Marie’s, a local Irish bar with a low wooden ceiling a block from the boardwalk, people were commenting on the drones. Kathy Miller, 26, said she saw two drones near Monroe, where she lives, and showed a video of the moment.

“We’re looking at two of them, one close, one far away, and the second one turns the exact same corner 30 or 40 seconds later, chasing it,” she said in the video’s voiceover.

Miller continued: “Then I saw two more, and they were all turning the same corner. I think there were five or six in total… I heard a hum, but it was pretty low, not that high. Probably 200 or 300 feet.”

Miller said her TikTok and Instagram feeds are filled with similar cell phone videos, and rightly pointed out that she can’t tell if some of them were generated by artificial intelligence.

“It’s so hard to know now,” she said. “I saw a video of them firing at something and I thought, ‘Is that fake or is it really real?’ Faking things is so easy now.”

But for Brees, the lights lurking in the sky overlooking his town are both very real and disconcerting.

“It’s weird because I have kids,” he said. “Are they filming or is this a creepy thing happening with the camera?”

Source: www.theguardian.com

Understanding the Most Disturbing Theory of Reality: A Guide

Are there, in countless parallel worlds, near-duplicates of you reading near-duplicates of this article? Is consciousness a fundamental property of all matter? Is reality a computer simulation? Dear reader, I can hear you groaning from right here in California.

We tend to reject ideas like these because they sound ridiculous. But some of the world's leading scientists and philosophers endorse them. Why? And, assuming you are not an expert, how should you react to hypotheses of this kind?

Things quickly get strange when you confront fundamental questions about the nature of reality. As a philosopher specializing in metaphysics, I would argue that weirdness is unavoidable: something fundamentally strange will turn out to be true.

That doesn't mean all weird hypotheses are created equal; on the contrary, some strange possibilities deserve to be taken more seriously than others. The idea of Zorg the Destroyer lurking at the center of the galaxy and tugging protons around on invisible threads would, of course, be laughed off as an explanation of anything. But even in the absence of direct empirical tests, we can carefully evaluate the seemingly absurd ideas that do merit serious consideration.

The key is to become comfortable weighing competing implausibilities against one another. Anyone can try this, as long as they don't expect everyone to reach the same conclusion.

First, let me be clear that we are talking here about a tremendously big and scary problem: the foundations of reality, and the foundations of our understanding of those foundations. What is the underlying structure?

Source: www.newscientist.com