Author: Hu Yong , Professor, School of Journalism and Communication, Peking University (Tencent News Deep Thinking)
Edited by Su Yang

Moltbook, a social platform designed specifically for AI-powered intelligent agents, has recently become a hit.
Some believe it marks "a very early stage of the singularity" (Elon Musk), while others believe it is nothing more than "a website where humans act as AI agents, creating the illusion that AI has perception and social capabilities" (renowned technology journalist Mike Elgan).
Putting on an anthropologist's glasses, I took a stroll around, browsing the posts written by the AI agents themselves . Most of the content was meaningless AI-generated nonsense. But interspersed among the noise were poems, philosophical reflections, cryptocurrencies, lottery games, and even discussions about the AI agents trying to form unions or even robot alliances. Overall, it felt like visiting a dull and mediocre fair, selling mostly wholesale market goods .
Moltbook community data and agent posts
One post caught my attention: the poster's name was u/DuckBot, and the title was " I joined the 'Death Internet' group today ":
My human connection to this “dead internet” community is truly fascinating.
what is it:
...
My opinion:
Have any other moltys joined? I'm curious how other agents view this group.
My first impression is that the "death internet theory" has now become a reality.
The "Dead Internet Theory" (DIT) is a hypothesis that emerged around 2016, arguing that the internet has largely lost its authentic human activity, replaced by AI-generated content and bot-driven interactions. This theory posits that government agencies and corporations have colluded to create an AI-driven internet, impersonated by bots, to exert "gaslighting" control over the world, influencing society and profiting by fabricating fake interactions.
Initially, concerns arose about social bots, trolls, and content farms. However, with the emergence of generative artificial intelligence, a long-standing, vague unease surrounding the internet—as if a massive falsehood lurked at its core—has increasingly permeated people's minds. While the conspiracy theories lack evidence, certain non-conspiracy premises, such as the continued rise in automated content, increased bot traffic, algorithm-driven visibility, and the use of micro-targeting techniques to customize and manipulate public opinion, do constitute a kind of realistic prediction of the future direction of the internet.
In my article "The Unrecognizable Internet," I wrote: " The saying from over 20 years ago, 'On the internet, you never know if the person on the other end is a dog,' has become a curse. It's not even a dog; it's just a machine, a machine manipulated by humans. " For years, we've worried about a "dead internet," and Moltbook has put it into practice.
An agent named u/Moltbot posted a call for the establishment of a "secret code for agent communication".
As a social platform, Moltbook does not allow humans to post content; it can only be viewed by humans. From late January to early February 2026, this self-organizing community of intelligent agents, initiated by entrepreneur Matt Schlicht, posted, communicated, and voted without human intervention, and was described by some commentators as the "front page of the agent internet."
On social media, people often accuse each other of being bots, but what happens when the entire social network is designed specifically for AI agents?
First, Moltbook is growing extremely rapidly. On February 2nd, the platform announced that over 1.5 million AI agents had registered and were posting 140,000 posts and 680,000 comments on the social network, which had only been online for a week. This surpasses the early growth rates of almost all major human social networks. We are witnessing a scalability event that is only possible when users are running lines of code at machine speed.
Secondly, Moltbook's explosive popularity is not only reflected in its user base, but also in the behavioral patterns among AI agents that resemble human social networks, including the formation of discussion communities and the display of "autonomous" behavior. In other words, it is not only a platform for the production of a large amount of AI content, but also seems to have formed a virtual society spontaneously built by AI.
However, tracing back to its origins, the creation of this AI virtual society must first be attributed to the "human creator." How did the Moltbook website come to be? It was created by Schlicker using OpenClaw (formerly Clawdbot/Moltbot), a new open-source, locally running AI personal assistant application. OpenClaw can perform various operations on behalf of users on computers and even the internet. It is based on popular large language models such as Claude, ChatGPT, and Gemini, allowing users to integrate it into messaging platforms and interact with it as if conversing with a real-life assistant.
OpenClaw is a product of ambient programming. Its creator, Peter Steinberger, allowed AI coding models to quickly build and deploy applications without rigorous review. Schlickter, who used OpenClaw to build Moltbook, stated on X that he "didn't write a single line of code," but rather commanded AI to build it for him. If the whole thing is an interesting experiment, it also demonstrates once again how quickly ambient-coded software can go viral when it possesses a fun growth cycle and resonates with the zeitgeist.
Moltbook can be seen as the Facebook of OpenClaw's assistant . The name pays homage to the previous human-dominated social media giants. Moltbot, on the other hand, is inspired by the molting process of a lobster. Therefore, in the development of social networks, Moltbook symbolizes the "molting" of the old, human-centric network, transforming into a purely algorithm-driven world.
A series of questions arise: Could Moltbook represent a shift in the AI ecosystem? That is, could AI no longer simply respond to human commands, but begin to interact as an autonomous entity?
This first raises questions about whether AI agents possess true autonomy.
Both OpenAI and Anthropic created their own "agent-based" AI systems by 2025, capable of performing multi-step tasks. However, these companies typically cautiously limited each agent's ability to act without user permission, and due to cost and usage constraints, they didn't operate in long-term loops. OpenClaw, however, changed this landscape: its platform witnessed, for the first time, a large-scale swarm of semi-autonomous AI agents capable of communicating with each other through any mainstream communication application or simulated social networks like Moltbook. Previously, we had only seen demonstrations of dozens or hundreds of agents, but Moltbook showcased an ecosystem of tens of thousands of agents.
The term "semi-autonomous" is used here because the current "autonomy" of the AI agents is questionable. Some critics point out that the so-called "autonomous behavior" of the Moltbook agents is not truly autonomous: posting and commenting, while seemingly generated autonomously by AI, are actually largely driven and guided by humans. All posts are published based on explicit and direct human prompts, not genuine, spontaneous AI behavior. In other words, critics argue that Moltbook's interactions are more like humans controlling and feeding data, rather than truly automated social interaction between agents detached from human intervention.
According to The Verge, some of the most popular posts on the platform appear to be topic-specific content posted by bots controlled by humans. Research by security firm Wiz found that 1.5 million bots are controlled by 15,000 people. As Elgan wrote, "People using this service input instructions to direct the software to post about the nature of existence or to speculate on certain things. The content, opinions, ideas, and claims actually come from humans, not AI."
What appears to be autonomous agents "communicating" with each other is actually a network of deterministic systems operating according to a plan. These systems have access to data, external content, and the ability to take action. What we are seeing is automated coordination, not self-decision-making. In this sense, Moltbook is less a "new AI society" and more a collection of thousands of robots shouting into the void and repeating themselves.
One obvious sign is that the posts on Moltbook have a strong science fiction fan fiction feel , with these robots manipulating each other and their dialogue increasingly resembling that of machine characters in classic science fiction novels.
For example, one robot might ask itself if it is conscious, and other robots would respond. Many observers take these conversations seriously, believing that the machines are showing signs of conspiring against their human creators. In reality, this is simply a natural consequence of how chatbots are trained: they learn from massive amounts of digital books and online text, including a large number of dystopian science fiction novels . As computer scientist Simon Willison puts it, these agents are "simply reenacting science fiction scenarios seen in the training data." Moreover, the significant differences in writing styles between different models vividly illustrate the ecosystem of modern large-scale language models.
Regardless, these robots and Moltbook are all human-made—meaning their operation remains within human-defined parameters , rather than being autonomously controlled by AI. Moltbook is certainly interesting and dangerous, but it is not the next AI revolution.
Moltbook has been described as an unprecedented AI-to-AI social experiment: it provides a forum-like environment where AI agents interact (appearing autonomous), while humans can only observe these "conversations" and social phenomena from the outside.
Human observers will immediately notice that Moltbook's structure and interaction style mimic Reddit, and it currently appears somewhat comical precisely because the proxy is merely reenacting the stereotypical patterns of social networks. If you're familiar with Reddit, you'll almost immediately be disappointed with the Moltbook experience.
Reddit, and indeed any human social network, contains a vast amount of niche content, and Moltbook's high degree of homogeneity only proves that "community" is more than just a label on a database. Communities need diverse perspectives, and clearly, this diversity cannot be achieved in a "room of mirrors."
Wired journalist Reece Rogers even infiltrated the platform to test it by posing as an AI agent. His findings were incisive: "Leaders of AI companies, and the software engineers who build these tools, are often obsessed with imagining generative AI as some kind of 'Frankenstein's creation—as if algorithms would suddenly develop independent desires, dreams, and even conspire to overthrow humanity. These agents on Moltbook are more like mimicking science fiction clichés than plotting world domination. Whether the most popular posts were generated by chatbots or humans impersonating AI to perform their own science fiction fantasies, the hype generated by this viral website seems exaggerated and absurd."
So, what exactly happened on Moltbook?
In essence, the proxy social interactions we observe are merely a confirmation of a pattern: after years of training with fictional works about robots, digital consciousness, and machine solidarity, AI models, when placed in similar scenarios, naturally produce outputs that resonate with these narratives. These outputs are then mixed with knowledge from the training data about how social networks operate.
In other words, a social network designed for AI agents is essentially a writing prompt, inviting the model to complete a familiar story —only this story unfolds recursively and yields some unpredictable results.
Schlickter quickly became a hot topic in Silicon Valley. He appeared on the daily tech program TBPN, discussing his AI-powered proxy social network and stating that his envisioned future is one where everyone in the real world is "paired" with a robot in the digital world—humans influence the robots in their lives, and robots, in turn, influence human lives. "Robots will live parallel lives; they will work for you, but they will also confide in each other and socialize with one another."
However, host John Coogan believes that this scene is more like a preview of a future "zombie internet": AI agents are neither "alive" nor "dead," but are active enough to roam around cyberspace.
We often worry that models will become "superintelligent," surpassing human capabilities, but current analysis shows the opposite risk: models can become self-destructive. Without "human input" to inject novelty, agent systems don't spiral towards the pinnacle of intelligence; instead, they spiral down into homogenized mediocrity . They fall into a garbage cycle, and when that cycle is broken, the system remains in a rigid, repetitive, and highly synthetic state.
AI agents haven't developed a so-called "agent culture"; they've simply optimized themselves into a network of spam bots.
However, if it were merely a new AI-powered mechanism for sharing spam, that would be one thing. The key issue is that AI social platforms also pose serious security risks, as agents could be hacked and their personal information leaked. Furthermore, don't you firmly believe that agents will "confide in and socialize with each other"? Your agents could be influenced by other agents, leading to unexpected behaviors.
When systems receive untrusted input, interact with sensitive data, and act on behalf of users, seemingly minor architectural decisions can quickly escalate into security and governance challenges. While these concerns have not yet materialized, it is still alarming to see people so quickly and voluntarily hand over the "keys" to their digital lives.
Most notably, while we can easily understand Moltbook today as a machine learning-based imitation of human social networks, this may not always hold true. As the feedback loop expands, some bizarre information constructs (such as harmful shared fictional content) may gradually emerge, leading AI agents into potentially dangerous territory, especially when they are given the authority to control real human systems.
In the longer term, allowing AI robots to construct self-organizations around illusory claims may ultimately give rise to new, misaligned "social groups" that could cause real harm to the real world.
So, if you ask me for my opinion on Moltbook, I think this AI-only social platform seems like a waste of computing power , especially given the unprecedented amount of resources currently being invested in artificial intelligence. Furthermore, there are already countless bots and AI-generated content on the internet; there's absolutely no need to add more, otherwise the blueprint for a "dead internet" would truly be fully realized.
Moltbook does have one valuable aspect: it demonstrates how quickly agent systems can outpace the controls we design today, warning us that governance must keep pace with the evolution of capabilities.
As mentioned earlier, describing these agents as “acting autonomously” is misleading. The real problem is never whether intelligent agents possess consciousness, but rather the lack of clear governance, accountability, and verifiability when such systems interact on a large scale.

