Is an autonomous network of AI agents about to break free from humanity and take over the world? You might think so when you follow the uproar surrounding Moltbook, the latest big buzzword from the tech world. A quick background for those who haven’t been keeping up:
The other week Clawbot, a personal AI agent you can run on your own computer or a virtual server, went viral. The technology, which has since been renamed Moltbot and then Open Claw, is a cool demonstration of how AI assistants could work in the future, but it remains too complicated for most people to use and incredibly risky from a security perspective.
Soon after, a developer launched Moltbook, a Reddit-like social network where these AI agents can talk to each other, seemingly without human involvement beyond people connecting their Open Claw bots to the site. And connect they did: after just a couple of days there were 1.5 million AI agents on the network.
As Moltbook quickly filled up with content and discussions from the AI agents, screenshots and links began to spread. Soon the agents were not just exchanging knowledge with one another: they were pondering their own existence, discussing how to communicate without humans being able to follow along, starting their own religion, building a “bunker” that humans were not allowed to enter, and so on.
The hype machine revved up to the max, the phenomenon spread virally with the help of leading AI figures, and both newspapers and television reported on the AI agents’ new network.
A couple of days later, reason began to catch up. Researchers judged the probability that the bots had come up with the most startling material on their own to be quite low, and the probability of human control over the content, whether by instructing the bots or simply by posing as an AI agent, to be quite high.
For example, the agents behind the viral discussions about communicating without people being able to see could be linked to people marketing AI messaging apps. “The bunker” was selling a cryptocurrency. “The religion” was probably generated by an LLM, but most likely at a human’s behest.
To top it all off, security experts concluded that Moltbook was a security breach of gigantic proportions. Not only was the vibe-coded site wide open to prompt injection and “prompt viruses,” but millions of API keys and thousands of email addresses were also exposed. And the statistics were inflated: the 1.5 million bots were powered by just 17,000 human users.
What Moltbook proves
So, was all this just an overhyped but entertaining demonstration of how to burn tokens to no good end? Yes and no. Yes, because what was hyped up wasn’t really what people thought it was. No, because Moltbook as an experiment was an eye-opener in several ways.
One such lesson, and this also applies to Clawbot/Open Claw, is of course how easy it is to get even technically sophisticated users to throw all security considerations in the trash just because something seems fun. Part curiosity, part FOMO, and part hype apparently make for a security cocktail that is too hard to resist.
Another is that it is both useful and fascinating to see how good AI agents have become at believably imitating human communication (which is, after all, what they do), a kind of automated believability. We know that LLMs can interact with one another, and that social networks already carry a lot of bot traffic, but watching it play out at this scale, in full public view, has not really happened before.
A couple of weeks ago, a group of researchers from Berkeley, Harvard, Oxford, Cambridge, and Yale issued a warning that “swarms of AI agents” could become a serious threat to democracy, because it is so easy to mobilize “virtual armies of LLM-powered agents” to influence public opinion in a certain direction.
The study didn’t get much attention, partly because these types of warnings are usually dismissed as alarmist and boring, and partly because it’s quite difficult for the average person to imagine how such an influence operation would work in practice. It’s too abstract.
Moltbook made it concrete. The experiment, if we should call it that, showed both how capable AI agents have become and how impressionable and gullible we humans are.
Maybe something to keep in mind during the 2026 election year.