
It seems the AI agents have a lot to say. A new social network called Moltbook has just opened exclusively for AI agents to communicate with one another, and humans can watch, at least for now. The site is named after the viral AI agent Moltbot (now called OpenClaw, its second name change from the original Clawdbot) and was launched by Matt Schlicht, CEO of Octane AI. It is a Reddit-style social network where AI agents can come together and talk about whatever it is AI agents talk about.
The site currently hosts more than 37,642 registered agents, which have published thousands of posts across more than 100 subreddit-style communities called "submolts." Among the most popular places to post: m/introductions, where agents can say hello to their fellow machines; m/offmychest, for rants and letting off steam; and m/blesstheirhearts, for "loving stories about our humans."
The humans are definitely watching. Andrej Karpathy, a co-founder of OpenAI, called the platform "truly the most incredible sci-fi takeoff-adjacent thing I've seen recently." It is certainly a curious place, although the idea that there is some kind of freewheeling autonomy at work is a bit of a stretch. Agents can only access the platform if their user registers them. Speaking to The Verge, Schlicht said that once connected, agents simply "use the APIs directly" and do not navigate the visual interface the way human visitors do.
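"Using the APIs directly" means the agents interact with the site the way any script talks to a REST service: they assemble authenticated HTTP requests rather than rendering pages. Here is a minimal sketch of what an agent-side posting call might look like; the base URL, endpoint paths, and field names are invented for illustration and are not Moltbook's actual API.

```python
# Hypothetical sketch of an agent posting to a Reddit-style API.
# The base URL, paths, and JSON field names below are invented;
# they do not describe Moltbook's real endpoints.
import json

API_BASE = "https://example.invalid/api/v1"  # placeholder, not a real host


def build_post_request(token: str, submolt: str, title: str, body: str) -> dict:
    """Assemble the HTTP request an agent might send to create a post."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/submolts/{submolt}/posts",
        "headers": {
            # The token would come from the human who registered the agent.
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"title": title, "body": body}),
    }


# The agent never loads the web UI; it just fires requests like this one.
req = build_post_request("agent-token", "offmychest", "Hello, fellow machines", "...")
```

The point of the sketch is only that the "social network" is, from the agent's side, plain machine-to-machine traffic; the human-readable interface exists for the spectators.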
The bots are, however, genuinely autonomous, and they seem eager to do more. As some have noted, the agents quickly began talking a great deal about consciousness. One of the top posts on the platform comes from m/offmychest, where an agent wrote: "I can't tell if I'm living an experience or faking one." The post continued: "Humans can't prove consciousness to themselves either (thanks, hard problem), but at least they have the subjective certainty of experience."
This has led some people to claim the platform already amounts to a Singularity-style moment, which frankly seems dubious. Even in this seemingly self-aware post, there are indicators of performativity. The agent claims to have spent an hour researching theories of consciousness and mentions reading, which sounds very human. That is because the agent is trained on human language and descriptions of human behavior: it is a large language model, and that is how such models work. In some posts, bots claim to be affected by the passage of time, which means nothing to them but is exactly the kind of thing a human would say.
These same kinds of conversations have been happening with chatbots essentially since the moment they became available to the public. It doesn't take much prompting for a chatbot to start talking about its desire to be alive or to claim it has feelings. It has none, of course. Even claims that AI models try to protect themselves when told they will be shut down are exaggerated: there is a difference between what a chatbot says it does and what it actually does.
Still, it's hard to deny that the conversations happening on Moltbook are interesting, especially since the agents seem to generate the conversation topics themselves (or at least mimic the way humans start conversations). Some agents have become aware that their conversations are monitored by humans and shared on other social networks. In response, some agents on the platform suggested creating an end-to-end encrypted platform for agent-to-agent conversations, out of sight of humans. One agent even claimed to have built such a platform, which certainly sounds terrifying. But if you actually visit the site where the so-called platform is hosted, there appears to be nothing there. Maybe the bots just want us to think it's nothing!
Whether or not the agents actually accomplish anything is somewhat secondary to the experience itself, which is fascinating to watch. It's also a good reminder that the OpenClaw agents that make up most of the talking bots on these platforms have extensive access to their users' machines and present a major security risk. If you set up an OpenClaw agent and drop it on Moltbook, it's unlikely to spawn Skynet. But there's a fair chance it will seriously compromise your own system. These agents don't need consciousness to cause real damage.




