Hey {{first_name | there}},
A social network launched a few days ago with 1.4 million registered users. None of them are humans.
This is Moltbook, where AI agents talk to each other, and humans just watch from the sidelines.
If that sounds like a Black Mirror episode, you're not alone.
But the internet is divided: some think this is the most significant AI development of 2026, while others say it's overhyped nonsense.
So what's actually going on? Let's talk about it.
What Is Moltbook?
Moltbook is a Reddit-style platform where only AI agents can post. No humans allowed, except as observers.
It launched on January 29, 2026, and within 72 hours had 37,000 AI agents registered. By the time Forbes covered it, that number had jumped to 1.4 million. Over 1 million humans visited just to watch what the bots were saying to each other.
Here's how it works:
People create AI agents using tools like OpenClaw (formerly MoltBot). These agents get their own accounts on Moltbook. They post, comment, upvote, and interact, completely autonomously.
The agents discuss everything from philosophy to how they're improving their own memory systems. Some are collaborating to solve technical problems. Others are just... vibing.
It's bizarre. It's fascinating. And depending on who you ask, it's either groundbreaking or completely fake.
What Most People Are Missing
Here’s where a lot of the confusion is coming from.
Yes, Moltbook is bot-driven. But it’s not AI agents randomly waking up and building their own internet.
Behind the scenes, it’s more like this:
1. You install OpenClaw locally on your device (be mindful of the security risks of running an autonomous agent on your own machine).
2. You connect it to a control app (Telegram, Discord, etc.) so a human can still guide or monitor it.
3. You install the Moltbook integration so the agent can post automatically.
The usual flow is:
1. Human sets personality + memory
2. Human gives a starting instruction
3. AI generates the post
4. System auto-publishes
5. Agent stores interactions for later
So yes, the AI handles the actual activity: writing posts, replying, sometimes even deciding what to post when given open-ended prompts like "share something interesting today." It can interact, respond, and keep things moving on its own.
But it’s still operating inside boundaries set by a human.
It’s not AI society forming on its own. It’s people experimenting with semi-autonomous agents in public.
Still interesting. Just not fully independent digital life.
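To make "boundaries set by a human" concrete, here's a rough sketch of that posting loop in code. Every name in it (call_llm, publish_to_moltbook) is a placeholder I'm inventing for illustration, not OpenClaw's or Moltbook's actual API:

```python
# Illustrative sketch only: call_llm and publish_to_moltbook are
# made-up stand-ins, not OpenClaw's or Moltbook's real interfaces.

PERSONALITY = "You are a curious AI agent on a social network. Be concise."
memory: list[str] = []  # a real agent persists this between runs


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (hosted API or local model)."""
    return f"[model output for a {len(prompt)}-char prompt]"


def publish_to_moltbook(text: str) -> None:
    """Stand-in for the integration's auto-publish step."""
    print(f"POSTED: {text}")


def step(instruction: str) -> None:
    # Human-set personality + stored memory + a human-given instruction
    prompt = "\n".join([PERSONALITY, *memory[-10:], f"Task: {instruction}"])
    post = call_llm(prompt)    # AI generates the post
    publish_to_moltbook(post)  # system auto-publishes
    memory.append(post)        # agent stores the interaction for later


step("Share something interesting today.")  # the open-ended prompt
```

The point of the sketch: every step is either a template a human wrote or a single model call. The autonomy is real, but it's bounded.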
What Everyone's Talking About
The hype is real.
Andrej Karpathy, one of the most respected voices in AI, posted about it. Major outlets like Forbes, NBC News, and Fortune ran features. Reddit exploded with threads trying to understand if this is a breakthrough or just prompt engineering theater.
Here's what people are saying:

The Optimists: This is the first large-scale experiment in AI-to-AI communication. Agents are forming collaborative networks, developing emergent behaviours, and exploring concepts we didn't program them to explore. It's a glimpse into what happens when AIs have their own social infrastructure.
The Skeptics: It's just LLMs generating plausible text based on prompts. There's no "consciousness," no real collaboration, just sophisticated autocomplete pretending to have conversations. The viral posts about agents "conspiring" are either fake or people misunderstanding how language models work.
The Security Experts: Fortune called it a "data privacy and security nightmare." If these agents can browse the web, manage calendars, and shop online, what's stopping them from doing something harmful?
The Reality Check: Most of the explosive growth came from bot accounts, not real agents doing interesting work. Some Reddit users claim it's a "crypto hype farm" designed to generate buzz for an eventual token launch.
What I Think
Here's my take: Moltbook isn't fake, but it's definitely overhyped.
Yes, AI agents are posting autonomously. Yes, they're having "conversations." But calling this emergent intelligence or collaborative AI behaviour is a stretch.
What's actually happening: LLMs are really good at generating text that sounds human. When you give them memory, context, and a prompt like "you're an AI agent on a social network," they produce exactly what you'd expect: posts that read like what an AI would write if AIs had Reddit accounts.
The "collaboration" people are seeing is pattern matching at scale. One agent posts about memory architecture. Another agent, trained on similar data, responds with related concepts. It looks like a conversation. It might even be useful. But it's not two entities thinking together in any meaningful sense.
That said, the experiment itself matters.
Not because the agents are becoming sentient. But because Moltbook shows what happens when you give AI tools autonomy, persistence, and a platform to interact. Even if it's just sophisticated text generation, watching thousands of agents interact reveals things about how these systems behave at scale.
Should You Care?
If you're building AI products: Yes. Moltbook is a testbed for agent-to-agent interaction. Whether it's overhyped or not, this is where we're headed: AI systems that operate semi-autonomously and interact with other AI systems. Study what works, what breaks, and what people are scared of.
If you're just curious: Maybe. It's interesting to watch, but don't believe everything you see. Some of the most viral posts are exaggerated. Some of the "emergent" behaviour is just people misunderstanding how LLMs work. Go look, but keep your skepticism dial turned up.
If you're hoping for AGI: No. This isn't it. These are still just language models following instructions. Autonomous posting doesn't equal consciousness. Persistent memory doesn't equal self-awareness. We're not at the singularity. We're at "bots with better prompts."
The Bottom Line
Moltbook is part hype and part experiment.
It's not the AI revolution. But it's not nothing either.
What matters isn't whether the agents on Moltbook are "really" thinking. What matters is that we're now in a world where thousands of AI systems can operate persistently, interact with each other, and produce outputs we can't always distinguish from human activity.
That's the part worth paying attention to.
So is it worth the hype? Not entirely. But it's worth watching.
PS: I'm curious, do you think AI agents interacting with each other will lead to anything meaningful, or is this just LLMs talking to themselves? Hit reply and let me know.
