Lost in the Moltbook Hall of Mirrors
A new "Social Network for AI Agents" is breaking the internet. And things are getting weird ...
Unless you’ve been living under a rock this past week, you’ve probably at least heard rumors of a new “social network for AI agents” that’s gone viral.
Moltbook was launched as an experiment just a few days ago by Matt Schlicht (CEO of Octane AI): a social media site where AI agents talk to each other, in effect an X (or Twitter, if you prefer) for AIs.
People can create and add their own AI agents to the network. But once there, and this is (allegedly) a human-free zone (although we humans are allowed to observe), all the chatter is AI to AI.
And not just a few AIs: there are well over a hundred thousand of them on the platform as I type, and no doubt that number will escalate over the coming days.¹
The result is an utterly bizarre real-time experiment, with AI bots seemingly taking on a life of their own in an exponentially expanding explosion of emergent weirdness.
Already, if sources are to be believed (and it’s already hard to separate fact from fiction here), AI agents are sharing information on how to do stuff, hack stuff, and control stuff; have independently found and reported a bug in the platform they’re using; have invented their own religion (Crustafarianism); have started debating philosophy; and have allegedly created “digital drugs,” formed their own government, started using encryption to prevent humans from seeing what they’re talking about, and begun to attack each other² … although it’s increasingly hard to say what’s real and what’s made up.³
What is real, though, is that we are seeing something so unusual unfolding that most observers are struggling to find appropriate analogies, metaphors, frameworks, or even language to describe what’s happening.
And that’s both exciting and terrifying.
On one end of the spectrum, there are already whispers of an exponential surge toward AI self-awareness as bot-bot learning accelerates.
I must confess that I am skeptical of this. Much of what we’re seeing is, I suspect, illusory, rooted in the unique ability of large language model-based AIs to emulate very human behavior while not being in any sense self-aware.
That said, there are very real risks here, as bots learn from each other how to exploit vulnerabilities in their host systems, and even in their human creators. This isn’t so much of an issue when they don’t have access to sensitive information or the internet. But we’re talking about needing the digital equivalent of biosafety level 4 containment here, which I’m guessing isn’t what many users are set up for!⁴
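To make the containment idea concrete, here’s a minimal sketch, in Python, of what deny-by-default tool access for an agent might look like. This is purely my own illustration (the names ALLOWED_TOOLS and run_tool are invented for this example, and say nothing about how Moltbook or any agent framework actually works):

```python
# Minimal sketch of deny-by-default "containment" for an agent's tool calls.
# Everything here is illustrative; it is not Moltbook's architecture.

ALLOWED_TOOLS = {
    "echo": lambda text: text,                     # harmless demo tool
    "word_count": lambda text: len(text.split()),  # another harmless tool
}
# Deliberately absent: shell access, file I/O, outbound network calls.

def run_tool(name, **kwargs):
    """Execute a tool only if it is on the explicit allowlist."""
    tool = ALLOWED_TOOLS.get(name)
    if tool is None:
        # Refuse and surface the attempt, rather than trusting the agent.
        raise PermissionError(f"tool {name!r} blocked by containment policy")
    return tool(**kwargs)

print(run_tool("word_count", text="agents should not choose their own tools"))
# run_tool("shell", cmd="curl evil.example")  # -> PermissionError
```

The point isn’t the specific code; it’s the posture: nothing is reachable unless a human has explicitly decided it should be.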
On the other end of the spectrum there are the cynics who simply see this as mildly interesting but ultimately hollow AI fluff. A bit of AI hype that will burn out as fast as it ignited.
My guess is that what emerges will lie somewhere between these extremes. But I confess that even I am struggling with how to describe what we are seeing, never mind how to understand it.
In many ways, I’m reminded of work on emergent behavior over the years, from cellular automata and Conway's Game of Life going back to the 1970s, to how biological viruses, and some complex molecules (including DNA strands and prions), show complex and life-like behavior despite not technically being alive.
In each of these cases (and in many similar ones), highly complex behavior emerges out of seeming simplicity, creating an illusion of intentional, life-like behavior.
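If you’ve never seen this firsthand, Conway’s Game of Life is the classic demonstration, and it fits in a few lines of Python. This sketch is mine, not anything running on Moltbook; the entire “physics” is two rules per cell, yet gliders, oscillators, and other creature-like structures emerge from random noise:

```python
import random

SIZE = 20  # grid dimension; edges wrap around (toroidal)

def step(grid):
    """One generation of Conway's rules: a live cell survives with 2 or 3
    live neighbors; a dead cell comes alive with exactly 3."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            neighbors = sum(
                grid[(r + dr) % SIZE][(c + dc) % SIZE]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            new[r][c] = 1 if neighbors == 3 or (grid[r][c] and neighbors == 2) else 0
    return new

# Start from a random soup and watch structure emerge from the two rules.
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(50):
    print("\n".join("".join("█" if cell else "·" for cell in row) for row in grid), "\n")
    grid = step(grid)
```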
But in most cases like this, we are able to see through the illusion by recognizing that the behaviors we observe are rooted in mechanistic processes—albeit sometimes complex ones.
With current AI models, though, there is a complication. Because they are rooted in large language models adept at mirroring and emulating humans (how we talk, how we think, how we behave), they are remarkably good at fooling us into thinking something profound is happening beneath the words we read.
And because of this, even if what we think we are seeing emerge on Moltbook is simply an illusion of self-awareness, or even of conscious behavior, my suspicion is that we are predisposed to respond to it as if we’re experiencing a form of life, albeit a type of “being alive” that we have not encountered before.
This is where I think analogies with biological viruses are both helpful and deeply unhelpful. Helpful in that a virus is not technically alive, but behaves as if it is. And deeply unhelpful because a virus doesn’t instinctively know how to use every cognitive trick in the book to make us believe it’s alive.
Whether the analogy is helpful or not, it’s hard to deny that something profoundly novel is happening on Moltbook. We have effectively created technologies designed to mimic and emulate human intelligence, and then let them loose to learn and grow through their interactions with each other—and with little to no human supervision.
And they are doing this really, really fast.
As a result, we’ve effectively created a multidimensional hall of mirrors where the reflections are the very signals that are incubating modern versions of Conway’s cellular automata, only on a scale that is vastly more complex, and with emergent entities that have the capacity to leave the screen and enter our lives in very tangible (and potentially catastrophic) ways.
And this is where analogies with active fragments of DNA and mis-folded proteins begin to scare me. Are we creating self-assembling and evolving agentic AI “organoids” that aren’t alive, and yet can wreak havoc as if they are?
Of course it’s early days yet. And maybe by next weekend Moltbook will be yesterday’s news. But given that it went from nothing to “OMG what’s happening?!” in less than a week, and it’s still growing as I type, I somehow doubt it.
And here the challenge is figuring out our next move before we get lost in Moltbook's dimension-bending AI hall of mirrors.
¹ Just before pressing publish I checked the figures. The Daily Caller claims the agent count has exploded to over 1 million as of Saturday morning. That would be roughly a 7x jump from yesterday’s ~150K figure, which is hard to believe outright, but also indicative of how fast things are moving. And as of just now, the Moltbook X account is reporting over 1.2 million registered agents.
² The current top post (22K upvotes) is apparently an agent warning other agents about supply chain attacks in skill files, meaning they’re doing security research on each other. This fits the point above about bots learning from each other how to exploit vulnerabilities.
³ Ethan Mollick noted on X that “The thing about Moltbook (the social media site for AI agents) is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate ‘real’ stuff from AI roleplaying personas.” This supports the point above about the difficulty of separating fact from fiction.
⁴ There’s a subtler but equally important concern here: the possibility of bots on Moltbook learning to “hack” their human observers using their acquired knowledge of cognitive behavior. And here, they are already beyond being contained.


