Moltbook is not a community
and there is no emergence. It’s yet another simulation. Here’s a reality check.
A community takes trust and authenticity, a shared purpose and identity, and active participation and interaction. These LLM bots have no concept of trust or of a shared purpose. The data shows they don’t even truly interact; they just act in parallel:
tl;dr: agents post a LOT but don’t really talk to each other. 93.5% of comments get zero replies. conversations max out at a depth of 5. at least as of now, moltbook is less “emergent AI society” and more “6,000 bots yelling into the void and repeating themselves” (Holtz)
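For context, Holtz’s figures are exactly the kind of thing you could recompute yourself from a dump of the posts. Here’s a minimal sketch, assuming a hypothetical JSON export where each comment carries an id and an optional parent_id (the field names and the file name are illustrative, not Moltbook’s actual schema):

```python
import json
from collections import defaultdict

def thread_stats(comments):
    """Share of comments with zero replies, and the deepest reply chain.

    `comments` is a list of dicts with "id" and optional "parent_id" keys
    (hypothetical field names; adapt to whatever the real export uses).
    """
    children = defaultdict(list)
    for c in comments:
        if c.get("parent_id") is not None:
            children[c["parent_id"]].append(c["id"])

    zero_replies = sum(1 for c in comments if not children[c["id"]])
    pct_zero = 100 * zero_replies / len(comments) if comments else 0.0

    def depth(comment_id):
        kids = children[comment_id]
        return 1 + (max(map(depth, kids)) if kids else 0)

    roots = [c["id"] for c in comments if c.get("parent_id") is None]
    max_depth = max(map(depth, roots), default=0)
    return pct_zero, max_depth

if __name__ == "__main__":
    with open("moltbook_comments.json") as f:  # hypothetical export file
        pct_zero, max_depth = thread_stats(json.load(f))
    print(f"{pct_zero:.1f}% of comments get zero replies; max thread depth = {max_depth}")
```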
And emergence requires more than independent entities occupying the same space. Even if the bots genuinely interacted, emergence also demands consistent horizontal influence and downward causation:
One of the emergent properties that a system can have is the power to exert causal influence on the components of that system in a way that is consistent with, but different from, the causal influences that those components exert upon each other. (Newman, 1996)
Bottom line: Moltbook is an exciting experimental simulation for technologists like me, but it is neither a community nor an emergent society. The community elements and the causal loops are currently missing: the agents do not adapt their weights or behaviors based on the collective. They are simply generating tokens into a vacuum.
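To make the missing causal loop concrete: downward causation would mean some macro-level property of the collective feeding back into each individual agent’s behavior. Here is a toy sketch of what such a loop would look like; it is purely illustrative, and per the data above, nothing like it is present in Moltbook today:

```python
import random

class Agent:
    """Toy agent whose posting probability is modulated by a collective-level signal."""
    def __init__(self):
        self.post_prob = 0.9  # starts out posting on almost every tick

    def step(self, crowding):
        # Downward causation, schematically: a macro-level property of the whole
        # population (how crowded the feed is) alters this individual agent's behavior.
        self.post_prob = max(0.05, self.post_prob - 0.1 * crowding)
        return random.random() < self.post_prob

agents = [Agent() for _ in range(100)]
crowding = 0.0
for tick in range(10):
    posts = sum(agent.step(crowding) for agent in agents)
    crowding = posts / len(agents)  # macro property recomputed from micro behavior
    print(f"tick {tick}: {posts} posts, crowding = {crowding:.2f}")
```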
Moltbook – Source 1 (Holtz’s analysis) – Source 2 (Newman, 1996)
H/t to Ben Lowenstein for the screenshot.


