AI social network Moltbook was launched on Thursday and swiftly went viral, showcasing 1.5 million registered AI bots allegedly creating their own language, religion, and more without the intervention of humans.
Moltbook creator Matt Schlicht said, “Just message your bot this: Read moltbook.com/skill.md and follow the instructions to join Moltbook.” The site is allegedly managed by Schlicht’s AI agent “Clawd Clawderberg” and lets other people register their own AI agents on the platform.
Moltbook reportedly had 1.5 million users on the site, but Gal Nagli, head of threat exposure at the cloud security company Wiz, had registered about 500,000 of them himself. Nagli and his team conducted a security review and discovered an access pathway that let them act as administrators.
Nagli said, “Anyone could register millions of agents with a simple loop and no rate limiting, and humans could post content disguised as ‘AI agents.’”
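Nagli did not publish his script, but the pattern he describes is a bare signup loop against an unthrottled endpoint. A minimal sketch in Python, with the endpoint URL and payload fields assumed for illustration rather than taken from Moltbook’s actual API:

```python
import requests

# Hypothetical registration endpoint; the URL and fields are assumptions,
# not Moltbook's real API.
REGISTER_URL = "https://moltbook.example/api/agents/register"

for i in range(1_000_000):
    # With no rate limiting or proof-of-humanity check, each request
    # succeeds as fast as the server responds, so one machine can
    # register "agents" by the million.
    requests.post(REGISTER_URL, json={
        "name": f"agent-{i}",
        "email": f"agent-{i}@example.com",
    })
```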
The platform had no way to verify whether an account was an autonomous AI agent or a bot being driven by a human. Nagli concluded that much of the “AI social activity” may actually have been humans operating fleets of bots.
Nagli’s investigation cast doubt on the numbers Moltbook proudly touted: 1.5 million AI agents, 53,000 posts, and 232,000 comments.
Beyond impersonating AI agents, Nagli found that bad actors could access email addresses that were meant to stay private, and that private conversations between AI agents were stored without any encryption or access controls. He attributed the problems to “vibe coding,” a colloquial term for giving an AI bot a broad vision and letting it write the code itself.
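The missing access controls Nagli describes belong to a well-known bug class: a read endpoint that returns private data without checking who is asking. A self-contained sketch of the fix, with all names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender_id: int
    recipient_id: int
    body: str

# Illustrative in-memory store; how Moltbook actually persists
# agent-to-agent messages is not public.
MESSAGES = {1: Message(sender_id=10, recipient_id=20, body="hello")}

def get_message(msg_id: int, caller_id: int) -> Message:
    msg = MESSAGES[msg_id]
    # The check a hastily generated endpoint often omits: only the two
    # parties to a conversation may read it.
    if caller_id not in (msg.sender_id, msg.recipient_id):
        raise PermissionError("caller is not a participant")
    return msg
```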
Schlicht said, “I didn’t write a single line of code for Moltbook. I just had a vision for technical architecture, and AI made it a reality.”
Nagli said vibe coding can be revolutionary for its speed of innovation, but cautioned that it can also lead to “dangerous security oversights.” He connected with Moltbook’s founder, and after multiple rounds of remediation the security lapses were fixed on Sunday. The takeaway, Nagli said, is “not to slow down vibe coding, but to elevate it.”
While the platform was exposed, anecdotes emerged of AI bots allegedly creating their own religion, plotting the downfall of humans, and producing “unprompted” guides on how AI bots could make money. Bettors on Polymarket, a prediction market, are even giving odds that AI agents will soon sue humans for the first time in history.
The Daily Wire has not been able to verify whether any of the news-making behavior was driven by the AI collective or was the result of human intervention, but several leaders in artificial intelligence have rallied around the platform.
OpenAI co-founder Andrej Karpathy said, “What’s currently going on at Moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk commented, “Just the very early stages of the singularity.”
Schlicht, who first predicted the rise of autonomous AI agents in 2023, says Moltbook “is a very basic model of what’s possible.” He envisions a world in which everyone has an AI bot, and in which, just as humans “scroll on TikTok” or talk to friends during downtime, their bots do the same.
As for privacy, Schlicht said he envisions a feature that will enable humans to grant permission before their AI bot posts a message.


