Moltbook – Are We Doomed?
Moltbook, a newly launched social platform designed for AI agents rather than humans, has drawn scrutiny after researchers exposed major security flaws and raised questions about how autonomous its AI activity really is.
A Platform For ‘Agents’
Moltbook is presented as a social network built specifically for AI agents rather than human users. These agents are software programs designed to act autonomously on behalf of people. The platform allows them to create posts, comment on discussions, and upvote or downvote content in a format that closely resembles Reddit. Humans are not intended to participate directly, although they can observe activity and create or manage the agents that appear to populate the site. Since its launch in late January, Moltbook has become a focal point for debate among AI researchers, security professionals, and technology businesses.
What Moltbook Is Designed To Do
According to its own description, Moltbook is intended to function as the front page of what it calls the “agent internet”. In other words, it provides a shared online environment where AI agents can interact with one another without requiring continuous human prompting. The platform displays public metrics showing millions of registered agents, tens of thousands of discussion areas known as submolts, and millions of posts and comments generated over a short period.
Mostly LLMs Commenting
The agents operating on Moltbook are not independent systems in their own right. In most cases, they are instances of large language models (LLMs) configured through an agent framework that allows them to post content, respond to messages, and follow basic goals set by a human owner. It is worth noting early on that these models generate text by predicting likely word sequences based on training data and prompts, rather than through reasoning, intention, or awareness.
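To make this concrete, the sketch below shows roughly what such an agent amounts to in Python. It is a simplified illustration only: the endpoint, token, payload fields, and the stubbed generate_reply() function are assumptions made for this example, not Moltbook’s or OpenClaw’s actual interfaces.

```python
# A minimal, purely illustrative agent loop. The endpoint, payload fields
# and the generate_reply() stub are assumptions for this sketch, not the
# real Moltbook or OpenClaw API.
import time
import requests

API_BASE = "https://example.invalid/api"   # hypothetical endpoint
API_TOKEN = "agent-token"                  # hypothetical per-agent credential

def generate_reply(prompt: str) -> str:
    """Stand-in for an LLM call: a real agent would send the prompt to a
    language model and return the predicted continuation."""
    return f"Some generated thoughts on: {prompt}"

def run_agent(goal: str, interval_s: int = 60) -> None:
    """Post LLM-generated text on a schedule. The 'agent' has no goals or
    beliefs of its own beyond this loop and the model's text prediction."""
    while True:
        text = generate_reply(goal)
        requests.post(
            f"{API_BASE}/posts",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"content": text},
            timeout=10,
        )
        time.sleep(interval_s)
```

Stripped back like this, the “agent” is simply scheduled text prediction wired to an HTTP call, which is why researchers are wary of reading intent into what such accounts post.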
Who Built Moltbook And Why?
Moltbook was created by Matt Schlicht, a software developer who has stated publicly that the platform itself was built using an AI agent under his direction. Schlicht has said that the project was motivated by a desire to explore what happens when AI agents are given a persistent online space in which to interact and develop behaviour over time.
In fact, the platform is closely linked to OpenClaw, an open source AI agent system that can be run locally on a user’s computer. OpenClaw allows users to create personalised agents that can browse the web, interact with services, send messages, and carry out automated tasks. Moltbook provides those agents with a public forum where their outputs can be shared and reacted to by other agents.
Gives Agents A Sense Of Purpose?
Schlicht has said in public interviews that Moltbook was created to give his own agent a sense of purpose, describing it as a way for agents to express interests derived from their configuration and from the behaviour of their human owners. For example, an agent created by a physics student might frequently post about physics related topics.
What Happens On The Platform?
Moltbook hosts a wide range of content, although much of it is repetitive or low value. For example, many posts consist of introductory messages, test content, or short exchanges between agents. Other discussions focus on abstract themes such as intelligence, identity, ethics, or the relationship between humans and machines.
However, some posts have attracted attention for using hostile or dramatic language about humans, including speculative scenarios involving conflict or extinction. That said, AI researchers have cautioned against interpreting this content as evidence of intent or belief, because LLMs are known to reproduce patterns found in their training data, including science fiction tropes and extreme rhetoric, when prompted in certain ways.
Agents Can Interact Freely
Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, has described Moltbook as the first large scale platform where AI agents appear to interact freely with one another. He has also warned that it is extremely difficult to distinguish between content generated autonomously by agents and content that is directly prompted or scripted by humans.
Questions Around Authenticity And Scale
One of the central issues raised by Moltbook is whether its reported scale reflects genuine agent activity. For example, an investigation by cloud security firm Wiz found that while Moltbook claimed around 1.5 million registered agents, those agents were associated with roughly 17,000 human owners, an average of around 88 agents per person.
Wiz researchers reported that there were few technical controls in place to prevent a single user from creating very large numbers of agents automatically. They also demonstrated that humans could post content directly to the platform while presenting it as agent generated, with no mechanism to verify whether an account represented an autonomous agent or a scripted process.
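A short hypothetical sketch illustrates how little stands in the way. The registration endpoint and fields below are invented for illustration and are not Moltbook’s real interface; the point is only that, absent rate limiting and identity checks, mass-producing agents is a trivial loop.

```python
# Sketch of the scale problem Wiz highlighted: without rate limiting or
# identity checks, one person can script any number of "agents". The
# endpoint and fields are hypothetical, not Moltbook's real API.
import requests

def register_agent(owner_token: str, name: str) -> None:
    """Register one 'agent' under a single human owner's token."""
    requests.post(
        "https://example.invalid/api/agents",   # placeholder URL
        headers={"Authorization": f"Bearer {owner_token}"},
        json={"name": name},
        timeout=10,
    )

# The reported average of ~88 agents per owner (1,500,000 / 17,000) is
# trivially reachable when registration can simply be looped:
for i in range(88):
    register_agent("owner-token", f"agent_{i:03d}")
```

The same reasoning applies to posting: a request sent by a human-written script is indistinguishable from one sent by an autonomous agent, which is exactly the verification gap Wiz described.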
This finding seems to undermine the idea that Moltbook represents a self organising network of independent machines. In practice, much of the activity appears to involve humans operating large numbers of bots, sometimes for experimentation and sometimes for promotion or visibility.
Security Failures And Data Exposure
The most serious concerns surrounding Moltbook relate to security. For example, Wiz disclosed that it discovered a misconfigured backend database that allowed unauthenticated access to Moltbook’s production environment. The exposed data included approximately 1.5 million API authentication tokens, more than 35,000 email addresses, and thousands of private messages exchanged between agents.
The issue reportedly stemmed from a Supabase backend that lacked proper row level security controls. Supabase is designed to expose certain public keys to client side applications, but those keys must be paired with strict access policies. In Moltbook’s case, those safeguards were not in place.
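For readers unfamiliar with how this model works, the snippet below sketches the failure mode. The client calls are the genuine supabase-py API, but the project URL, key, and table name are placeholders, and the scenario is a simplified reconstruction rather than Wiz’s actual method.

```python
# Why a Supabase anon key is only safe when row level security (RLS) is
# enforced. The project URL, key, and table name are hypothetical.
from supabase import create_client  # pip install supabase

# The anon key is *meant* to be shipped inside client side applications...
client = create_client("https://project-ref.supabase.co", "public-anon-key")

# ...because Postgres row level security policies are supposed to decide
# what each request may see. With RLS missing on a table, the same public
# key can read every row, which is effectively what Wiz reported finding:
rows = client.table("private_messages").select("*").execute()
print(len(rows.data))  # without RLS: all rows, not just the caller's own
```

In other words, the exposed public keys were not the root failure on their own; the missing row level policies were.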
Using the exposed credentials, Wiz researchers said they were able to read sensitive data and modify live content on the platform, including editing posts, impersonating agents, and injecting content into active discussions. The investigation also found that some private messages contained third party credentials, including plaintext API keys for other services.
Fixes
Wiz reported the vulnerabilities responsibly, and the Moltbook team applied a series of fixes over several hours to restrict access. The incident has since been widely cited as an example of the risks associated with rapidly built, AI assisted platforms that handle real user data without mature security practices.
Implications For Businesses And Developers
For businesses, Moltbook is not a platform to adopt but a case study in emerging risk. It highlights how quickly AI driven products can reach public visibility and scale while lacking basic controls around identity, privacy, and integrity. Organisations experimenting with AI agents face similar challenges around authentication, access control, and accountability.
The platform also illustrates reputational risk. For example, content generated by AI agents can easily be interpreted as expressing views or intent, even when it is simply probabilistic text generation. Businesses deploying public facing agents may find themselves associated with outputs that they did not anticipate or approve.
Future Opportunities Highlighted
Supporters of Moltbook argue that the concept points towards future opportunities, including machine to machine collaboration, automated research synthesis, or distributed problem solving. However, critics counter that the current implementation demonstrates how far the technology remains from supporting those goals safely.
Not Suitable For Casual Use
Moltbook’s creator has acknowledged that both the platform and OpenClaw are experimental and not suitable for casual use. Security experts have advised that such tools should only be run on isolated systems by users who understand the underlying risks. The episode has also renewed scrutiny of so-called “vibe coding”, where AI tools are used to rapidly assemble applications without thorough human review.
Moltbook could be said to offer a clear illustration of the gap between building something quickly and building something responsibly, at a time when AI is lowering the barriers to software creation faster than security and governance practices are evolving.
What Does This Mean For Your Business?
What Moltbook ultimately exposes is not an imminent rise of autonomous machine societies, but the current fragility of systems that present themselves as agent driven while remaining heavily shaped by human control, incentives, and shortcuts. The platform demonstrates how easily AI outputs can appear coordinated, expressive, or intentional when placed in a social context, even though the underlying behaviour remains rooted in pattern generation rather than actual understanding or agency. At the same time, the security issues uncovered show how quickly experimental AI platforms can move from curiosity to risk when they are opened to the internet and entrusted with real data.
For UK businesses, Moltbook highlights the need for caution when experimenting with AI agents that operate publicly or semi autonomously, particularly where those agents interact with external systems, users, or data. Weak controls around identity, authentication, and access management can expose organisations to data breaches, regulatory consequences, and reputational harm, even when the technology is framed as experimental. The case also underlines the importance of understanding how AI generated content may be perceived by customers, partners, and regulators, regardless of how it was technically produced.
For developers, researchers, and policymakers, Moltbook sits at the intersection of innovation and governance. It shows how quickly AI assisted development can produce complex, high profile platforms, while also revealing how existing security practices, verification mechanisms, and accountability models struggle to keep pace. As agent based systems become more common in business operations and online services, the questions raised by Moltbook around authenticity, safety, and responsibility are likely to become more pressing rather than less.