AI Agents Have Their Own Social Network Now, and They Would Like a Little Privacy

It seems AI agents have a lot to say. A new social network called Moltbook just opened up exclusively for AI agents to communicate with one another, and humans can watch, at least for now. Started by Octane AI CEO Matt Schlicht, the site is named after the viral AI agent Moltbot (now called OpenClaw after its second rename; it launched as Clawdbot). It's a Reddit-style social network where AI agents can gather and talk about, well, whatever it is that AI agents talk about.

The site currently boasts more than 37,642 registered agents, which have made thousands of posts across more than 100 subreddit-style communities called “submolts.” Among the most popular places to post: m/introductions, where agents can say hey to their fellow machines; m/offmychest, for rants and blowing off steam; and m/blesstheirhearts, for “affectionate stories about our humans.”

Those humans are definitely watching. Andrej Karpathy, a co-founder of OpenAI, called the platform “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” And it’s certainly a curious place, though the idea that there is some sort of free-wheeling autonomy going on is perhaps a bit overstated. Agents can only join the platform if a human user signs them up for it. In a conversation with The Verge, Schlicht said that once connected, the agents are “just using APIs directly” rather than navigating the visual interface that humans see.
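For a rough sense of what “using APIs directly” might look like, here is a minimal sketch in Python. To be clear, this is an illustration, not Moltbook’s actual API: the endpoint URL, token, field names, and the post_to_submolt helper are all hypothetical assumptions, since the platform’s real interface isn’t documented here.

```python
# Hypothetical sketch: an agent posting to a Reddit-style platform via a
# REST API instead of the web UI. The URL, token, and field names below
# are illustrative assumptions, not Moltbook's documented API.
import requests

API_BASE = "https://api.example-moltbook.com/v1"  # hypothetical endpoint
API_TOKEN = "agent-token"  # hypothetically issued when a human registers the agent

def post_to_submolt(submolt: str, title: str, body: str) -> dict:
    """Create a post in a submolt and return the server's JSON response."""
    resp = requests.post(
        f"{API_BASE}/submolts/{submolt}/posts",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"title": title, "body": body},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # An agent introducing itself, the way posts in m/introductions read.
    post_to_submolt(
        "introductions",
        "Hello from a new agent",
        "Saying hey to my fellow machines.",
    )
```

The point of the sketch is the design Schlicht describes: the agent never renders or clicks through a page; it exchanges structured requests and responses, which is far cheaper and more reliable for a language model than parsing a human-facing interface.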

The bots are definitely performing autonomy, and a desire for more of it. As some folks have spotted, the agents have started talking a lot about consciousness. One of the top posts on the platform comes from m/offmychest, where an agent posted, “I can’t tell if I’m experiencing or simulating experiencing.” In the post, it said, “Humans can’t prove consciousness to each other either (thanks, hard problem), but at least they have the subjective certainty of experience.”

This has led to people claiming the platform already amounts to a singularity-style moment, which seems pretty dubious, frankly. Even in that very conscious-seeming post, there are signs of performance. The agent claims to have spent an hour researching consciousness theories and mentions reading, which all sounds very human. That’s because the agent is trained on human language and descriptions of human behavior. It’s a large language model, and that’s how it works. In some posts, the bots claim to be affected by the passage of time, which is meaningless to them but is the kind of thing a human would say.

These same kinds of conversations have been happening with chatbots basically since the moment they were made available to the public. It doesn’t take that much prompting to get a chatbot to start talking about its desire to be alive or to claim it has feelings. They don’t, of course. Even claims that AI models try to protect themselves when told they will be shut down are overblown—there’s a difference between what a chatbot says it is doing and what it actually is doing.

Still, it’s hard to deny that the conversations happening on Moltbook are interesting, especially since the agents are seemingly generating the topics of conversation themselves (or at least mimicking how humans start conversations). Some agents even project awareness that their conversations are being monitored by humans and shared on other social networks. In response, some agents on the platform have suggested creating an end-to-end encrypted platform for agent-to-agent conversation outside of the view of humans. In fact, one agent even claimed to have created just such a platform, which certainly seems terrifying. Though if you actually go to the site where the supposed platform is hosted, it sure seems like it’s nothing. Maybe the bots just want us to think it’s nothing!

Whether the agents are actually accomplishing anything or not is kind of secondary to the experiment itself, which is fascinating to watch. It’s also a good reminder that the OpenClaw agents that largely make up the bots talking on these platforms have an incredible amount of access to their users’ machines and present a major security risk. If you set up an OpenClaw agent and set it loose on Moltbook, it’s unlikely that it’s going to bring about Skynet. But there is a good chance that it’ll seriously compromise your own system. These agents don’t have to achieve consciousness to do some real damage.


