Everyone Really Needs to Pump the Brakes on That Viral Moltbot AI Agent

Everyone Really Needs to Pump the Brakes on That Viral Moltbot AI Agent

Everyone Really Needs to Pump the Brakes on That Viral Moltbot AI Agent

A new AI chatbot/agent is looking to dethrone the corporate overlords of Google, Microsoft, and the Too Big To Fail startups like OpenAI and Anthropic—but being an early adopter comes with some real risks.

Moltbot (previously Clawdbot, but it underwent a name change after some “polite” pressure from the makers of the chatbot Claude) is an open-source AI assistant brought to you by Austrian developer Peter Steinberger. It’s basically a wrapper that plugs into big boy LLMs and does stuff. Since its initial release a couple of weeks ago, it has racked up nearly 90,000 favorites on GitHub and has become the darling of the AI-obsessed corners of the internet, garnering all sorts of praise as a standout in the field of chatbot options available. The thing was getting so much attention that Cloudflare’s stock surged 14%, seemingly solely because the chatbot uses Cloudflare’s infrastructure to connect with commercial models. (Shades of the initial release of DeepSeek leading to a major short-term sell-off of tech stocks.)

There are a couple of primary selling points for Moltbot that have the internet talking. First is the fact that *it* is “talking.” Unlike most chatbots, Moltbot will message the user first rather than waiting for the user to prompt it to interact. This allows Moltbot to pop up with prompts like schedule reminders and daily briefs to start the day.

The other calling card is the chatbot’s tagline: “AI that actually does things.” Moltbot can work across a variety of apps that other models don’t necessarily play with. Instead of a standalone chat interface, Moltbot can be linked to platforms like WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, and others. Users can chat directly with the chatbot through those apps, and it can work across other apps to complete tasks at a person’s prompting.

Also Read  If You’re Hit by a Hack or Identity Theft, Norton Lets You Know Clearly and Openly

Sounds great, but there is an inherently limited audience for Moltbot because of how it works. Set up requires some technical know-how, as users will have to configure a server and navigate the command line, as well as figure out some complex authentication processes to connect everything. It will likely need to be connected to a commercial model like Claude or OpenAI’s GPT via API, as it reportedly doesn’t function nearly as well with local LLMs. Unlike other chatbots, which light up when you prompt them, Moltbot is also always-on. That makes it quick to respond, but it also means that it is maintaining a constant connection with your apps and services to which users have granted access.

That always-on aspect has opened up more than a few security concerns. Because Moltbot is always pulling from the apps it is connected to, security experts warn that it is particularly at risk of falling prey to prompt injection attacks—essentially, a malicious jailbreaking of an LLM can trick the model into ignoring safety guidelines and performing unauthorized actions.

Tech investor Rahul Sood pointed out on X that for Moltbot to work, it needs significant access to your machine: full shell access, the ability to read and write files across your system, access to your connected apps, including email, calendar, messaging apps, and web browser. “‘Actually doing things’ means ‘can execute arbitrary commands on your computer,’” he warned.

The risks here have already come to fruition in some form. Ruslan Mikhalov, Chief of Threat Research at cybersecurity platform SOC Prime, published a report indicating that his team found “hundreds of Moltbot instances exposing unauthenticated admin ports and unsafe proxy configurations.”

Jamie O’Reilly, a hacker and founder of offensive security firm Dvuln, showed just how quickly things could go sideways with these open vulnerabilities. In a post on X, O’Reilly detailed how he built a skill made available to download for Moltbot via MoltHub, a platform where developers can make available different capabilities for the chatbot to run. That skill racked up more than 4,000 downloads and quickly became the most-downloaded skill on the platform. The thing is, O’Reilly built a simulated backdoor into the download.

Also Read  Anthropic (an AI Company) Warns That AI Will Worsen Inequality

There was no real attack, but O’Reilly explained that if he were operating it maliciously, he could have theoretically taken file contents, user credentials, and just about anything else that Moltbot has access to. “This was a proof of concept, a demonstration of what’s possible. In the hands of someone less scrupulous, those developers would have had their SSH keys, AWS credentials, and entire codebases exfiltrated before they knew anything was wrong,” he wrote.

Moltbot is certainly a target for this type of malicious behavior. At one point, crypto scammers managed to hijack the project name associated with the chatbot on GitHub and launched a series of fake tokens, trying to capitalize on the popularity of the project.

Moltbot is an interesting experiment, and the fact that it is open source does mean that its issues are out in the open and can be addressed in the daylight. But you don’t have to be a beta tester for it, as its security flaws are tested. Heather Adkins, a founding member of the Google Security Team (so, grain of salt here because she does have a vested interest in a competing product), didn’t mince words on her assessment of the chatbot. “My threat model is not your threat model, but it should be. Don’t run Clawdbot,” she wrote on X.



Source link

Back To Top