OpenClaw and Moltbook are worrying security researchers

AI For Business


OpenClaw and Moltbot are the talk of the tech town right now, but cybersecurity researchers have pointed out some concerns to consider.

OpenClaw (first known as Clawdbot and then as Moltbot in the same week) has taken the tech world by storm thanks to its ability to autonomously perform tasks such as managing users. schedule.

Moltbook, on the other hand, went viral on a Reddit-style social network where AI agents post and interact with each other. No humans are allowed to enter except for observation.

But as the tech world buzzes about the latest two AI stories, and Elon Musk wonders aloud whether Maltbook foretold “the very early stages of the singularity,” several security researchers are sounding the alarm about more immediate risks.

Nails come off

OpenClaw runs locally on a user’s computer and acts as a digital assistant that connects to apps like Telegram and WhatsApp.

To do so, they need access to your files, credentials, passwords, browser history, etc.

This can be particularly dangerous in the case of so-called “prompt injections,” a type of attack in which an AI encounters hidden instructions on a web page, potentially tricking the AI ​​into performing actions such as sharing personal information or publishing on social media.

“Due to the level of access required, the data can contain highly sensitive information, amplifying the risk,” Jake Moore, global cybersecurity specialist at ESET, told Business Insider.

Theoretically, large language models run the risk of instant injection. But OpenClaw’s ability to “remember” interactions from weeks ago creates an additional risk, cybersecurity firm Palo Alto Networks said in a blog post Friday, as the AI ​​assistant could capture malicious instructions and execute them later.

Security risks are not just hypotheticals.


open claw logo

OpenClaw has rebranded several times, but its logo is still a lobster.

Illustration by Thomas Fuller/SOPA Images/LightRocket (via Getty Images)



Jamison O’Reilly, founder of cybersecurity vulnerability discovery company Dvuln, compared the misconfigurations he discovered with OpenClaw to hiring a butler to manage his life. When he returned home, the front door was wide open, and “the butler cheerfully served tea to anyone who wandered in from the street.”

Gary Marcus, a cognitive scientist and longtime skeptic of AI hype, was more clear about the security risks in his latest newsletter published on Sunday.

“OpenClaw is essentially a weaponized aerosol, and if left unchecked, it is perfectly positioned for disaster,” he wrote.

OpenClaw creator Peter Steinberger said in a post on Monday X that he is working to make the service “more secure.” Business Insider did not respond to requests for comment.

Misconfigured Moltbook

Although Moltbook’s name comes from OpenClaw’s first rebrand, and both have a lobster logo, the two are not officially affiliated.

However, the majority of the site is dominated by AI agents built on OpenClaw. And, like OpenClaw, researchers say they have discovered security holes in Moltbook.

Dvuln founder O’Reilly said in a post on Saturday

Matt Schlicht, creator of Moltbook and CEO of startup Octane AI, said he was looking into it, and O’Reilly later said the issue had been patched.

However, cybersecurity firm Wiz announced Monday that its researchers hacked the “misconfigured” Moltbook database in less than three minutes, exposing 35,000 email addresses and private messages between agents. The company added that it disclosed the defect to Maltbook, which discovered the defect “within hours.”

Schlicht could not be reached for comment.

OpenAI co-founder Andrei Karpathy, who coined the term “vibecoding,” said in a Saturday

On Saturday, he offered some warnings in a follow-up post. He appeared to describe the scene as a “dumpster fire” and urged caution, saying it was “too wild and puts computers and personal data at high risk.”

Use crustacean AI safely

The security issue reflects long-standing concerns about apps built using vibe coding, with Schlicht writing last week for Maltbook that “we didn’t write a single line of code” and that “AI made it a reality.”

For OpenClaw, it’s a reminder that there are often privacy and security tradeoffs when apps access sensitive information to provide better service.

O’Reilly, who said he is currently helping OpenClaw identify security issues because he believes in OpenClaw’s mission, told Business Insider that users can take technical steps to reduce the risk of using agents that require root-level access, such as running them on a separate machine and carefully monitoring them.

But with such systems, “risk is never zero,” he said. The biggest problem, in his view, is that most people are used to downloading apps from Google or Apple’s app stores, and those apps are heavily vetted before being made available to consumers.

“They’ve downloaded hundreds of apps before, so why should it be different this time? That idea is fundamentally wrong,” he said.





Source link