Critical OpenClaw Vulnerability Exposes AI Agent Risks

A newly disclosed — and now patched — vulnerability in the fastest-growing AI agent tool in the developer ecosystem underscores the expanding risks organizations face from deploying AI in their environments without adequate security oversight or controls.

The vulnerability in OpenClaw, the open source AI agent that has seen meteoric adoption among developers since its launch last November, allowed a malicious website to hijack a developer’s AI agent without requiring any plug-ins, browser extensions, or user interaction. The vulnerability stemmed from OpenClaw’s failure to distinguish between connections coming from the developer’s own trusted apps and services and connections coming from a malicious website running in the developer’s browser.

High Severity Vulnerability

The OpenClaw team deemed the issue high severity and released a patch for it less than 24 hours after researchers at Oasis Security informed them about the flaw. “If OpenClaw is installed, update immediately,” Oasis recommended. “The fix for this vulnerability is included in version 2026.2.25 and later. Ensure all instances are updated — treat this with the same urgency as any critical security patch.”

OpenClaw, previously known as MoltBot and before that Clawdbot, is a viral open source AI agent that runs locally on a user’s system as a personal AI assistant. It integrates with messaging apps, calendars, and developer tools, and allows users to automate workflows, manage files, execute shell commands, and take a wide range of other autonomous actions. Developers can also extend its capabilities through community-built plug-ins, or “skills,” available via its marketplace, ClawHub. That combination of flexibility, local control, and a fast-growing ecosystem has made it extraordinarily popular among developers in a very short time. In just three months since launch, OpenClaw has already become the most starred project on GitHub, surpassing the React JavaScript library for building user interfaces.

A Growing List of Security Issues

But that unprecedented adoption speed has also exposed organizations to new security risks, including vulnerabilities like CVE-2026-25253, which gave attackers a way to steal authentication tokens, as well as command injection bugs and prompt injection attacks. Examples include CVE-2026-24763, CVE-2026-25157, and CVE-2026-25475.

The growing presence of malicious skills on ClawHub and SkillsMP has been another issue. Researchers at Koi Security recently found that out of 10,700 skills on ClawHub, more than 820 were malicious, a sharp increase from the 324 it had discovered just a few weeks prior in early February. Trend Micro found threat actors using 39 such skills across ClawHub and SkillsMP to distribute the Atomic macOS info stealer.

The vulnerability that Oasis Security discovered stemmed from OpenClaw’s incorrect assumption that any connection originating from localhost — that is, from within the computer itself — could be implicitly trusted, without accounting for the fact that web pages running in the browser can also originate connections to that same local address. Researchers at Oasis found that if a developer were to visit any attacker-controlled or compromised website, JavaScript on that page could silently open a WebSocket connection directly to the OpenClaw gateway.
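The defense that was missing here is a standard one: browsers always attach the page’s Origin header to a WebSocket upgrade, while a native local tool does not send one (or can send an agreed-upon value), so a gateway can tell the two apart. The following is a minimal sketch of that check, assuming a hypothetical allowlist value; it is not OpenClaw’s actual code.

```python
# Hypothetical sketch (not OpenClaw's actual code): distinguishing browser-
# originated WebSocket handshakes from trusted local clients via the Origin
# header. Browsers always attach the page's Origin to a WebSocket upgrade,
# so a gateway that ignores it cannot tell a malicious page from a local app.

TRUSTED_ORIGINS = {"app://trusted-client"}  # illustrative allowlist value

def is_handshake_trusted(headers: dict) -> bool:
    """Fail closed: reject any upgrade whose Origin is absent or unlisted."""
    origin = headers.get("Origin")
    return origin in TRUSTED_ORIGINS

# A handshake opened by JavaScript on https://evil.example carries that Origin:
print(is_handshake_trusted({"Origin": "https://evil.example"}))  # False
print(is_handshake_trusted({"Origin": "app://trusted-client"}))  # True
```

Because the check fails closed, a handshake with no Origin header at all is also rejected rather than silently trusted.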

They discovered OpenClaw had set no rate limits or failure thresholds for incorrect passwords, meaning an attacker could use brute-force methods to attempt to guess the password to the gateway without triggering any alert. Once authenticated, the attacker could essentially register any malicious script as trusted and gain full control of the developer’s device.
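The missing failure threshold could take the shape of a simple sliding-window lockout, so that password guesses against the gateway are no longer free. The sketch below uses illustrative limits and names, not OpenClaw’s actual design.

```python
import hmac
import time
from collections import defaultdict, deque

# Hypothetical sketch of the missing safeguard: a sliding-window failure
# threshold that locks a client out after repeated bad passwords. The
# limits and names are illustrative, not OpenClaw's actual design.

MAX_FAILURES = 5        # bad attempts tolerated per window
WINDOW_SECONDS = 60.0   # sliding-window length

_failures = defaultdict(deque)  # client id -> timestamps of recent failures

def check_password(client_id: str, supplied: str, expected: str) -> bool:
    now = time.monotonic()
    window = _failures[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()            # discard failures older than the window
    if len(window) >= MAX_FAILURES:
        return False                # locked out, even for a correct guess
    if hmac.compare_digest(supplied, expected):  # constant-time comparison
        return True
    window.append(now)
    return False

# Five wrong guesses exhaust the budget; the next attempt is refused
# even with the right password:
for guess in ("a", "b", "c", "d", "e"):
    check_password("browser-1", guess, "s3cret")
print(check_password("browser-1", "s3cret", "s3cret"))  # False
```

A production version would also alert on repeated failures rather than just refusing them, since a lockout being hit is itself a strong signal of an attack in progress.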

The Need for a New Approach

The vulnerability comes at a time when stakeholders have expressed growing concern about the ungoverned spread of agentic AI tools within organizations. Randolph Barr, chief information security officer (CISO) at Cequence Security, says the risks demand security barriers between AI bots and the apps, APIs, and credentials they use. OpenClaw has basic safeguards in place, like authentication, limits on which devices can connect, controls over which tools can run, and the option to sandbox it. “But those controls don’t eliminate risk if the agent runs locally with broad access to files, credentials, and connected systems,” he says in comments to Dark Reading.

“The real protection comes from layered defenses, MDM enforcement, removing admin rights, scoped credentials, API monitoring, rate limiting, and sandboxing,” Barr says. “Those measures won’t stop every exploit, but they significantly reduce blast radius and limit what an attacker can do if an agent is compromised.” 

Organizations with mature identity, API security, MDM, and logging capabilities should consider moving their security controls to the execution layer, while less mature environments should think about shifting from “authenticate and trust” to continuous verification of behavior, especially for non-human identities like AI agents, he notes.

Organizations should also treat any browser-reachable local AI gateway as an Internet-facing service, according to Jason Soroko, senior fellow at Sectigo. “Remove the browser’s path to it where possible by using Unix domain sockets or named pipes, or by interposing a native companion that owns the connection,” he says. Organizations should also enforce strict Origin allowlisting, require cryptographic client identity such as mTLS or signed challenges, and ban auto-approval based on source IP. 
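The Unix-domain-socket option works because browser JavaScript can open WebSockets to 127.0.0.1 but has no API for filesystem sockets at all, so binding the gateway there removes the browser’s path entirely. A minimal sketch of that pattern, with an illustrative path and protocol:

```python
import os
import socket
import tempfile
import threading
import time

# Hypothetical sketch of the Unix-domain-socket option Soroko mentions:
# a gateway bound to a filesystem socket is unreachable from browser
# JavaScript, which can reach 127.0.0.1 but has no API for Unix sockets.
# Only local processes running as the owning user can connect.

def serve_once(path: str) -> None:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(path)
        os.chmod(path, 0o600)  # only the owning user may connect
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"hello from gateway")

sock_path = os.path.join(tempfile.mkdtemp(), "gateway.sock")
threading.Thread(target=serve_once, args=(sock_path,), daemon=True).start()

received = b""
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    for _ in range(100):  # retry until the server thread has bound the socket
        try:
            client.connect(sock_path)
            break
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.01)
    received = client.recv(1024)
print(received)  # b'hello from gateway'
```

The `0o600` permission on the socket file adds a second layer: even another local user on a shared machine cannot connect, which TCP on localhost does not give you.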

“Then shrink the blast radius even when a session is established,” Soroko says. “Adopt a capability model that scopes what the agent can do by verb, directory, destination, and time, with step-up consent for high-risk sinks like shell execution, credential access, and large data reads.”
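The capability model Soroko describes can be sketched as a grant that scopes each agent action by verb, directory, and expiry, with high-risk verbs additionally gated behind explicit consent. All names below are illustrative, not part of any real OpenClaw API.

```python
import pathlib
import time
from dataclasses import dataclass

# Hypothetical sketch of the capability model described above: each grant
# scopes what the agent may do by verb, directory, and expiry, and
# high-risk verbs additionally require step-up consent from the user.

HIGH_RISK_VERBS = {"shell_exec", "read_credentials"}

@dataclass
class Capability:
    verb: str                # e.g. "read_file", "shell_exec"
    root: pathlib.Path       # directory the verb is confined to
    expires_at: float        # time.monotonic() deadline

def is_allowed(cap: Capability, verb: str, target: pathlib.Path,
               user_consented: bool = False) -> bool:
    if verb != cap.verb:
        return False
    if time.monotonic() > cap.expires_at:
        return False  # expired grant
    # Path confinement: the target must resolve to somewhere under the root.
    if not target.resolve().is_relative_to(cap.root.resolve()):
        return False
    # Step-up consent for high-risk sinks such as shell execution.
    if verb in HIGH_RISK_VERBS and not user_consented:
        return False
    return True

cap = Capability("read_file", pathlib.Path("/tmp"), time.monotonic() + 60)
print(is_allowed(cap, "read_file", pathlib.Path("/tmp/notes.txt")))  # True
print(is_allowed(cap, "read_file", pathlib.Path("/etc/passwd")))     # False
print(is_allowed(cap, "shell_exec", pathlib.Path("/tmp/run.sh")))    # False
```

Resolving both paths before the containment check matters: it defeats `../` traversal and symlink tricks that would otherwise let a request escape the granted directory.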


Source: www.darkreading.com…