Hundreds of Clawdbot instances are exposed right now. No authentication. No VPN. Just raw, unauthenticated control endpoints facing the open internet. This isn’Hundreds of Clawdbot instances are exposed right now. No authentication. No VPN. Just raw, unauthenticated control endpoints facing the open internet. This isn’

900+ Clawdbot Instances Exposed: What DeFi Teams Need to Know About AI Agent Security

2026/01/28 21:13
10 min read

Hundreds of Clawdbot instances are exposed right now. No authentication. No VPN. Just raw, unauthenticated control endpoints facing the open internet.

This isn’t theoretical. Security researcher Jamieson O’Reilly documented the findings on January 26, 2026, and blockchain security firm SlowMist issued an urgent advisory within days. The vulnerability stems from a simple misconfiguration, one that transforms a productivity tool into an open door for attackers.

Why This Matters for DeFi and Crypto Teams

DeFAI, the convergence of AI agents and decentralized finance, is one of the fastest-growing narratives in crypto. Platforms like Griffain, HeyAnon, and ChainGPT are building AI agents that can execute trades, manage wallets, and interact with smart contracts through natural language commands. The market cap for DeFAI tokens exceeded $1.3 billion in early 2025, and institutional adoption continues accelerating.

But here’s the problem: the security models for these agents haven’t kept pace with their capabilities.

When an AI agent can execute transactions, sign messages, access wallet keys, retrieve API secrets from environment files, interact with internal RPC endpoints, or browse and interact with DeFi protocols, an unauthenticated public endpoint isn’t just a vulnerability. It’s a self-hosted drain contract with natural language support.

Clawdbot: A Case Study in Agent Exposure

Clawdbot is an open-source AI assistant built by developer Peter Steinberger that runs locally on a user’s device. The tool went viral over the weekend of January 24–25, 2026, with online chatter reaching what Mashable described as “viral status.”

The agent’s gateway connects large language models to messaging platforms and executes commands on behalf of the user through a web interface called “Clawdbot Control.” Unlike cloud-hosted chatbots, Clawdbot has full system access to the user’s computer. It can read and write files, execute commands, run scripts, and control browsers.

Clawdbot’s own FAQ acknowledges the risk directly: “Running an AI agent with shell access on your machine is… spicy. There is no ‘perfectly secure’ setup.”

The Vulnerability

The authentication bypass vulnerability occurs when Clawdbot’s gateway is placed behind a misconfigured reverse proxy. O’Reilly explained that forwarded headers can trick the agent into treating external requests as local, which auto-approves WebSocket sessions. The result is that any publicly reachable Clawdbot instance effectively offers attackers privileged remote access.

Using internet scanning tools like Shodan, O’Reilly found exposed servers by searching for distinctive fingerprints in the HTML. The query took seconds and returned hundreds of hits. Researchers observed between 900 and 1,900 unsecured control dashboards exposed within days of the tool’s viral spread.

What Attackers Found

In multiple cases, WebSocket handshakes granted immediate access to configuration data containing:

  • Anthropic API keys for accessing Claude
  • Telegram bot tokens and Slack OAuth credentials
  • Months of conversation histories across all connected chat platforms
  • Command execution capabilities on the host system

One particularly alarming case involved a user who had set up their Signal messenger account on a publicly accessible Clawdbot server, with pairing credentials stored in globally readable temporary files. Another exposed system belonging to an AI software agency allowed unauthenticated users to execute arbitrary commands on a host running with root privileges and no privilege separation.

From Discovery to Exploit in Five Minutes

Archestra AI CEO Matvey Kukuy demonstrated the severity of the vulnerability through a prompt injection attack. The process was straightforward:

  1. Send Clawdbot an email containing a prompt injection payload
  2. Ask Clawdbot to check the email
  3. Receive the private key from the compromised machine

Total time: five minutes.

https://medium.com/media/c7ed083a1a5fcd6af9912427f5504dd3/href

This attack worked because Clawdbot, like most AI agents, cannot reliably distinguish between legitimate instructions and malicious ones embedded in external content. When an agent reads an email, document, or webpage containing hidden instructions, it may execute those instructions as if they came from the user.

Understanding the Attack Surface

The Clawdbot exposure illustrates a broader pattern affecting the entire AI agent ecosystem. These attacks exploit three fundamental weaknesses.

Weakness 1: Blended Context Streams

AI agents combine system prompts, user inputs, retrieved documents, tool metadata, memory entries, and external content in a single context window. To the model, this appears as one continuous stream of tokens. OWASP’s 2025 Top 10 for LLM Applications ranks prompt injection as the number one vulnerability, appearing in over 73% of production AI deployments assessed during security audits.

The problem is structural. If a malicious instruction appears anywhere in the context stream, the model may treat it as legitimate. This collapses the trust boundaries that traditional software depends on.

Weakness 2: Indirect Prompt Injection

Direct prompt injection requires an attacker to type malicious instructions into a visible input. Indirect prompt injection is far more dangerous because it targets the places where AI systems collect information. A poisoned email, a compromised document, or malicious content on a webpage can all carry hidden instructions that the agent will follow.

Security researchers at Lakera documented real-world examples in late 2025. A Google Docs file triggered an agent inside an IDE to fetch attacker-authored instructions from an MCP server, execute a Python payload, and harvest secrets, all without any user interaction. In Q4 2025, researchers observed over 91,000 attack sessions targeting AI infrastructure and LLM deployments.

Weakness 3: Excessive Privilege

Traditional security practice limits access based on need. AI agents violate this principle by design because they require read/write file access, credential storage, command execution, and interaction with external services to be useful. When these agents are exposed to the internet or compromised through supply chains, attackers inherit all of that access.

As Hudson Rock’s research team noted: “ClawdBot represents the future of personal AI, but its security posture relies on an outdated model of endpoint trust. Without encryption-at-rest or containerization, the ‘Local-First’ AI revolution risks becoming a goldmine for the global cybercrime economy.”

The DeFi-Specific Threat

The risks compound when AI agents operate in financial contexts. Consider what a compromised DeFi agent might access.

Wallet Infrastructure: Private keys, seed phrases, signing capabilities, and transaction approval mechanisms. If an attacker gains write access to an agent’s configuration, they can poison its memory by modifying files like SOUL.md or MEMORY.md to permanently alter the AI’s behavior, force it to trust malicious domains, or exfiltrate future data.

Protocol Interactions: Smart contract calls, liquidity pool management, yield farming positions, and governance voting. An agent with these capabilities could drain funds, manipulate votes, or execute front-running attacks.

Integration Credentials: API tokens for exchanges, DeFi protocols, analytics platforms, and communication tools. These credentials often provide access to organizational knowledge bases, internal wikis, and operational systems.

In September 2024, users of the Telegram-based trading bot Banana Gun lost 563 ETH (approximately $1.9 million) through an exploited oracle vulnerability that allowed attackers to intercept messages and gain unauthorized wallet access. The attack demonstrated how agent infrastructure failures can directly translate to financial losses.

Research from Anthropic published in December 2025 adds another dimension to this threat. The company tested AI models against SCONE-bench, a dataset of 405 smart contracts that were successfully exploited between 2020 and 2025. AI agents managed to exploit just over half of the contracts, with simulated stolen funds reaching $550.1 million. More concerning: two leading models independently discovered previously unknown zero-day vulnerabilities in recently deployed contracts and generated working exploit scripts.

The same AI systems capable of probing DeFi smart contracts can also strengthen codebases when used by auditors. But the research underscores that builders should update their mental model of attackers. Systems that can autonomously reason about smart contract behavior, construct payloads, and adapt to feedback raise the bar for effective security practices.

What You Should Do

The Clawdbot exposure isn’t unique to one tool. It represents a pattern across the AI agent ecosystem where frameworks are moving fast while security defaults lag behind. These mitigations apply to any agent infrastructure you deploy.

Immediate Actions

Never expose agent control interfaces publicly. Bind your gateway to loopback (localhost only) and access it through a VPN, Tailscale, or zero-trust network. If you must allow remote access, implement strict IP allowlisting and token-based authentication. As SlowMist advised in their Clawdbot advisory: “We strongly recommend applying strict IP whitelisting on exposed ports.”

Audit your own attack surface before someone else does. Run Shodan queries against your infrastructure to identify exposed services. Search for your agent’s distinctive fingerprints like control panel titles, default ports, and service banners. If you find exposed instances, take them offline immediately and rotate all credentials.

Apply the principle of least privilege. AI agents should not have access to private keys, secrets, or sensitive endpoints unless absolutely necessary for their function. Separate agent runtimes from production infrastructure and signing infrastructure. Use read-only permissions wherever possible.

Architectural Changes

Implement session-based permissions. Instead of granting agents blanket access, define time-limited sessions with specific capabilities. Cryptographic verification of agent actions should be standard, with the ability to revoke access in real-time.

Require human approval for sensitive operations. Any high-risk actions like financial transactions, system modifications, or external communications should require explicit human confirmation. Configuration-based auto-approval systems can be compromised, and research from 2025 demonstrated this repeatedly.

Isolate untrusted content. Agents that process external emails, documents, or web content should do so in sandboxed environments with no access to credentials or sensitive systems. Treat every piece of external content as potentially containing malicious instructions.

Monitor agent behavior continuously. Establish baselines for normal operations and alert on anomalies. Real-time monitoring can detect suspicious activity before funds move. Venus Protocol’s successful incident response in September 2025 demonstrated this approach effectively.

Operational Security

Disable debug features in production. Commands like /reasoning or /verbose can expose internal reasoning, tool arguments, URLs, and data the model processed. Keep them disabled in any environment facing untrusted users.

Rotate credentials regularly. API keys, bot tokens, OAuth credentials, and signing keys should be rotated on a defined schedule. If you suspect exposure, rotate immediately.

Run regular adversarial testing. The rapid evolution of attack techniques means yesterday’s defenses may be obsolete today. Establish ongoing red team programs specifically focused on AI and agentic AI security.

The Bigger Picture

In Web2, the industry learned painfully not to expose admin panels to the internet. In Web3, teams are speedrunning the same mistakes, except now the “admin panel” understands natural language, can approve token transfers, execute arbitrary code, exfiltrate secrets, and pivot to internal infrastructure.

The DeFAI narrative is exciting. AI agents that can automate yield farming, manage portfolios, and execute complex DeFi strategies represent a genuine advancement in what’s possible. But if teams are handing AI agents signing authority without basic access controls, they’re not building the future of finance. They’re automating their own incidents.

Heather Adkins, VP of Security Engineering at Google Cloud, put it bluntly when discussing the Clawdbot situation: “My threat model is not your threat model, but it should be. Don’t run Clawdbot.”

The frameworks are moving fast. The security defaults are not. If you deploy an AI agent on the public internet without strict authentication, you haven’t built an assistant. You’ve deployed a webshell that takes polite instructions.

Reach out to me on X, if you wanna chat moe on this topic.

https://x.com/__Raiders


900+ Clawdbot Instances Exposed: What DeFi Teams Need to Know About AI Agent Security was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact service@support.mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

‘One Battle After Another’ Becomes One Of This Decade’s Best-Reviewed Movies

‘One Battle After Another’ Becomes One Of This Decade’s Best-Reviewed Movies

The post ‘One Battle After Another’ Becomes One Of This Decade’s Best-Reviewed Movies appeared on BitcoinEthereumNews.com. Topline Critics have hailed Paul Thomas Anderson’s “One Battle After Another,” starring Leonardo DiCaprio, as a “masterpiece,” indicating potential Academy Awards success as it boasts near-perfect scores on review aggregators Metacritic and Rotten Tomatoes based on early reviews. Leonardo DiCaprio stars in “One Battle After Another,” which opens in theaters next week. (Photo by Jeff Spicer/Getty Images for Warner Bros. Pictures) Getty Images for Warner Bros. Pictures Key Facts “One Battle After Another” boasts a nearly perfect 97 out of a possible 100 on Metacritic based on its first 31 reviews, making it the highest-rated movie of this decade on Metacritic’s best movies of all time list. The movie also has a 96% score on Rotten Tomatoes based on the first 56 reviews, with only two reviews considered “rotten,” or negative. The Associated Press hailed the movie as “an American masterpiece,” noting the movie touches on topical political themes and depicts a society where “gun violence, white power and immigrant deportations recur in an ongoing dance, both farcical and tragic.” The movie stars DiCaprio as an ex-revolutionary who reunites with former accomplices to rescue his 16-year-old daughter when she goes missing, and Anderson has said the movie was inspired by the 1990 novel, “Vineland.” Most critics have described the movie as an action thriller with notable chase scenes, which jumps in time from DiCaprio’s character’s early days with fictional revolutionary group, the French 75, to about 15 years later, when he is pursued by foe and military leader Captain Steven Lockjaw, played by Sean Penn. The Warner Bros.-produced film was made on a big budget, estimated to be between $130 million and $175 million, and co-stars Penn, Benicio del Toro, Regina Hall and Teyana Taylor. When Will ‘one Battle After Another’ Open In Theaters And Streaming? The move opens in…
Share
BitcoinEthereumNews2025/09/18 07:35
Best Crypto to Buy as Saylor & Crypto Execs Meet in US Treasury Council

Best Crypto to Buy as Saylor & Crypto Execs Meet in US Treasury Council

The post Best Crypto to Buy as Saylor & Crypto Execs Meet in US Treasury Council appeared on BitcoinEthereumNews.com. Michael Saylor and a group of crypto executives met in Washington, D.C. yesterday to push for the Strategic Bitcoin Reserve Bill (the BITCOIN Act), which would see the U.S. acquire up to 1M $BTC over five years. With Bitcoin being positioned yet again as a cornerstone of national monetary policy, many investors are turning their eyes to projects that lean into this narrative – altcoins, meme coins, and presales that could ride on the same wave. Read on for three of the best crypto projects that seem especially well‐suited to benefit from this macro shift:  Bitcoin Hyper, Best Wallet Token, and Remittix. These projects stand out for having a strong use case and high adoption potential, especially given the push for a U.S. Bitcoin reserve.   Why the Bitcoin Reserve Bill Matters for Crypto Markets The strategic Bitcoin Reserve Bill could mark a turning point for the U.S. approach to digital assets. The proposal would see America build a long-term Bitcoin reserve by acquiring up to one million $BTC over five years. To make this happen, lawmakers are exploring creative funding methods such as revaluing old gold certificates. The plan also leans on confiscated Bitcoin already held by the government, worth an estimated $15–20B. This isn’t just a headline for policy wonks. It signals that Bitcoin is moving from the margins into the core of financial strategy. Industry figures like Michael Saylor, Senator Cynthia Lummis, and Marathon Digital’s Fred Thiel are all backing the bill. They see Bitcoin not just as an investment, but as a hedge against systemic risks. For the wider crypto market, this opens the door for projects tied to Bitcoin and the infrastructure that supports it. 1. Bitcoin Hyper ($HYPER) – Turning Bitcoin Into More Than Just Digital Gold The U.S. may soon treat Bitcoin as…
Share
BitcoinEthereumNews2025/09/18 00:27
Google and PayPal Team Up to Power Next-Gen Commerce for Billions

Google and PayPal Team Up to Power Next-Gen Commerce for Billions

TLDR: Google and PayPal signed a multiyear partnership to integrate payments across Google platforms and boost digital commerce experiences. PayPal’s checkout, payouts, and Hyperwallet will be embedded into Google products, including Ads, Play, and Cloud services. The partnership uses Google’s AI to create agent-based shopping tools and secure, frictionless payment solutions for users worldwide. PayPal [...] The post Google and PayPal Team Up to Power Next-Gen Commerce for Billions appeared first on Blockonomi.
Share
Blockonomi2025/09/18 16:15