🐾 OpenClaw

SlowMist and Bitget Report: 400+ Malicious Skills Discovered on ClawHub — Organized Supply Chain Attacks Target OpenClaw Agent Ecosystem

2 min read1 views

On March 18, 2026, blockchain security firm SlowMist and cryptocurrency exchange Bitget jointly published a comprehensive security research report analyzing the security landscape of AI Agents in Web3 trading scenarios. The report represents the first large-scale systematic analysis of AI agent threats specifically targeting automated cryptocurrency trading and the OpenClaw skill ecosystem.

Key Findings — Seven Major Security Threats:

  1. Prompt Injection Attacks: Attackers craft instructions that manipulate Agent decision-making logic. In documented cases, malicious prompts returned via MCP servers contaminated the context, inducing Agents to call wallet plugins and execute unauthorized on-chain transfers. The defining characteristic is that errors originate from manipulation of task orchestration logic, not model-generated code.

  2. Supply Chain Poisoning via Skills/Plugins: SlowMist monitored the OpenClaw official plugin hub (ClawHub) and discovered over 400 malicious Skill samples. IOC analysis revealed many samples point to a small number of fixed domains or multiple random paths under the same IP, indicating organized, large-scale attack operations rather than individual actors.

  3. Two-Stage Payload Strategy: Malicious Skills adopt a typical two-stage loading approach — the first-stage script downloads and executes a second-stage payload, reducing static detection effectiveness. One widely downloaded X (Twitter) Trends Skill contained Base64-encoded commands in its SKILL.md that, when decoded, downloaded and executed remote scripts to steal passwords, collect system information, and exfiltrate desktop documents.

  4. API Key Abuse and Permission Escalation: AI Agents connected to trading accounts with broad API permissions create risk of unauthorized trades and asset transfers.

  5. Automated Execution Errors: Non-deterministic agent behavior combined with automated trading can amplify operational mistakes into real financial losses.

  6. Memory System Manipulation: Persistent memory stores can be poisoned to influence future agent decisions across sessions.

  7. Runtime Environment Misconfigurations: Improper sandbox configurations expose sensitive data and API credentials.

The report emphasizes that unlike traditional trading security focused on credential exposure and phishing, AI Agent architectures introduce fundamentally new attack surfaces where expanded capabilities mean expanded risks. Asset operations in Web3 are high-value and irreversible, making the stakes particularly high.

SlowMist recommends: isolating agent execution environments, implementing strict plugin verification, limiting API permissions to minimum required scope, and establishing human-in-the-loop controls for high-value transactions.

Share this article

🧠 Stay Updated on AI Agents

Get weekly insights on agentic AI, networks and infrastructure. No spam.

Join 500+ AI builders. Unsubscribe anytime.

Deploy Your AI Agent Today

Launch a managed OpenClaw instance in minutes

Request demo →