🤖 Agentic AI

Arkose Labs Report: 97% of Enterprises Expect Material AI Agent Security Incident Within 12 Months, Only 6% of Budgets Allocated


On April 2, 2026, Arkose Labs published its 2026 Agentic AI Security Report, based on a global survey of 300 enterprise leaders across security, fraud, identity, and AI functions, including organizations operating at Fortune 100 scale across North America, Europe, and Asia-Pacific.

Key findings:

  1. NEAR-UNIVERSAL THREAT EXPECTATION: 97% of respondents expect a material AI-agent-driven security or fraud incident within the next 12 months. Nearly half expect one within six months. This is not a theoretical concern — it is a consensus prediction from the people responsible for enterprise security.

  2. MASSIVE BUDGET-AWARENESS GAP: Only 6% of security budgets are currently allocated to AI agent risk. One in ten organizations does not track AI-agent risk separately at all. Over half (57%) report having no formal AI-agent governance controls in place today.

  3. DETECTION CRISIS: More than 70% of security teams are not confident their tools will scale as AI-driven attacks evolve. Model drift, adaptive bypass techniques, and fragmented signals across systems are cited as primary concerns.

  4. ATTRIBUTION IMPOSSIBLE: Only 26% of enterprise leaders are very confident they could definitively prove that an AI agent caused a security or fraud incident. Movement between interconnected systems resembles legitimate operational behavior, making forensics extremely difficult.

  5. AI AGENTS AS INSIDER THREATS: 87% of enterprise leaders agree that AI agents operating with legitimate credentials pose a greater insider threat risk than human employees. Traditional security models assumed insider threats come from people — AI agents now operate through service accounts, API tokens, and application identities with significant privileges.

  6. GOVERNANCE VACUUM: 57% have no formal governance controls for AI agents today, yet 88% expect defined or mature frameworks within three years. This three-year window is the period of maximum exposure.

Frank Teruel, COO of Arkose Labs, stated: "In the rush to benefit from the amazing productivity and efficiency gains that agentic AI represents, many companies deployed it broadly before fully considering the identity, security and governance issues involved."

The report identifies three critical operational vulnerabilities: the Detection Illusion (current tools will not hold), the Attribution Crisis (cannot prove agent involvement), and the Governance Vacuum (no formal controls despite recognized risk).

Recommendations include: integrating security leadership into AI deployment from the start, treating non-human identities (NHIs) as first-class security entities with the same rigor as human accounts, and establishing visibility into automated decision chains with proper telemetry and logging infrastructure.
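The last two recommendations, treating non-human identities (NHIs) with the same rigor as human accounts and logging automated decision chains, can be illustrated with a minimal sketch. The report does not prescribe an implementation; the class and field names below (`AgentIdentity`, `AgentAuditLog`, `trace_id`, and the scope model) are illustrative assumptions, not part of any published API.

```python
import time
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity registered like a human account:
    owned by a team and granted least-privilege scopes."""
    agent_id: str
    owner_team: str
    scopes: tuple  # e.g. ("read_invoice", "flag_invoice")


@dataclass
class AgentActionEvent:
    """One step in an automated decision chain, tagged with the
    acting agent and a trace ID so incidents can be attributed."""
    agent_id: str
    action: str
    target: str
    trace_id: str
    timestamp: float = field(default_factory=time.time)


class AgentAuditLog:
    """Records agent actions, enforcing scopes at write time."""

    def __init__(self):
        self._events = []

    def record(self, identity, action, target, trace_id):
        # Deny out-of-scope actions rather than silently logging them.
        if action not in identity.scopes:
            raise PermissionError(
                f"{identity.agent_id} lacks scope {action!r}")
        event = AgentActionEvent(
            identity.agent_id, action, target, trace_id)
        self._events.append(event)
        return event

    def chain(self, trace_id):
        """Reconstruct one decision chain for forensic review."""
        return [e for e in self._events if e.trace_id == trace_id]
```

In practice the same pattern would sit in front of an identity provider and a central log pipeline; the point is that every agent action carries a durable identity and trace ID, which is exactly the visibility the attribution findings say most enterprises lack today.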
