RSAC 2026 Defines AI Agents as Third Identity Category — Cisco Survey Shows 85% Adopting Agents But 83% Deploying Faster Than Security Can Assess

RSA Conference 2026 produced the cybersecurity industry's clearest articulation yet of the agentic AI governance crisis. Multiple sessions converged on a single conclusion: AI agents represent a fundamentally new identity category that breaks both human IAM and machine service account models, and enterprises are deploying them far faster than security teams can govern them.
Cisco's Matt Caulfield, VP of Product Management for Identity, presented survey data from 200 IT and security leaders with stark findings: 85% of organizations are adopting AI agents, but only 5% have scaled them to production. The primary barrier is not technical capability — it is trust, security, and unresolved access control questions. Most critically, 83% of security leaders acknowledged that business units are deploying agents faster than security teams can assess them.
The conference's most significant conceptual contribution was framing agents as a third identity category. Caulfield argued that agents combine 'the worst of both worlds — they have broad access, just like humans do' but operate at machine speed without judgment or institutional accountability. Applying existing service account models to them is fundamentally wrong, he stated, noting 'Nobody has service accounts under control. That's crazy.'
Cisco's Kevin Kennedy extended this to a paradigm shift from 'access control' to 'action control': 'We need to reorient our thinking from Access Control to action control — where we scrutinize not just what is that human, machine or agent accessing, but what actions are they trying to take in real time.' The practical model: task-scoped, just-in-time least privilege.
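The shift Kennedy describes can be made concrete. The sketch below is a minimal, illustrative model of action control with task-scoped, just-in-time least privilege: an agent receives a short-lived grant tied to one task, and every action is checked against that grant at execution time. All names (`TaskGrant`, `authorize`, the `crm:*` action strings) are hypothetical, not any vendor's API.

```python
import time
from dataclasses import dataclass

@dataclass
class TaskGrant:
    """A short-lived, task-scoped permission grant for one agent."""
    agent_id: str
    task_id: str
    allowed_actions: frozenset  # e.g. {"crm:read", "crm:update_contact"}
    expires_at: float           # epoch seconds; grant is dead after this

def authorize(grant: TaskGrant, agent_id: str, action: str) -> bool:
    """Action control: scrutinize the *action* being attempted, not just
    the identity, against a just-in-time grant that expires with the task."""
    return (
        grant.agent_id == agent_id
        and time.time() < grant.expires_at
        and action in grant.allowed_actions
    )

# Issue a grant scoped to a single task, valid for five minutes.
grant = TaskGrant(
    agent_id="agent-42",
    task_id="update-contact-7",
    allowed_actions=frozenset({"crm:read", "crm:update_contact"}),
    expires_at=time.time() + 300,
)

print(authorize(grant, "agent-42", "crm:update_contact"))  # True: in scope
print(authorize(grant, "agent-42", "crm:delete_account"))  # False: out of scope
```

The design point is that the grant, not the agent's standing identity, carries the privileges: once the task ends or the TTL lapses, the agent holds nothing, which is the opposite of the long-lived, broadly scoped service account pattern Caulfield criticized.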
Tanya Janca from She Hacks Purple reinforced the shadow AI dimension: organizations' approved tooling lists don't reflect actual usage. Three attack vectors received consistent attention: prompt injection, tool poisoning via compromised MCP servers, and identity impersonation via over-provisioned credentials. The dominant prescription was foundational governance: inventory agents, assign human ownership, define permissible actions, and elevate AI risk to board-level accountability.
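The governance baseline above reduces to a simple invariant that can be audited mechanically: every agent has a human owner and an explicit action scope. The sketch below is illustrative only; the registry fields and agent names are assumptions, not a real inventory schema.

```python
# Each inventoried agent records a human owner and its permissible actions.
agent_inventory = [
    {"agent": "invoice-bot",
     "owner": "j.doe@example.com",
     "permitted_actions": ["erp:read_invoice", "erp:flag_invoice"]},
    {"agent": "sales-helper",
     "owner": None,              # shadow agent: nobody accountable for it
     "permitted_actions": []},   # and no defined action scope
]

def audit(inventory):
    """Return agents that violate the governance baseline:
    missing a human owner, or lacking a defined action scope."""
    return [entry["agent"] for entry in inventory
            if entry["owner"] is None or not entry["permitted_actions"]]

print(audit(agent_inventory))  # ['sales-helper']
```

An audit like this is what turns "approved tooling lists don't reflect actual usage" into a measurable gap: anything discovered in the environment but failing the check is shadow AI by definition.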