Proofpoint Launches AI Security Platform with Agent Integrity Framework — First Intent-Based Detection for Autonomous Agent Behavior

Proofpoint has announced Proofpoint AI Security, a comprehensive security solution that introduces intent-based detection for AI agent behavior across enterprise environments. The platform, built on technology from recently acquired Acuvity, represents a new approach to securing autonomous AI agents by evaluating whether agent actions align with the original user intent rather than just monitoring traffic patterns.
The core innovation is a set of intent-based detection models that continuously evaluate whether AI behavior, whether initiated by a human or an autonomous agent, aligns with the original request, defined policies, and intended purpose. Traditional security tools can see traffic, identities, and permissions, but cannot determine whether AI actions are contextually appropriate.
Key capabilities:
- Intent-based detection across all AI interactions (human-initiated and autonomous)
- Multi-surface control points covering endpoints, browser extensions, and MCP connections
- Discovery of sanctioned and unsanctioned AI tools including OpenClaw, Ollama, ChatGPT, and MCP servers
- Observation of prompts, responses, and data flows during AI tool usage
- Runtime inspection and policy enforcement during live AI interactions
- Access controls and guardrails on AI usage
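To make the idea of intent-based detection concrete, here is a minimal, hypothetical sketch of the concept: instead of checking only whether an agent *has* permission to act, each action is checked against the scope implied by the original user request. The names (`AgentAction`, `is_intent_aligned`) and the string-based scope model are illustrative assumptions, not Proofpoint's actual API, which the announcement describes as using richer contextual models.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str        # e.g. "email", "crm", "filesystem"
    operation: str   # e.g. "read", "send", "delete"

def is_intent_aligned(request_scope: set[str], action: AgentAction) -> bool:
    """Allow an action only if its tool/operation pair falls inside the
    scope implied by the original request, regardless of whether the
    agent technically holds the permission."""
    return f"{action.tool}:{action.operation}" in request_scope

# A request like "summarize my unread mail" implies read-only email access.
scope = {"email:read"}

print(is_intent_aligned(scope, AgentAction("email", "read")))   # aligned
print(is_intent_aligned(scope, AgentAction("email", "send")))   # divergent
print(is_intent_aligned(scope, AgentAction("crm", "delete")))   # divergent
```

In this toy model, an agent with broad credentials that tries to send mail or touch the CRM while fulfilling a read-only request would be flagged as diverging from intent, which is the class of behavior the platform's runtime inspection is meant to catch.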
The Agent Integrity Framework, based on Acuvity research, defines five pillars: Intent Alignment, Identity and Attribution, Behavioral Consistency, Auditability, and Operational Transparency. It includes a five-phase maturity model from initial discovery through runtime enforcement.
Acuvity research cited by Proofpoint indicates that 70% of organizations lack optimized AI governance, and 50% expect AI-related data loss within 12 months. The solution specifically targets risks like agentic privilege escalation and zero-click prompt injection attacks, where a single AI request can trigger dozens of autonomous actions across multiple systems at machine speed without human oversight.
CEO Sumit Dhawan emphasized that humans and AI agents face similar risks: both can be manipulated, and both can take actions that diverge from their intended purpose. The platform is designed to protect people, defend data, and govern AI agents together.
This is particularly significant for developer environments where agent-connected coding assistants, plugins, and MCP-integrated tools are accelerating adoption and increasing the need for visibility and policy enforcement.