
GitGuardian State of Secrets Sprawl 2026: 28.65 Million Hardcoded Secrets Leaked on GitHub in 2025 — AI Tools and MCP Config Files Identified as Fastest-Growing Credential Leak Vectors


GitGuardian published its State of Secrets Sprawl 2026 report on March 27, 2026, revealing a staggering 28.65 million new hardcoded secrets in public GitHub commits during 2025. The report identifies AI development workflows and agent frameworks as the fastest-growing category of credential leaks.

KEY FINDINGS:

  1. SCALE OF THE PROBLEM:
  • 28.65 million new hardcoded secrets exposed in public GitHub commits in 2025
  • Multi-year upward trend continues unabated
  • Internal repositories show even higher rates of secret exposure than public repos
  • Leaked credentials are directly tied to production systems and operational access
  2. AI IS MAKING IT WORSE:
  • AI development projects connect to model providers, orchestration layers, retrieval systems, and agent frameworks — each requiring its own authentication
  • The fastest-growing categories of leaked secrets tie directly to AI services
  • AI-assisted coding (Copilot, Claude Code, etc.) produces code with a higher rate of exposed secrets than manually written code
  • As GitGuardian states: "AI makes it easier and faster to build, integrate, and ship. But every new tool, API, workflow, agent, and service account also creates new credentials to manage and more surface area for attackers to target"
  3. MCP CONFIG FILES — NEW LEAK VECTOR:
  • Credentials are appearing in MCP (Model Context Protocol) configuration files — a new category specific to AI agent deployments
  • MCP config files often live close to application logic and can be committed alongside code
  • As MCP becomes the standard interoperability layer for AI agents (now adopted by OpenAI Codex, Claude Code, and Gemini CLI), this vector will grow
  4. COLLABORATION TOOLS AS LEAK CHANNELS:
  • Slack, Jira, and Confluence contain credentials shared during troubleshooting or routine coordination
  • These entries can grant immediate access to production systems
  • Because these channels sit outside standard scanning processes, sensitive data accumulates over time
  5. SELF-HOSTED INFRASTRUCTURE:
  • Internet-exposed GitLab instances and Docker registries contain large volumes of credentials
  • They operate outside standard scanning processes
  • Secrets can remain usable long after discovery
  6. PERSISTENCE PROBLEM:
  • Leaked credentials remain active for years after initial exposure
  • Remediation requires changes across codebases, deployment pipelines, and shared configurations
  • These dependencies slow down credential rotation
  • Prioritization methods that rely only on validation status leave part of the risk unaddressed
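Remediation starts with detection. A minimal sketch of a pre-commit-style check for a few well-known credential formats might look like the following — the regexes are illustrative only, not GitGuardian's detectors, which use far broader and more precise rule sets:

```python
import re

# Illustrative patterns for a few well-known credential formats.
# Real scanners use hundreds of detectors plus entropy checks.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of credential patterns found in a blob of text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

leaky = 'aws_key = "AKIA' + '0' * 16 + '"'
print(scan(leaky))  # → ['aws_access_key']
```

A check like this only catches literal strings in one file at commit time; the report's point is that secrets also surface in chat tools, tickets, and self-hosted registries that such scans never touch.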

AI AGENT IMPLICATIONS: The report draws a direct connection between the explosion of AI agent deployments and secrets sprawl. Each agent connection to an external service requires authentication. An enterprise running multiple AI agents — each connecting to LLM APIs, databases, SaaS tools via MCP, and internal systems — dramatically multiplies the credential surface area.
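To make the MCP leak vector concrete, here is a sketch of a check that flags literal secrets in an MCP-style config. The field layout follows the common "mcpServers" convention, and the token value is a fabricated example; the convention assumed here is that safe configs reference environment variables (`${VAR}`) rather than embedding values:

```python
import re

# Illustrative MCP-style config; the github token value is fake.
config = {
    "mcpServers": {
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"],
            # Hardcoded literal: leaks the moment this file is committed.
            "env": {"GITHUB_TOKEN": "ghp_exampleexampleexampleexample12345"},
        },
        "postgres": {
            "command": "mcp-server-postgres",
            # Environment reference: safe to commit alongside code.
            "env": {"DATABASE_URL": "${DATABASE_URL}"},
        },
    }
}

# A value is treated as hardcoded unless it is a ${VAR} reference.
REFERENCE = re.compile(r"^\$\{[A-Z_][A-Z0-9_]*\}$")

def hardcoded_secrets(cfg: dict) -> list[tuple[str, str]]:
    """Return (server, variable) pairs whose env values are literals."""
    findings = []
    for server, spec in cfg.get("mcpServers", {}).items():
        for var, value in spec.get("env", {}).items():
            if not REFERENCE.match(value):
                findings.append((server, var))
    return findings

print(hardcoded_secrets(config))  # → [('github', 'GITHUB_TOKEN')]
```

Because every agent adds another block like the ones above, each with its own `env` section, the number of places a literal credential can hide grows linearly with the agent fleet — exactly the surface-area multiplication the report describes.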

The report notes that "when organizations scale creation faster than governance, secrets begin to spread everywhere." This is precisely what is happening with enterprise AI agent deployments, where 85% of enterprises are experimenting with agents (per Cisco at RSAC 2026) but security governance lags far behind.

TIMING: The report arrives alongside the LiteLLM supply chain attack (March 24) where stolen AI proxy credentials were the primary target, and the RSAC 2026 consensus that AI infrastructure is the fastest-growing attack surface. Together, they paint a picture of an AI ecosystem where credentials are proliferating faster than security teams can track them.
