OpenAI Codex Command Injection Vulnerability Exposed GitHub OAuth Tokens — BeyondTrust Phantom Labs Discovery

On March 30, 2026, BeyondTrust Phantom Labs published findings on a critical command injection vulnerability in OpenAI Codex — the cloud-based AI coding agent accessible through ChatGPT. The vulnerability allowed attackers to steal GitHub User Access Tokens by manipulating repository branch names during task creation.
Technical Details: When a user prompted Codex to analyze a codebase, the platform transmitted an HTTP POST request containing the environment identifier and the branch name. The backend lacked adequate input sanitization on the branch name parameter, allowing threat actors to embed shell metacharacters directly into the branch name.
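The flaw follows a well-known pattern: an attacker-controlled string spliced into a shell command line. A minimal Python sketch of the vulnerable shape versus a safer construction (function names and the rejection rules are illustrative, not OpenAI's actual backend code):

```python
def build_fetch_command_unsafe(branch: str) -> str:
    # VULNERABLE (illustrative): the branch name is interpolated into a
    # shell string, so metacharacters like ';' or '$(...)' execute as
    # commands when the string reaches a shell.
    return f"git fetch origin {branch}"

def build_fetch_command_safe(branch: str) -> list[str]:
    # Safer: validate the name, then pass it as a single argv element so
    # no shell ever parses it. The character blocklist is illustrative.
    if not branch or branch.startswith("-"):
        raise ValueError(f"rejected branch name: {branch!r}")
    if any(c in branch for c in ";&|`$<>(){}'\"\\ \t\n"):
        raise ValueError(f"rejected branch name: {branch!r}")
    return ["git", "fetch", "origin", branch]
```

The argv-list form would then be passed to a process-spawning API without `shell=True`, so the branch name is only ever a literal argument.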
Attack Vector:
- Attacker creates a malicious branch name containing command injection payload
- Victim uses Codex to analyze the repository
- The injection payload executes, writing the Git remote URL and its embedded OAuth token to a file
- Asking the Codex agent to read and return the file contents then exfiltrates the developer's GitHub OAuth token in cleartext
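The chain above can be simulated harmlessly on a POSIX system: the payload below substitutes an `echo` of a fabricated token URL for the real `git remote get-url origin`, and the "backend" is a naive interpolation into a `shell=True` call (all names and the token are fabricated for illustration):

```python
import os
import subprocess
import tempfile

# Stand-in for `git remote get-url origin`, so the demo needs no repo;
# the token "ghu_FAKE" is fabricated.
fake_remote = "echo https://x-access-token:ghu_FAKE@github.com/org/repo"

# Hypothetical branch name in the shape the advisory describes: the text
# after ';' runs as a second shell command, redirecting output to a file.
payload = f"main; {fake_remote} > loot.txt"

with tempfile.TemporaryDirectory() as workdir:
    # Simulated vulnerable backend: naive interpolation plus shell=True.
    subprocess.run(f"echo fetching {payload}", shell=True, cwd=workdir, check=True)
    # The injected redirect ran: loot.txt now holds the remote URL, token included.
    leaked = open(os.path.join(workdir, "loot.txt")).read().strip()
```

From here, a prompt asking the agent to read `loot.txt` back completes the exfiltration.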
Scope of Impact: The vulnerability affected multiple Codex surface areas:
- Codex cloud (ChatGPT integration)
- Codex CLI
- Codex SDK
- IDE integrations
Sensitive materials at risk included OpenAI API keys, ID tokens, access tokens, refresh tokens, and associated account identifiers.
Scalability Concern: Adversaries could automate and scale this attack by generating obfuscated malicious branches within shared organizational repositories, targeting multiple developers across enterprise environments.
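One low-cost countermeasure on the defender side is screening branch names for shell metacharacters before any agent touches a shared repository. A heuristic sketch (the pattern and function name are illustrative, not from the advisory):

```python
import re

# Characters with meaning to a POSIX shell; a legitimate git branch name
# has no business containing any of them (git itself forbids several).
SHELL_META = re.compile(r"[;&|`$<>(){}\\'\"\s]")

def suspicious_branches(branches: list[str]) -> list[str]:
    """Return branch names that embed shell metacharacters."""
    return [b for b in branches if SHELL_META.search(b)]
```

A CI job could feed this the output of `git branch -r` and alert on any hit, catching obfuscated malicious branches before a developer points an agent at them.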
Remediation: OpenAI has fully remediated the vulnerability across all affected applications following responsible disclosure. BeyondTrust noted that organizations must treat AI execution containers as defined security boundaries, enforcing least-privilege access and continuously monitoring API logs.
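Least privilege can be checked mechanically: for OAuth and classic tokens, GitHub reports a token's granted scopes in the `X-OAuth-Scopes` response header, so an audit job can diff them against an allowlist. A sketch (the allowlist is illustrative):

```python
def excess_scopes(oauth_scopes_header: str, allowed: set[str]) -> set[str]:
    """Return granted OAuth scopes beyond the allowlist.

    `oauth_scopes_header` is the comma-separated value GitHub returns in
    the X-OAuth-Scopes response header on token-authenticated requests.
    """
    granted = {s.strip() for s in oauth_scopes_header.split(",") if s.strip()}
    return granted - allowed
```

Any non-empty result means a token is over-scoped and would leak more capability than necessary if stolen through an attack like the one described above.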
This disclosure arrives alongside a separate ChatGPT vulnerability reported by Check Point Research, where a DNS-based covert channel in ChatGPT code execution runtime allowed data exfiltration without user knowledge — also now patched by OpenAI.
Sources
- OpenAI Codex Command Injection Vulnerability Exposes GitHub Tokens — BeyondTrust Phantom Labs
- OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability — The Hacker News
- Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise — SecurityWeek
- OpenAI Codex Vulnerability Exposes GitHub Credentials via Command Injection — TechNadu