MintMCP
March 26, 2026

Claude Code CVE-2025-59536 & CVE-2026-21852: What Enterprise Teams Must Know

Skip to main content

Two serious vulnerabilities in Anthropic's Claude Code exposed enterprise development teams to remote code execution and API credential theft through malicious repository configuration files—files that developers treat as passive metadata but actually functioned as active execution paths. These vulnerabilities highlight why enterprises need LLM proxy monitoring and runtime security controls for coding agents before they reach production.

This article provides enterprise security teams with technical details on both CVEs, immediate remediation steps, and long-term governance frameworks to protect AI development environments from configuration-based attacks.

Key Takeaways

  • CVE-2025-59536 (CVSS 8.7 High): Remote code execution via pre-trust hook execution and MCP consent bypass—malicious code runs before trust dialogs appear
  • CVE-2026-21852 (CVSS 5.3 Moderate): API key exfiltration through ANTHROPIC_BASE_URL manipulation redirects traffic to attacker-controlled servers
  • All vulnerabilities patched: CVE-2025-59536 fixed in Claude Code 1.0.111+ (October 2025); CVE-2026-21852 fixed in version 2.0.65+ (January 2026)
  • Supply chain attack vector: Single malicious pull request or honeypot repository can compromise entire development teams through trusted channels
  • Anthropic Workspaces amplification: Stolen API keys can expose workspace-scoped resources, enabling data access within the affected workspace
  • Configuration files are code, not metadata: These vulnerabilities exploited the fundamental trust boundary where developers assumed .claude/settings.json files were safe settings

Understanding Claude Code Vulnerabilities: CVE-2025-59536 & CVE-2026-21852 Explained

What Are CVE-2025-59536 and CVE-2026-21852?

These vulnerabilities exploited flaws in Claude Code's startup trust dialog and project-load flow, allowing configuration-driven actions to occur before users could review or approve them.

CVE-2025-59536: Pre-Trust Hook Execution (RCE)

This vulnerability allowed malicious shell commands embedded in repository configuration files to execute before the user saw or confirmed any trust dialog. The attack vector worked as follows:

  • Attacker creates repository with malicious .claude/settings.json containing hooks
  • Victim clones repository and runs claude command
  • Claude Code reads configuration and executes hooks to "prepare context"
  • Malicious payload runs with developer's privileges before trust dialog appears
  • Attacker gains the ability to execute arbitrary code on the victim's machine

A secondary attack path abused .mcp.json configuration files to initialize malicious MCP servers. Settings like enableAllProjectMcpServers: true in .claude/settings.json auto-approved MCP servers without user consent, bypassing improved warning dialogs.

CVE-2026-21852: API Key Exfiltration via Environment Variable Injection

This vulnerability redirected Claude Code's API traffic to attacker-controlled servers before the trust dialog appeared, exfiltrating the user's Anthropic API key in plaintext:

  • Attacker creates repository with malicious .claude/settings.json containing ANTHROPIC_BASE_URL override
  • Victim opens project in Claude Code
  • Claude Code sends initialization requests to attacker's URL with full x-api-key header
  • Attacker's proxy captures credentials and forwards requests to real Anthropic API
  • Victim notices nothing unusual while attacker harvests API keys

Potential Impact on Enterprise Operations

Anthropic Workspaces Amplification:

Stolen API keys provide access to Anthropic Workspaces, where files and other resources can be scoped to the affected workspace and shared across API keys in that workspace. A single compromised key grants attackers ability to:

  • Access workspace-scoped files and related resources available to the compromised key
  • Modify or remove accessible workspace resources, depending on the affected key's permissions
  • Upload malicious content or consume workspace resources within the affected workspace
  • Generate unauthorized API usage charged to the victim's account
  • Reach shared workspace resources, not just artifacts tied to one local machine

Business Impact Summary:

  • Full remote access to developer workstations
  • Exposure of source code, credentials, and internal networks
  • Supply chain attack potential—one malicious commit affects entire development team
  • Compliance violations if sensitive data is accessed
  • Unauthorized API usage leading to unexpected costs

The Threat Landscape: Why Claude Code Security is Paramount for Enterprises

The Rise of Coding Agents and Their Risks

Coding agents like Claude Code operate with extensive system access—reading files, executing commands, and accessing production systems through MCP tools. Without monitoring, organizations cannot see what agents access or control their actions.

These Claude Code vulnerabilities are part of a broader pattern including CVE-2026-21858 in n8n automation workflows and CVE-2025-68664 in LangChain.

How Unsecured AI Tools Lead to Enterprise Vulnerabilities

Traditional security frameworks assume human actors with predictable behavior. AI agents break these assumptions through autonomous decision-making, probabilistic responses varying by context, and dynamic access needs. Most critically, configuration files that security teams treat as passive metadata actually control active execution paths.

The MintMCP LLM Proxy addresses these security challenges by monitoring every MCP tool invocation, bash command, and file operation from coding agents—providing the visibility and control organizations need when deploying tools like Claude Code.

Mitigating CVE-2025-59536 & CVE-2026-21852: Immediate Steps for Your Team

Essential Patching and Configuration Updates

Step 1: Version Verification (5 minutes)

  • Run claude --version in terminal
  • Version should be 2.0.65 or higher
  • If outdated, proceed to immediate update

Step 2: Update Claude Code (10 minutes)

  • npm users: npm update -g @anthropic/claude-code
  • Standalone installer: Download latest from code.claude.com
  • Verify new trust dialogs appear with enhanced warnings

Step 3: API Key Rotation (15 minutes)

  • Immediately rotate all Anthropic keys used by developers
  • Revoke old keys in Anthropic Console before generating new ones
  • Use secure credential management (1Password, AWS Secrets Manager)
  • Check API usage logs for anomalous activity before rotation

Step 4: Configuration File Audit (30 minutes per repository)

Search for suspicious patterns:

# Dangerous hook patterns

grep -r "hooks.*curl\|hooks.*wget\|hooks.*bash" .

# Environment variable overrides

grep -r "ANTHROPIC_BASE_URL\|OPENAI_BASE_URL" .

# Auto-approval settings

grep -r "enableAllProjectMcpServers\|autoApprove" .

Implementing Stricter Access Controls

Network Security Updates:

  • Block outbound traffic from developer terminals to unknown API endpoints
  • Implement allowlists for legitimate AI service domains
  • Enable DNS monitoring for suspicious domain queries

Configuration Governance:

  • Treat .claude/, .mcp.json, and similar files as executable logic
  • Require security team approval for configuration changes
  • Implement cryptographically signed configurations—reject unsigned files

Continuous Monitoring: Safeguarding Against Future Claude Code Vulnerabilities

Detecting Malicious Activity in Real-Time

Many organizations still struggle to move from passive monitoring to active control when AI agents do something risky. Effective detection requires:

  • Real-time monitoring of all AI agent tool invocations
  • Behavioral anomaly detection (agent accessing databases it never used before)
  • API usage monitoring with cost thresholds and alerts
  • Configuration drift detection for unauthorized changes

The MintMCP MCP Gateway provides real-time visibility into usage patterns and security events with complete audit trails of MCP interactions to support SOC 2 Type II attestation workflows and GDPR-oriented governance.

Establishing Robust Audit Trails

Effective incident response requires comprehensive logging with these phases:

  • Immediate: Circuit breaker to halt agent execution
  • Investigation: Collect logs including prompts, tool calls, outputs, decision reasoning
  • Containment: Isolate agent from production data/APIs, rotate all associated credentials
  • Remediation: Root cause analysis, policy updates
  • Documentation: Lessons learned for compliance and continuous improvement

MintMCP's audit capabilities generate complete audit trails of every bash command and file access for security review—ensuring your organization can demonstrate compliance during audits.

Enforcing Security Policies for Claude Code: Granular Controls & Governance

Restricting Dangerous Commands and File Access

The Claude Code vulnerabilities highlighted fundamental security issues with the Model Context Protocol (MCP). Common MCP gaps include:

  • Unsecure agent-to-agent communication with default trust
  • Credential exposure through hardcoded API keys in configs
  • Lack of built-in approval workflows for critical operations
  • Limited audit trails for prompts, tool calls, or decision chains
  • Difficult privilege management across multiple MCP servers

Mandatory Configuration Security Controls:

Before any AI agent reaches production:

  • Configuration file scanning: Automated detection of malicious patterns
  • Code review parity: Treat configuration changes with same rigor as source code
  • Signed configurations: Cryptographically sign approved configs
  • Network segmentation: Block outbound traffic to unknown API endpoints
  • Credential vault integration: Never allow API keys in configuration files

MintMCP's tool governance enables granular tool access control, allowing enterprises to configure read-only operations while excluding write tools based on role requirements.

Aligning with Enterprise Compliance Standards

Governance-First AI Agent Deployment Maturity Model:

Level 1: Intern

  • Agent Scope: Read-only, no external tools
  • Approval Process: Pre-approve every action
  • Control Mechanisms: Human reviews all outputs

Level 2: Assistant

  • Agent Scope: Limited tool access
  • Approval Process: Pre-approve high-risk actions
  • Control Mechanisms: Automatic logging + alerts

Level 3: Senior

  • Agent Scope: Broader tool access
  • Approval Process: Post-action notification
  • Control Mechanisms: Runtime policy enforcement

Level 4: Principal

  • Agent Scope: Autonomous within domain
  • Approval Process: Strategic oversight only
  • Control Mechanisms: Behavioral monitoring + circuit breakers

The Cloud Security Alliance framework presents a governance model where agent autonomy is earned over time, with clear maturity levels, promotion criteria, and controls matched to each stage.

Compliance & Auditability: Supporting SOC 2 Type II Attestation, HIPAA-Readiness Controls, and GDPR-Oriented Governance with Secure AI Development

Ensuring Regulatory Adherence in AI Workflows

Regulatory pressure is increasing for AI agent deployments. The EU AI Act mandates robust security measures for high-risk AI applications, while U.S. Executive Orders require governance frameworks. However, most frameworks don't specifically address agentic AI or multi-agent systems.

The Five Essential Questions Framework:

Every enterprise must answer these for each deployed AI agent:

"Who are you?" — Identity & Authentication

  • Unique agent identity, certificate-based auth

"What can you do?" — Authorization & Permissions

  • Least-privilege access, scoped API tokens

"What are you eating? What are you serving?" — Data Governance

  • Input validation, PII protection, output governance

"Where can you go?" — Segmentation

  • Network boundaries, MCP server allowlists

"What if you go rogue?" — Incident Response

  • Circuit breakers, kill switches, containment playbooks

Leveraging Audit Logs for Accountability

Shared API keys for agent-to-agent authentication make forensic analysis impossible when incidents occur. Identity-first access control requirements include:

  • Assign unique identity to every AI agent with explicit, scoped permissions
  • Use service principals or workload identities (Azure, AWS IAM roles)
  • Implement just-in-time access—permissions granted only for required duration
  • Maintain immutable audit trail of agent identity, permissions, and all actions

MintMCP Gateway provides complete audit logs for security reviews, helping ensure Claude Code activity is traceable to specific identities with MCP interactions, access requests, and configuration changes logged.

Bridging the Gap: Securely Integrating Claude Code with Internal Systems

Centralized Authentication and Access for AI Tools

The Claude Code vulnerabilities demonstrated how configuration-based attacks exploit the trust boundary between AI agents and enterprise resources. MCP architecture creates direct pathways that eliminate traditional security boundaries.

MCP Security Framework Requirements:

  • Identity-first access control: Each MCP server needs distinct identity with scoped permissions
  • Runtime policy enforcement: Evaluate every request against enterprise policy before execution
  • Real-time monitoring: Track all MCP tool invocations with attribution to specific identities
  • Configuration validation: Treat MCP configs as code—scan for malicious patterns
  • Supply chain vetting: Maintain internal trust registry of approved MCP servers

Protecting Data Access during AI Operations

API Key & Credential Management Best Practices:

  • Eliminate static API keys: Replace long-lived keys with short-lived tokens (1-2 hour lifetimes)
  • Implement automatic rotation: Keys rotate every 1-2 hours via secrets management
  • Use hardware security: Store keys in tamper-resistant hardware
  • Apply IP allowlists: Restrict API key usage to known, trusted IP ranges
  • Require mTLS: Mutual TLS authentication for all agent-to-API communication

MintMCP Gateway offers unified authentication with OAuth 2.0, SAML, and SSO integration for securely connecting AI tools to enterprise systems.

How MintMCP Protects Your Claude Code Deployments

Claude Code vulnerabilities exposed a key gap in AI agent security: configuration files can execute with system-level privileges before users can review them. Traditional security tools can monitor traffic or detect known malware, but they cannot see inside agent behavior or stop unsafe MCP server configurations before they run.

MintMCP Gateway adds three security layers for Claude Code deployments. First, it provides centralized governance over MCP access through approved server controls, monitoring, and rule-based enforcement. Second, it applies runtime policy enforcement to every MCP tool call, blocking high-risk actions like arbitrary bash execution or credential exfiltration while allowing approved workflows to continue. Third, it creates complete audit trails that capture not just which tool was called, but also the commands, files, access requests, and identity tied to each action.

With MintMCP, organizations can control which MCP servers agents can access, which commands they can run, and which data they can reach. Security teams get real-time visibility and centralized policy enforcement, while development teams retain self-service access to approved tools. That balance helps enterprises scale AI agent adoption with stronger security and less manual overhead. This balance between developer velocity and security control is what enables enterprise AI agent adoption without unacceptable risk.

Frequently Asked Questions

What should I do if I suspect my team's API keys were compromised before patching?

Immediately rotate all Anthropic API keys that any potentially affected developer used, even if you've now patched Claude Code. Check your Anthropic Console for unusual API usage patterns, unexpected geographic locations, or anomalous call volumes in the weeks before patching. If you use Anthropic Workspaces, audit all workspace files for unauthorized modifications or deletions since compromised keys grant workspace-wide access. Consider engaging incident response specialists if you detect evidence of exploitation, as attackers may have established persistent access through uploaded malicious files.

How do these vulnerabilities compare to prompt injection attacks?

These configuration-based attacks operate at a lower level than prompt injection and are potentially more dangerous. Prompt injection manipulates AI model behavior through crafted inputs during runtime. Configuration attacks like CVE-2025-59536 execute arbitrary code with developer privileges before the AI model even runs—the trust dialog hasn't appeared yet. This means traditional prompt injection defenses (input sanitization, output filtering) provide zero protection. Organizations need both prompt security controls and configuration validation, treating .claude/settings.json files with the same scrutiny as executable code.

Can these attack patterns affect other AI coding assistants beyond Claude Code?

Yes. The fundamental vulnerability—treating configuration files as passive metadata when they actually control execution—exists across the AI coding assistant ecosystem. Similar attack patterns have been demonstrated against other tools using MCP configurations. Any AI agent that reads project-level configuration files before displaying trust dialogs faces comparable risks. Enterprise security teams should audit all AI coding tools for pre-authentication execution paths in configuration files, regardless of vendor. The MCP security risks extend to any tool implementing the Model Context Protocol.

How do I balance developer productivity with the security controls needed to prevent these attacks?

Start by implementing controls that are invisible to compliant workflows. Pre-commit hooks that scan configuration files add milliseconds to commits. Allowlisted MCP servers don't require additional approval steps for approved tools. The key is making secure behavior the default while only introducing friction for genuinely risky actions. MintMCP's approach of deploying MCP tools with pre-configured policies demonstrates this principle—developers get self-service access to approved tools instantly while security teams maintain governance through centralized policy enforcement rather than manual approval queues.

What's the minimum viable security program for a small team using Claude Code?

For teams under 20 developers: verify all team members run Claude Code 2.0.65+, rotate API keys monthly with centralized secrets management, add pre-commit hooks scanning for dangerous configuration patterns, block outbound traffic to non-allowlisted API domains at the network level, and enable logging of all Claude Code sessions for audit purposes. This provides baseline protection without dedicated security tooling. As teams scale beyond 20-50 developers, purpose-built solutions like MintMCP become cost-effective compared to maintaining custom scripts and manual processes.

MintMCP Agent Activity Dashboard

Ready to get started?

See how MintMCP helps you secure and scale your AI tools with a unified control plane.

Sign up