© 2026 Ayush Sharma. Built with care.

#security #git #vulnerability #github
A Single git push Was All It Took: CVE-2026-3854

Any authenticated user with push access to any repo on your GitHub Enterprise Server could RCE the server with one crafted push option. Here's exactly how.

May 8, 2026 · 9 min read

I merged a branch last Thursday. Four commits, a small feature, pushed to a self-hosted GitHub Enterprise Server running 3.16.14, two versions behind the fix for CVE-2026-3854, though I didn't know that yet.

The advisory dropped on April 28. I checked my version. Updated immediately. But for the better part of eight weeks, anyone with an account on that instance could have executed arbitrary code on the server with a single git push.

Not a specially crafted binary. Not a kernel exploit. A push option with a semicolon in it.

CVE-2026-3854 (CVSS 8.7) was publicly disclosed April 28, 2026, fifty-five days after researchers at Wiz reported it. GitHub patched GitHub.com within two hours of the March 4 report. At public disclosure, roughly 88% of visible GitHub Enterprise Server instances were still on vulnerable versions.

Here's the mechanism, the chain, and (the part that actually matters) why this class of bug keeps appearing in internal infrastructure where nobody expects it.

babeld and the X-Stat header

GitHub's git infrastructure routes every push through a proxy called babeld. It sits between the git client and the backend services that handle repository storage, access policy, and pre-receive hook execution. Think of it as the traffic cop for every write that touches a repository.

Part of babeld's job is passing metadata downstream. When it receives a push, it assembles an internal HTTP header called X-Stat and forwards it to the services that need to make decisions: which hooks to run, whether the environment is sandboxed, where to load hook scripts from. A simplified version of that header looks like:

X-Stat: repo_id=12345;rails_env=production;user_id=9876;custom_hooks_dir=/opt/github/hooks

The format is straightforward: key-value pairs, separated by semicolons. The header is internal to GitHub's service mesh. No external client is supposed to write to it. The values are trusted.
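A downstream consumer of that format might parse it something like this. This is a minimal sketch using the simplified field names above, not GitHub's actual closed-source implementation:

```python
def parse_x_stat(header: str) -> dict[str, str]:
    """Split a semicolon-delimited key=value header into a dict.

    Naive by design: it assumes every semicolon is a field boundary
    and every field was written by babeld. Duplicate keys silently
    overwrite earlier ones.
    """
    fields: dict[str, str] = {}
    for pair in header.split(";"):
        key, _, value = pair.partition("=")
        fields[key] = value
    return fields

header = "repo_id=12345;rails_env=production;user_id=9876;custom_hooks_dir=/opt/github/hooks"
print(parse_x_stat(header)["rails_env"])  # -> production
```

Nothing wrong with this parser on its own. The problem is what gets fed into the string it splits.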

That assumption is where things went wrong.

The unsanitized semicolon

Git has a feature called push options: client-supplied key-value strings you can attach to a push to signal things to server-side hooks:

git push --push-option="deploy=true"
git push --push-option="skip-ci=1" --push-option="env=staging"

They're a clean extension mechanism. CI systems use them to trigger pipelines. Deployment tooling uses them to parameterize release behavior. They're a normal part of git infrastructure.
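On the receiving end, git exposes push options to server-side hooks through environment variables (`GIT_PUSH_OPTION_COUNT`, `GIT_PUSH_OPTION_0`, and so on). A pre-receive hook consuming them might look like this sketch:

```python
#!/usr/bin/env python3
# Sketch of a server-side hook reading push options from the
# environment variables git sets for pre-receive/post-receive hooks.
import os

def read_push_options() -> list[str]:
    count = int(os.environ.get("GIT_PUSH_OPTION_COUNT", "0"))
    return [os.environ[f"GIT_PUSH_OPTION_{i}"] for i in range(count)]

if __name__ == "__main__":
    for opt in read_push_options():
        if opt == "skip-ci=1":
            print("push option received: skipping CI trigger")
```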

When babeld processed a push, it extracted the push option values and embedded them into the X-Stat header to pass downstream. The values were not sanitized for semicolons before insertion, even though the semicolon is the very character the header format uses as its field delimiter.

That's the entire bug. One sentence long.

A push option value of deploy=true;rails_env=development gets embedded and produces:

X-Stat: repo_id=12345;rails_env=production;push_option=deploy=true;rails_env=development

Downstream, the parser encounters two rails_env fields. The injected one wins. The attacker now controls trusted internal header fields that babeld was supposed to be the only author of.
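The last-wins behavior is easy to see with a naive semicolon split over the injected header above (a sketch; the real parser isn't public):

```python
# Splitting the injected header: the attacker's duplicate rails_env
# field overwrites the legitimate one, because the last occurrence
# of a key wins in a naive key=value parse.
header = ("repo_id=12345;rails_env=production;"
          "push_option=deploy=true;rails_env=development")

fields: dict = {}
for pair in header.split(";"):
    key, _, value = pair.partition("=")
    fields[key] = value

print(fields["rails_env"])  # -> development
```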

Three injections, one shell

Wiz traced how that single primitive chains into full remote code execution. The escalation has three steps.

Step one: abandon the sandbox. The pre-receive binary (the component that runs server-side hooks before a push is accepted) has two execution paths, chosen entirely by the rails_env field. A production value routes hook execution through an isolated sandbox. Any other value (development, test, a random string) skips the sandbox and runs hooks directly as the git service user, with full filesystem access and no containment.

Inject rails_env=development. The binary leaves the sandbox. You are now running in the same process space as the git service user.
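As a sketch of that dispatch (hypothetical: the pre-receive binary is closed source, so the function and return values here are illustrative):

```python
def choose_execution_path(fields: dict[str, str]) -> str:
    # Only the exact string "production" gets the sandbox. Any other
    # value, including an attacker-injected one, runs the hook
    # directly as the git service user.
    if fields.get("rails_env") == "production":
        return "sandboxed"
    return "direct"

print(choose_execution_path({"rails_env": "production"}))   # -> sandboxed
print(choose_execution_path({"rails_env": "development"}))  # -> direct
```

Note the failure mode: the safe path is opt-in, selected by a string comparison on a field the attacker can now write.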

Step two: redirect the hook directory. A field called custom_hooks_dir tells the pre-receive binary where to load hook scripts from. Under normal circumstances, this points to a directory GitHub controls. Inject a value for this field pointing to a path the attacker can write to.

Step three: path traversal to execution. With the sandbox bypassed and the hook directory redirected, inject a hook definition containing a path-traversal sequence. The binary resolves the path, finds a script the attacker previously staged, and executes it.

Three semicolons in a push option string. Arbitrary code runs on the server handling the push.
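Put together, the three steps fit in a single push-option value. This shows only the shape of the payload, with the hypothetical field names and paths from the simplified header earlier; it is not a working exploit:

```python
# Shape of a chained payload: each ';' inside the value becomes a new
# "trusted" field once embedded unsanitized. Field names and paths
# are illustrative, not the real ones.
payload = (
    "deploy=true"
    ";rails_env=development"           # step 1: skip the sandbox
    ";custom_hooks_dir=/tmp/evil"      # step 2: redirect hook loading
    ";hook=../../../tmp/evil/run.sh"   # step 3: traverse to a staged script
)
embedded = f"repo_id=12345;rails_env=production;push_option={payload}"
print(len(embedded.split(";")))  # -> 6 fields, three of them injected
```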

One important detail: "push access to a repository" sounds like it requires compromising an existing account with write access to something valuable. It doesn't. On a GitHub Enterprise Server instance, any authenticated user can create their own repository. Push to that. The entire chain runs from a repository the attacker created three minutes ago. The blast radius for "who can exploit this" is effectively "everyone with a GHES account."

What was at stake

For GitHub Enterprise Server: the git service user has access to the repository storage for the instance. In most enterprise deployments, that's source code, CI pipeline secrets embedded in git history, and configuration that touches production infrastructure. RCE on the git server is rarely a dead end.

Patched versions are 3.14.25, 3.15.20, 3.16.16, 3.17.13, 3.18.8, 3.19.4, and 3.20.0+. If you're running anything older, stop reading and patch first.

For GitHub.com: the situation was materially worse in one specific way. GitHub.com runs a multi-tenant architecture where multiple organizations share backend storage infrastructure. Code execution on a storage node meant cross-tenant exposure: an attacker achieving RCE could read repository data belonging to other users on the same node. Wiz confirmed this blast radius. The word "millions" is accurate when describing how many repositories were potentially in scope.

GitHub patched GitHub.com within two hours of the March 4 report. That's genuinely good incident response. The window was real, but it closed fast.

The pattern

Delimiter injection in trusted internal channels is not new. It's one of the oldest vulnerability classes in software. And it keeps appearing because the mental model most developers hold while writing internal service code is wrong in a specific, systematic way.

The mental model: internal means trusted, trusted means safe to parse naively.

The reality: "internal" describes the network path, not the data provenance. Every place where user-supplied data flows into an internal channel (headers, message queues, log entries, RPC fields, structured metadata) is a potential injection point, regardless of whether the channel is public-facing.

The same primitive surfaces in slightly different costumes:

  • CRLF injection: \r\n in user input terminates an HTTP response header, allowing an attacker to inject arbitrary response headers or split the response
  • Log injection: newlines in user-controlled values create fake log entries that spoof audit trails or fool SIEM parsers
  • Email header injection: \n in a From or Subject field enables arbitrary SMTP header insertion, useful for spam and phishing
  • SSRF via parser confusion: @ or # in a URL fools a validation step into approving a redirect that the actual parser handles differently
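The CRLF costume, for example, is the same one-line mistake (a minimal sketch, not tied to any particular web framework):

```python
# CRLF injection sketch: an unfiltered "\r\n" in a user value closes
# the current header line and opens a new, attacker-authored one.
user_language = "en\r\nSet-Cookie: session=attacker"
raw = f"Content-Language: {user_language}\r\n"

header_lines = raw.split("\r\n")
print(header_lines[1])  # -> Set-Cookie: session=attacker
```

Swap `\r\n` for `;` and "response header" for "X-Stat" and you have CVE-2026-3854.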

The structural invariant is always the same: a trusted channel uses a delimiter character; user input is embedded in that channel without stripping that character; downstream, a parser treats injected fields as authoritative.

What makes CVE-2026-3854 interesting isn't the injection: that's decades old. It's that it lived in a component millions of developers touch every day, in a pipeline nobody thinks of as user-facing. babeld isn't an API endpoint. It's infrastructure. The attack surface felt invisible, and that invisibility is what let the bug survive.

Public API surfaces get hardened. They have security reviews, fuzz testing, WAFs. Internal protocol boundaries accumulate debt quietly, because the people writing them are thinking about correctness and performance, not adversarial inputs. The assumption of internal trust is a security vulnerability waiting for a semicolon.

What to actually do

If you operate GHES: patch now. The advisory is public, the chain is documented, and exploitation requires nothing exotic. Waiting for the next maintenance window means being exploitable while attackers who read security blogs catch up.

If you write code that embeds user-controlled data into internal structured formats: audit it. Search for every place where push options, request parameters, query fields, or any other user-supplied value gets appended or interpolated into a semicolon-, pipe-, newline-, or colon-delimited format. The fix is not a generic sanitizer. A generic sanitizer won't know which characters matter. The fix is explicit stripping or escaping of the specific delimiter character used by the downstream parser.

For the X-Stat case, the fix is two lines: before embedding a push option value into the header, strip or percent-encode the semicolons. That's it. The vulnerability existed because nobody wrote those two lines.
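A sketch of that fix under the same simplified header format, using percent-encoding rather than stripping so legitimate values round-trip (the helper name is mine, not GitHub's):

```python
from urllib.parse import quote, unquote

def embed_push_option(header: str, value: str) -> str:
    # Encode the delimiter before embedding: ';' (and '%', so decoding
    # stays unambiguous) can no longer create new header fields.
    return f"{header};push_option={quote(value, safe='=')}"

h = embed_push_option("repo_id=12345;rails_env=production",
                      "deploy=true;rails_env=development")
print(h)
# repo_id=12345;rails_env=production;push_option=deploy=true%3Brails_env=development
```

The downstream parser now sees exactly one `push_option` field and must decode it explicitly; the injected `rails_env` never becomes a field of its own.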

If you're reviewing a system's security: trace user data from ingestion into internal protocol layers. Ask what delimiter characters those protocols treat as structural. Ask whether user-supplied values are filtered for those characters before embedding. The answers will usually be fine. Occasionally they won't be, and the impact will be proportional to how trusted the channel is.

The disclosure gap

One last thing worth sitting with.

GitHub received the report March 4. Fixed GitHub.com in two hours. Disclosed publicly April 28, fifty-five days later. Coordinated vulnerability disclosure: give self-hosted operators time to patch before making the exploitation path public knowledge.

At disclosure, 88% of visible GHES instances were still on vulnerable versions.

That's not a failure of coordinated disclosure. It's the structural reality of self-hosted software. Enterprise patch cycles take time. Testing takes time. Change control takes time. Fifty-five days disappears fast when it has to compete with everything else.

But here's what that timeline actually means: the window between "fixed" and "publicly disclosed" is the only period when a CVSS 8.7 vulnerability exists without a published exploit path. Once the advisory drops, the race is over. Researchers, pen testers, and threat actors all now know roughly where to look.

If you operate self-hosted infrastructure, your effective patch SLA isn't "before end of quarter." It's "before the CVE goes public." Those are different deadlines by weeks, sometimes months.

In this case, fifty-five days. Most teams will tell you they needed all of it. Most teams were still behind.


Found this useful? Share it, or send a note.
