
The Vercel Breach: When Your AI Assistant Is the Supply Chain

By AutoSmoke Team

Trace the chain backwards and it gets absurd fast: a user downloaded a Roblox exploit script. That script carried Lumma Stealer. The stealer harvested credentials that let attackers pivot into an AI startup called Context.ai. Context.ai had a Google Workspace OAuth app with broad scopes. A Vercel employee had granted that app access. From there, the attackers were inside Vercel's internal systems.

This is the April 2026 Vercel breach—and it's a story about OAuth trust boundaries as much as it is about Vercel.


What Happened

On April 19–20, 2026, Vercel disclosed that attackers had gained unauthorized access to certain internal systems and contacted "a limited subset of customers" directly. The data exposed in that subset was narrow in one respect and alarming in another: attackers could read non-sensitive environment variables in plaintext. Sensitive environment variables—those stored with read-prevention—were not exposed, and Vercel confirmed that its npm packages and the broader software supply chain were not compromised.

The threat actor claiming the intrusion operates under the ShinyHunters banner, reportedly seeking $2 million for the stolen data. Vercel has engaged Mandiant and additional incident-response firms, notified law enforcement, and shipped product changes in response (more on those below).

The Attack Chain

[Diagram: the five-stage Vercel breach attack chain — Roblox exploit script → credential theft → Context.ai OAuth app compromise → pivot into a Vercel employee's Workspace → lateral movement into Vercel internal systems]

The intrusion is a textbook supply-chain story, but with an AI-era twist. Each link in the chain was probably considered "low-risk" on its own. Together, they formed a path straight to production.

  • February 2026 — Patient zero, far from Vercel. A user (unaffiliated with Vercel) downloaded a Roblox exploit script. The script bundled Lumma Stealer, which scraped browser sessions, tokens, and credentials from the infected machine.
  • Lateral drift into Context.ai. The stolen credentials gave attackers a foothold inside Context.ai, a third-party AI tool. Specifically, they compromised Context.ai's Google Workspace OAuth app—the integration that Context.ai customers use to authorize the product against their Workspace accounts.
  • OAuth pivot into a Vercel employee's Workspace. A Vercel employee had authorized Context.ai against their enterprise Google Workspace with broad ("Allow All") scopes. Because OAuth tokens ride on the vendor's OAuth app, compromising Context.ai was equivalent to compromising every token that app had ever been granted.
  • Lateral movement into Vercel internals. With a Vercel employee's Workspace account effectively taken over, attackers moved laterally into certain internal Vercel systems where they could read environment variables for a subset of customer projects.

For defenders, the indicator of compromise Vercel published is the Context.ai OAuth client ID:

110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

If your Workspace logs show that client ID, the exposure is potentially much broader than Vercel—reporting suggests the Context.ai compromise affected hundreds of downstream customers.
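If you have a Workspace token-audit export on hand, a first pass for that IoC can be a few lines of Python. The sketch below assumes a JSON-lines export and matches against the serialized event, so the exact schema doesn't matter; the sample events and their field names are hypothetical.

```python
import json

# The Context.ai OAuth client ID Vercel published as an indicator of compromise.
IOC_CLIENT_ID = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"

def find_ioc_events(log_lines, ioc=IOC_CLIENT_ID):
    """Return events from a JSON-lines audit export that mention the IoC.

    Any event whose serialized form contains the client ID is flagged,
    so this works regardless of which field the ID appears in.
    """
    hits = []
    for line in log_lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if ioc in json.dumps(event):
            hits.append(event)
    return hits

# Hypothetical sample events, shaped loosely like Workspace token-audit entries.
sample = [
    json.dumps({"actor": "alice@example.com", "event": "authorize",
                "client_id": IOC_CLIENT_ID}),
    json.dumps({"actor": "bob@example.com", "event": "authorize",
                "client_id": "something-else.apps.googleusercontent.com"}),
]

print(len(find_ioc_events(sample)))  # -> 1
```

Any hit means at least one user in your org granted the compromised app access, and that user's account should be treated as potentially exposed.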

Why "Allow All" OAuth Is the Real Story

Every time someone clicks Allow on an OAuth prompt that says "This app will be able to read and manage your email, files, and calendars," they are not granting a permission to a person. They are granting it to that vendor's infrastructure, employees, build pipeline, and incident-response posture—forever, until revoked.

When that vendor is a fast-moving AI startup that has existed for twelve months, you are effectively extending your corporate perimeter to include theirs. Your security is the minimum of your security and their security.

This is not a new idea—CI vendors, logging vendors, and Slack apps have lived with the same math for years. What changed in 2026 is how many AI tools have quietly become OAuth-privileged participants in production workflows, often granted the broadest possible scopes because narrowing them would break the product. A meeting-notes bot that "reads your calendar, email, drive, and chats" is indistinguishable from adversary-grade surveillance the moment its vendor is breached.

The Vercel incident is a clean proof of that failure mode: the breach didn't go through Vercel's code, Vercel's pipelines, or Vercel's customers. It went through a vendor that a single Vercel employee had authorized.

What Was (and Wasn't) Exposed

The distinction Vercel is drawing between non-sensitive and sensitive environment variables deserves a careful read.

  • Non-sensitive variables are readable from the Vercel dashboard in plaintext. They are meant to hold things like feature flags, public URLs, and non-secret configuration. In practice, teams regularly put API keys, database URLs, signing keys, and third-party tokens there—because the "sensitive" flag was not the default.
  • Sensitive variables are encrypted with read-prevention, meaning they cannot be retrieved in plaintext after being set—only referenced at build or runtime. These were not exposed.

"Non-sensitive" never meant "low-value." It meant "readable." That's a subtle but costly distinction if the people reading are attackers.
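A quick triage pass over your variable names can surface the obvious offenders. The keyword heuristic below is our own, not anything Vercel publishes, and it will miss secrets with innocuous names — but it's a cheap first cut:

```python
import re

# Name patterns that usually indicate a secret, even when stored "non-sensitive".
# Illustrative keyword list; extend it for your own naming conventions.
SECRET_HINTS = re.compile(
    r"(KEY|SECRET|TOKEN|PASSWORD|CREDENTIAL|DATABASE_URL|DSN|PRIVATE)", re.I
)

def should_be_sensitive(name: str) -> bool:
    """Heuristic: does this env var name look like it holds a secret?"""
    return bool(SECRET_HINTS.search(name))

def triage(env_names):
    """Split env var names into (rotate_and_flag, probably_fine)."""
    flagged = [n for n in env_names if should_be_sensitive(n)]
    fine = [n for n in env_names if not should_be_sensitive(n)]
    return flagged, fine

flagged, fine = triage([
    "NEXT_PUBLIC_API_URL",   # genuinely public config
    "STRIPE_SECRET_KEY",     # should never have been non-sensitive
    "DATABASE_URL",
    "FEATURE_FLAG_NEW_NAV",
])
print(flagged)  # -> ['STRIPE_SECRET_KEY', 'DATABASE_URL']
```

Anything in the flagged list is both a rotation candidate and a candidate for the sensitive flag going forward.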

If You Operate a Vercel Project, Do This Today

Vercel's guidance is specific. If you haven't worked through it yet, treat the following as a checklist:

1. Turn On MFA for Every Member of Every Team

Authenticator apps or passkeys—not SMS. This is table stakes, but the breach is a reminder that an enterprise Workspace compromise will try to pivot into every connected SaaS.

2. Rotate Every Non-Sensitive Environment Variable

API keys, database credentials, signing keys, third-party tokens, webhook secrets, OAuth client secrets—if it lives in a non-sensitive env var on an affected project, assume plaintext exposure and rotate. Do this before deleting or archiving projects, not after.

3. Flip the Sensitive Flag Going Forward

Vercel now ships new environment variables with the sensitive flag on by default, along with improved team-wide variable management. For existing variables, go back and enable the flag on anything that shouldn't be readable from the dashboard.

4. Audit Activity Logs and Recent Deployments

Look for unfamiliar IP addresses, team invites you didn't authorize, and deployments from branches you don't recognize. Vercel has also enhanced activity logging as part of this response—use it.

5. Rotate Deployment Protection Tokens

If your project uses Deployment Protection, set it to at least "Standard" and rotate any tokens. A protection token that leaked is a bypass key to preview environments.

6. Audit Your Google Workspace OAuth App Inventory

Whether or not you use Vercel, go into your Workspace admin console and look at the full list of third-party OAuth apps authorized by users in your org. For each, ask: does this vendor need the scopes we granted, and what happens if they get breached? Revoke aggressively. Most "Allow All" approvals are approvals by default, not by design.
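If you export that grant list as (app name, scopes) pairs, a first-pass filter for dangerous scopes might look like the sketch below. The broad-scope set is illustrative — those are real Google OAuth scope URLs, but the cut-off for "too broad" is a judgment call you should tune to your own risk model:

```python
# Scopes broad enough that a vendor breach means mailbox/drive takeover.
BROAD_SCOPES = {
    "https://mail.google.com/",                              # full Gmail access
    "https://www.googleapis.com/auth/drive",                 # full Drive access
    "https://www.googleapis.com/auth/calendar",              # full Calendar access
    "https://www.googleapis.com/auth/admin.directory.user",  # directory admin
}

def risky_grants(grants):
    """Given (app_name, scopes) pairs from a Workspace token export,
    return the apps holding at least one broad scope."""
    return sorted({
        app for app, scopes in grants
        if any(s in BROAD_SCOPES for s in scopes)
    })

# Hypothetical grant inventory.
grants = [
    ("meeting-notes-bot", ["https://mail.google.com/",
                           "https://www.googleapis.com/auth/drive"]),
    ("cli-tool", ["https://www.googleapis.com/auth/userinfo.email"]),
]
print(risky_grants(grants))  # -> ['meeting-notes-bot']
```

Every app that survives this filter deserves the question from the checklist above: does the vendor actually need that scope, and what happens the day they're breached?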

The Broader Lesson: AI Tools Are Now Supply-Chain Dependencies

Every time your team authorizes a new AI product against a shared workspace, you are adding a supply-chain edge. That edge has all the same failure modes as any other vendor relationship—credential theft, insider risk, social engineering, malware—but it usually comes with broader scopes and less scrutiny, because AI tools demand access to content to be useful at all.

Context.ai didn't fail spectacularly. It failed in exactly the way any SaaS vendor fails: an upstream machine got infected, credentials moved, an OAuth app became an entry point. The surprise isn't that it happened—it's how many downstream perimeters expanded without anyone noticing.

The work for platform and security teams in 2026 is mundane and unglamorous: inventory OAuth grants, narrow scopes, require MFA on everything, and assume that every AI vendor you trust today will eventually have a bad day.

After the Rotation Comes the Silent Breakage

There is a second-order risk from this incident that doesn't show up in any bulletin: when a team rotates dozens of secrets under pressure, something usually breaks silently. A worker picks up the old key and starts 500-ing. A webhook endpoint that was verified with the old signing secret starts rejecting legitimate traffic. A build succeeds because the env var loads, but the application can no longer reach its database.

The failure isn't the rotation. It's the delta between "rotated" and "verified in production."
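A minimal post-rotation smoke check can be as small as a loop over your critical endpoints. In the sketch below the HTTP client is injected so the check stays testable; the endpoints and the simulated 401 (a webhook verifier still holding the old signing secret) are hypothetical:

```python
def smoke_check(endpoints, fetch):
    """Hit each endpoint right after a secret rotation and collect failures.

    `fetch` is injected (e.g. a thin wrapper around urllib or httpx) and
    should return an HTTP status code for a given URL.
    """
    failures = []
    for name, url in endpoints.items():
        try:
            status = fetch(url)
        except Exception as exc:  # DNS failure, TLS error, timeout...
            failures.append((name, f"error: {exc}"))
            continue
        if status >= 400:
            failures.append((name, f"status {status}"))
    return failures

# Simulated post-rotation run: the webhook endpoint rejects traffic with 401s
# because it still verifies signatures with the old secret.
fake_responses = {
    "https://app.example.com/health": 200,
    "https://app.example.com/webhooks/stripe": 401,
}
failures = smoke_check(
    {"health": "https://app.example.com/health",
     "stripe-webhook": "https://app.example.com/webhooks/stripe"},
    fetch=fake_responses.__getitem__,
)
print(failures)  # -> [('stripe-webhook', 'status 401')]
```

The point of running this right after rotation is the delta described above: a green build tells you the variables loaded, not that the services behind them still answer.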


At AutoSmoke, we build automated smoke tests that run against your production deployments—including the moments right after you've rotated secrets under incident pressure. If the Vercel breach triggered a mass rotation on your team, the next thing worth confirming is that your app still actually works. Learn how automated smoke testing fits into your incident response workflow.