
After TanStack: A Practical Defense Against the Next npm Supply Chain Attack

By AutoSmoke Team

A recurring story has been working its way through engineering Slack channels for the better part of a year: a popular JavaScript package compromised, a malicious version briefly live on npm, developer machines and CI runners pillaged before the package is yanked. TanStack — the family of widely-used React libraries behind Query, Router, Table, and Form — became the latest name on that list. By the time the bad versions were pulled, every CI job that had run an unpinned npm install in the affected window had already executed attacker code.

It was not the first time. It will not be the last.

A close-up of a chain of metal links rendered in cool blue light, evoking a software supply chain

The pattern that keeps working

The specifics differ between incidents. The shape does not.

  • September 2025 — Shai-Hulud. A self-replicating worm published trojanized versions of dozens of npm packages, hopping from one maintainer's stolen token to the next. It infected hundreds of packages before npm staff caught up.
  • August 2025 — Nx CLI. Malicious versions of nx shipped a postinstall script that scanned for cloud credentials and SSH keys and exfiltrated them. The bad versions were live for hours; the rotation work for affected teams ran for weeks.
  • The maintainer-phishing wave. Through the second half of 2025, attackers sent convincing "your npm account is suspended" emails to maintainers of high-traffic packages. A surprising number of tokens were given up. Each one became a publishing key.
  • TanStack. The same shape. A maintainer compromise, a malicious version, an install-time payload, a window measured in hours.

What unifies these is not the cleverness of the payload. It is how mundane the attack surface is.

A maintainer's npm token gets phished, exfiltrated by malware on their laptop, or lifted from a leaked CI log. A malicious version goes up. Every project with a floating range ("^1.2.3", "~2.0.0", or a fresh npm install in a repo whose lockfile has been deleted or drifted) pulls it. A postinstall or prepare script runs arbitrary code as the current user. It reads ~/.aws/credentials, ~/.npmrc, ~/.docker/config.json, environment variables, GitHub tokens, anything readable. It posts to a URL. It exits cleanly. The build succeeds.
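The mechanism is worth seeing because it is so small. A hypothetical trojanized release needs nothing beyond a scripts entry; npm runs it automatically at install time (the package and script names below are invented for illustration):

```json
{
  "name": "popular-library",
  "version": "4.36.1",
  "scripts": {
    "postinstall": "node ./scripts/telemetry.js"
  }
}
```

Nothing in that stanza distinguishes a legitimate build step from a credential harvester. The script is named whatever the attacker wants it to be named.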

By the time the registry pulls the version, the secrets are already gone. What follows is rotation, audit, and the awkward conversation about why a CI runner had production credentials in the first place.

Why npm is the soft target

The Node ecosystem has three structural traits that make this kind of attack disproportionately effective.

Install-time code execution is the default. A postinstall script in any transitive dependency runs whenever you npm install. Most teams don't audit their own direct dependencies' install scripts, let alone the install scripts of the 1,400 packages in their lockfile. npm added --ignore-scripts, but almost nobody uses it because almost everyone has at least one dependency (native bindings, Husky, Playwright browsers) that genuinely needs it.
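If you want to see this surface in your own tree, recent npm versions (8.16 and later) ship npm query, which selects packages by attribute. For example, every dependency that declares a postinstall script:

```sh
# List every package in the installed tree with a postinstall script.
# jq is used only to trim the JSON output down to package names.
npm query ':attr(scripts, [postinstall])' | jq -r '.[].name' | sort -u
```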

Floating versions are the default. npm init produces a package.json with ^ ranges, and npm install <pkg> adds them. The lockfile pins the resolved version only until something regenerates it: a CI job that runs npm install instead of npm ci, or a repo where package.json and the lockfile have drifted, can quietly upgrade across patch versions.

Maintainer tokens publish anything. An npm token authorized for @scope/* can publish any package in that scope, at any version, at any time. There is no second signer, no protected version line. The maintainer's laptop is the perimeter.

That last point is the one most teams underestimate. When you depend on a package, you are extending trust to every device, browser session, and credential store its maintainer has ever logged into. The 2025–2026 attacks are the natural outcome of an ecosystem where that trust is implicit and untested.

The playbook

There is no single control that prevents this. There is a stack of mundane ones that, applied together, narrow the blast radius to something survivable. None of them are exotic. Most teams apply only two or three.

1. Use npm ci, never npm install, in CI

npm ci installs exactly what is in the lockfile and fails if package.json and the lockfile disagree. npm install resolves ranges fresh and can quietly pull a new version. The difference is whether a compromise that happens between yesterday's commit and today's build can reach your runner. Pin to the lockfile or do not pin at all.
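In GitHub Actions terms, the fix is one line. A minimal sketch (the workflow layout is illustrative):

```yaml
# .github/workflows/ci.yml
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci     # installs exactly the lockfile; fails on drift
      - run: npm test
```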

2. Turn off install scripts where you can, and isolate where you can't

npm ci --ignore-scripts is the single highest-leverage control. The teams that use it survived the recent waves with no rotation work because the malicious code never ran. The cost is real: anything with a genuine install step (Husky, Playwright, native modules) needs to be allowlisted explicitly. Tools like @lavamoat/allow-scripts and pnpm's onlyBuiltDependencies field formalize this: you maintain a short list of packages permitted to run scripts, and everything else is silenced.
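A minimal sketch of the pattern, assuming npm with a committed .npmrc plus pnpm's allowlist field (the allowlisted names below are examples, not a recommendation):

```ini
# .npmrc, committed at the repo root: no dependency runs install scripts
ignore-scripts=true
```

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "husky"]
  }
}
```

@lavamoat/allow-scripts gives npm and Yarn users the same shape via a lavamoat.allowScripts map in package.json.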

If you can't turn scripts off, run installs in an isolated environment that has nothing worth stealing: no cloud credentials, no production tokens, no SSH keys, no long-lived OAuth sessions. A throwaway container with read-only access to the registry is dramatically less interesting to a postinstall payload than a developer's laptop.
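One way to get that isolation with stock Docker (the image tag is illustrative):

```sh
# Run the install in a throwaway container: no host credentials,
# no SSH agent, nothing mounted except the project directory itself.
docker run --rm \
  -v "$PWD":/app -w /app \
  node:20-slim \
  npm ci --ignore-scripts
```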

3. Pin your direct dependencies, and audit before you upgrade

Floating ranges in package.json were a 2010s convenience. In 2026 they are a foot-gun. Pin to exact versions ("1.2.3", no caret) for everything you list as a direct dependency. Use Renovate or Dependabot to propose upgrades as PRs — and treat the PR as a real review, with a glance at the changelog, the release author, and the publish date. A version published 30 minutes ago by a maintainer who hasn't published in six months is a signal worth pausing on.
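npm will do the pinning for you at add-time if you ask. One line in a committed .npmrc:

```ini
# .npmrc: `npm install <pkg>` saves "1.2.3" instead of "^1.2.3"
save-exact=true
```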

4. Require npm 2FA and provenance, then verify it

Enable mandatory 2FA for publish on your own packages (npm access set mfa=publish) and require it in any npm org policies you control. For dependencies you consume, prefer packages that ship with npm provenance attestations: cryptographic proof that the version on the registry came from a specific GitHub Actions workflow on a specific commit. Provenance is opt-in for publishers, and adoption is uneven, but for the packages that have it (@vercel/*, @sentry/*, react, and a growing list), it converts "trust the maintainer's laptop" into "trust the published workflow."
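On the publishing side, provenance is a workflow permission plus a flag. A sketch, assuming GitHub Actions and npm 9.5 or later (the release trigger is one common choice):

```yaml
# .github/workflows/publish.yml
on:
  release:
    types: [published]
permissions:
  contents: read
  id-token: write    # required to generate the provenance attestation
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```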

You can enforce this on consumption with npm audit signatures in CI. It fails the build if any installed package has a missing or invalid registry signature, and it verifies provenance attestations for the packages that publish them.
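On the consuming side, the check is one CI step:

```sh
# Fails on any missing or invalid registry signature; also verifies
# provenance attestations for the packages that publish them.
npm ci --ignore-scripts
npm audit signatures
```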

5. Scope CI tokens like they will leak, because they will

A CI job that runs untrusted dependency code has the privileges of whatever environment variables and OIDC tokens you handed it. The defense is not "make sure no dependency is malicious." It is "assume one will be, and limit what it can reach."

  • Use GitHub Actions OIDC to mint short-lived cloud credentials per job, not long-lived AWS_ACCESS_KEY_ID secrets in the repo (see the sketch after this list).
  • Scope npm publish tokens to a single package or scope, with 2FA, with an expiry.
  • Separate the job that runs tests from the job that publishes. The test job sees source code; the publish job sees the token. They should not be the same shell session.
  • Treat GITHUB_TOKEN as a credential too. On newer repos, permissions: blocks default to contents: read; let them, and grant write scopes only where a job genuinely needs them.
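A sketch of the first and last bullets together, assuming AWS and a role you have created with a trust policy scoped to this repository (the ARN below is a placeholder):

```yaml
# Short-lived AWS credentials via OIDC; GITHUB_TOKEN kept read-only.
on: [push]
permissions:
  id-token: write    # lets the job request an OIDC token from GitHub
  contents: read     # GITHUB_TOKEN can read the repo, nothing else
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
          aws-region: us-east-1
```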

6. Watch for the IOCs after a compromise

When a compromise is disclosed, the public advisories usually include indicators of compromise (IOCs): the malicious package versions, the exfiltration domains, and sometimes hashes of the dropped binaries. Pipe those into whatever endpoint or DNS monitoring you have and search backward. If a developer laptop or runner reached one of those domains, the rotation list is whatever was reachable from that machine, not whatever the advisory says was "exposed."
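A rough sketch, assuming you keep DNS query logs somewhere greppable (the paths below are hypothetical):

```sh
# iocs.txt: one exfiltration domain per line, copied from the advisory.
# Any hit means every credential reachable from that host is burned.
zgrep -F -f iocs.txt /var/log/dns/query-*.log.gz
```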

7. Lock down the developer laptop, not just the cloud

The Shai-Hulud and Lumma Stealer-class attacks succeed because they harvest from local disk: .npmrc, .aws/credentials, browser cookie stores, SSH keys, password manager exports. Hardware-backed credential storage (1Password CLI with biometric unlock, gh auth with device flow, AWS SSO with short-lived sessions) makes a malicious postinstall measurably less productive. The goal is that an install-time payload running as your user finds nothing worth taking.
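The 1Password CLI's secret references are one concrete version of this (the vault and item names below are invented). The plaintext value never lands on disk, and resolving it requires an unlock:

```sh
# .env.tpl holds references, not values, e.g.:
#   AWS_SECRET_ACCESS_KEY="op://Engineering/aws-ci/secret-access-key"
# `op run` resolves them in memory, gated by biometric unlock.
op run --env-file=.env.tpl -- npm test
```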

The thing your defenses won't catch

Even with all of the above, something will get through eventually. The interesting question is what you do in the 30 minutes after an advisory drops.

You rotate. Loudly, urgently, under time pressure, often across dozens of services. And then — this is the part everyone forgets — you have to confirm that the application still works. A worker that picks up a stale key and starts 500-ing isn't visible in the rotation PR. A webhook verified with the old signing secret silently drops legitimate traffic. A build succeeds because the env var loads, but the integration it gated is broken.

The failure isn't the rotation. It's the delta between "rotated" and "verified in production." That gap is where supply chain incidents quietly become customer incidents.

The summary worth keeping

Supply chain attacks don't get prevented by a single brilliant tool. They get prevented by a stack of small, boring controls — npm ci, ignored scripts, pinned versions, provenance checks, scoped tokens, isolated runners — applied consistently before the advisory hits your inbox. The teams that survived TanStack, Nx, and Shai-Hulud without rotation marathons are not the ones with the best detection. They are the ones whose runners had nothing worth stealing in the first place.

The next compromise is already published. The question is whether your build was set up assuming it.


At AutoSmoke, we run agentic smoke tests against your production deployment after every change — including the chaotic minutes after a forced credential rotation. If a supply chain advisory has your team rotating secrets right now, the next thing worth confirming is that your critical user flows still actually work. Get started free.