Use case

AI writes your code.
Who's checking if it still works?

Cursor, Copilot, Claude Code — they ship diffs. They don't open a browser and click through signup. AutoSmoke does, after every vibe-coded change, and tells you what broke before your users do.

Run a Free Smoke Test

The pain

What breaks

AI diffs land before anyone clicks through

A one-shot refactor touches 12 files. The diff looks right, CI passes, you merge. Nobody opened the browser. The login page has been 404'ing for six hours.

You move faster than you can hand-test

You used to ship twice a week and click through the app. Now you ship twice an hour. Manual smoke testing is the bottleneck between you and the next prompt.

Writing Playwright defeats the point of AI coding

You picked AI coding to skip the boilerplate. Writing selector-based E2E tests — and fixing them when they drift — is the exact kind of grunt work you're trying to escape.
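For a sense of scale, here is roughly what one signup check costs in selector-based Playwright. This is an illustrative sketch only: the API calls are real Playwright, but the URL, selectors, and credentials are made-up placeholders, and every one of them is a breakage point when an AI refactor renames the DOM.

```ts
import { test, expect } from '@playwright/test';

test('signup redirects to dashboard', async ({ page }) => {
  await page.goto('https://example.com');            // hypothetical app URL
  await page.click('[data-testid="signup-button"]'); // breaks if the AI renames this attribute
  await page.fill('#email', 'test@example.com');     // hypothetical selectors
  await page.fill('#password', 'correct-horse');
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/\/dashboard/);       // drifts if the route changes
});
```

Ten-plus lines of selectors to maintain, per flow, forever.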

Example tests

What you'd test

Post-refactor smoke test

  1. Navigate to the home page
  2. Click 'Sign up'
  3. Fill in email and password
  4. Verify redirect to /dashboard

Core user journey

  1. Log in with test credentials
  2. Create a new project
  3. Verify the project appears in the list
  4. Open the project settings
  5. Verify settings page loads without errors

Payment flow after schema changes

  1. Navigate to /pricing
  2. Click 'Upgrade to Pro'
  3. Complete Stripe checkout with a test card
  4. Verify success page shows plan name

What you get

Included with every test

Runs on every push via GitHub Actions — no extra prompts
Write tests the same way you write prompts: plain English
No selectors to drift when your AI refactors the DOM
Full video replay shows exactly what the AI broke
Free to start — 20 runs/month, no credit card
Pairs with Cursor and Claude Code workflows out of the box
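As a rough sketch, wiring this into CI could look like the workflow below. The checkout step is the standard GitHub action; the AutoSmoke action name, inputs, and secret are hypothetical placeholders, not the real integration.

```yaml
# .github/workflows/smoke.yml — hypothetical example; action name and inputs are placeholders
name: smoke-test
on: [push]

jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical action and inputs, for illustration only
      - uses: autosmoke/run@v1
        with:
          api-key: ${{ secrets.AUTOSMOKE_API_KEY }}
          tests: |
            Navigate to the home page
            Click 'Sign up'
            Fill in email and password
            Verify redirect to /dashboard
```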

Your app changes daily.
Your guardrails should too.

Run a free smoke test now — and stop finding out from users.

Run a Free Smoke Test