Kit includes three GitHub Actions workflows that automate the entire path from code change to production deployment. Every push runs type checking, linting, unit tests, and E2E smoke tests. Merges to
master trigger a full deployment to Vercel.

Workflow Overview
| Workflow | File | Trigger | Purpose |
|---|---|---|---|
| Deploy to Production | deploy.yml | Push to master, manual | Full test suite + Vercel deployment |
| Test | test.yml | PRs to master, push to develop | Quality gate for pull requests |
| Full E2E Suite | e2e-full.yml | Daily at 2 AM UTC, manual | Comprehensive nightly regression testing |
All three workflows use Turborepo remote caching to speed up builds across runs. Cache hits can reduce build times by 50-70%.
Production Deployment Workflow
The `deploy.yml` workflow is your production pipeline. It runs on every push to master and can also be triggered manually from the GitHub Actions tab.

Pipeline Structure
The workflow has two sequential jobs:
```
test (Job 1)                       deploy (Job 2)
├── Install dependencies           ├── Install dependencies
├── Validate customer files        ├── Check required secrets
├── Setup test database            ├── Pull Vercel environment
├── Generate Prisma client         ├── Build with Vercel CLI
├── Type checking                  ├── Deploy to production
├── Linting                        ├── Comment on commit
├── Unit tests                     └── Create deployment status
├── Build application
└── E2E smoke tests
```
The deploy job only runs if all tests pass. This ensures broken code never reaches production.
Test Job
The test job runs against a real PostgreSQL database with the pgvector extension (matching your production Supabase setup):
deploy.yml — Test Job with PostgreSQL Service
```yaml
test:
  name: Run Tests
  runs-on: ubuntu-latest
  services:
    postgres:
      image: pgvector/pgvector:pg15
      env:
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: postgres
        POSTGRES_DB: testdb
      options: >-
        --health-cmd pg_isready
        --health-interval 10s
        --health-timeout 5s
        --health-retries 5
      ports:
        - 5432:5432
  steps:
    - uses: actions/checkout@v4
    - uses: pnpm/action-setup@v3
      with:
        version: 10.15.1
```
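The same database that CI provisions can be started locally with Docker when debugging test failures. This is a sketch assuming Docker is installed; the container name `kit-testdb` is arbitrary:

```shell
# Start the same pgvector container the CI test job uses as a service
docker run -d --name kit-testdb \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=testdb \
  -p 5432:5432 \
  pgvector/pgvector:pg15

# Same readiness check the workflow's --health-cmd performs
docker exec kit-testdb pg_isready -U postgres
```

Point `DATABASE_URL` at `postgresql://postgres:postgres@localhost:5432/testdb` to run the same tests CI runs.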
The PostgreSQL service container uses `pgvector/pgvector:pg15`, which provides the same vector extension you use in production for RAG features. This ensures database queries involving vector operations are tested against real PostgreSQL behavior.

Customer File Validation
Before running tests, the workflow validates that auto-generated customer files are up-to-date:
deploy.yml — Customer File Validation (env.test)
```yaml
- name: Validate customer .env.test is up-to-date
  run: |
    echo "Validating customer .env.test generation..."
    # Check if generated file exists
    if [ ! -f "apps/boilerplate/.env.test" ]; then
      echo "❌ Error: apps/boilerplate/.env.test does not exist"
      echo "This file should be generated and committed to git"
      echo "Run: cd apps/boilerplate && pnpm generate:env-test"
      exit 1
    fi
    # Compare files (must be identical - no transformations)
    if ! diff -q .env.test apps/boilerplate/.env.test > /dev/null 2>&1; then
      echo "❌ Error: apps/boilerplate/.env.test is outdated"
      echo ""
      echo "The root .env.test was updated but the generated customer version wasn't."
      echo "Please run: cd apps/boilerplate && pnpm generate:env-test"
      echo "Then commit the updated apps/boilerplate/.env.test"
      echo ""
      echo "Differences:"
      diff .env.test apps/boilerplate/.env.test || true
      exit 1
    fi
    echo "✅ Customer .env.test is up-to-date"
```
The validation checks four customer-critical file categories:
| Validation | What It Checks |
|---|---|
| `.env.test` | Test environment template matches root |
| `CLAUDE.md` | Generated customer CLAUDE.md is current |
| `.gitignore` | Monorepo patterns removed in standalone version |
| `docs/` | Generated docs have no monorepo path references |
If any validation fails, the workflow exits with a clear error message explaining which file needs regeneration and the exact command to run.
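The same file comparison can be run locally before pushing. This is a sketch that mirrors the workflow's diff logic; the `check_generated` helper is ours, not part of the repo:

```shell
# Returns 0 if the generated file exists and matches its source, 1 otherwise
check_generated() {
  src="$1"; generated="$2"
  if [ ! -f "$generated" ]; then
    echo "missing: $generated"
    return 1
  fi
  if ! diff -q "$src" "$generated" > /dev/null 2>&1; then
    echo "outdated: $generated"
    return 1
  fi
  echo "up-to-date: $generated"
}

# Usage from the repo root:
#   check_generated .env.test apps/boilerplate/.env.test
```

If the check fails, regenerate with `cd apps/boilerplate && pnpm generate:env-test` and commit the result, exactly as the CI error message instructs.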
Deploy Job
After tests pass, the deploy job builds and ships to Vercel:
deploy.yml — Vercel Build and Deploy
```yaml
deploy:
  name: Deploy to Vercel
  needs: test
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Check Required Secrets
      run: |
        echo "Checking for required secrets..."
        if [ -z "${{ secrets.VERCEL_TOKEN }}" ]; then
          echo "❌ Error: VERCEL_TOKEN secret is not set"
          echo "Please add VERCEL_TOKEN to your GitHub repository secrets"
          echo "See docs/DEPLOYMENT_SETUP.md for instructions"
          exit 1
        fi
        if [ -z "${{ secrets.VERCEL_ORG_ID }}" ]; then
          echo "❌ Error: VERCEL_ORG_ID secret is not set"
          echo "Please add VERCEL_ORG_ID to your GitHub repository secrets"
          echo "Required value: team_Bgh9zp308TcrlrYLqGmuGLHN"
          exit 1
        fi
        if [ -z "${{ secrets.VERCEL_PROJECT_ID }}" ]; then
          echo "❌ Error: VERCEL_PROJECT_ID secret is not set"
          echo "Please add VERCEL_PROJECT_ID to your GitHub repository secrets"
          echo "Required value: prj_jf8vKRszL6Xt7IF8EwqxzfnwRXsl"
          exit 1
        fi
        echo "✅ All required secrets are configured"
    - uses: pnpm/action-setup@v3
      with:
        version: 10.15.1
    - uses: actions/setup-node@v4
      with:
        node-version: '20'
        cache: 'pnpm'
    - name: Install dependencies
      run: pnpm install --frozen-lockfile
    - name: Install Vercel CLI
      run: pnpm add -g vercel@latest
    - name: Pull Vercel Environment Information
      run: |
        echo "Pulling Vercel environment configuration..."
        vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}
        echo "✅ Successfully pulled Vercel configuration"
    - name: Build Project Artifacts
      run: |
        echo "Building boilerplate app for production..."
        vercel build --prod --token=${{ secrets.VERCEL_TOKEN }}
        echo "✅ Build completed successfully"
    - name: Deploy to Vercel
      id: deploy
      run: |
        echo "Deploying to Vercel production..."
        # --archive=tgz compresses files before upload to avoid rate limits (>5000 files)
        DEPLOYMENT_URL=$(vercel deploy --prebuilt --prod --archive=tgz --token=${{ secrets.VERCEL_TOKEN }})
        echo "deployment-url=$DEPLOYMENT_URL" >> $GITHUB_OUTPUT
        echo "✅ Successfully deployed to: $DEPLOYMENT_URL"
```
Key implementation details:
- `vercel pull` downloads the environment configuration from Vercel, ensuring the build uses the same variables as production
- `vercel build --prod` builds the application locally in CI (faster than building on Vercel's servers)
- `--archive=tgz` compresses the build output before uploading. This is critical for monorepo projects with 5,000+ files; without it, Vercel's API rate limits can cause upload failures
- Commit comments provide direct links to the deployment URL for quick verification
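When debugging a failed deploy, the same three CLI steps can be reproduced from your machine. This is a sketch using the exact flags from the workflow; it assumes the Vercel CLI is installed and a valid `VERCEL_TOKEN` is exported:

```shell
# Reproduce the deploy job's Vercel steps locally
vercel pull --yes --environment=production --token="$VERCEL_TOKEN"
vercel build --prod --token="$VERCEL_TOKEN"
vercel deploy --prebuilt --prod --archive=tgz --token="$VERCEL_TOKEN"
```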
Required GitHub Secrets
Configure these secrets in your GitHub repository under Settings > Secrets and variables > Actions:
| Secret | Purpose | Where to Find |
|---|---|---|
| `VERCEL_TOKEN` | Authenticates Vercel CLI | Vercel Dashboard > Settings > Tokens |
| `VERCEL_ORG_ID` | Identifies your Vercel team | Vercel Dashboard > Settings > General |
| `VERCEL_PROJECT_ID` | Identifies your Vercel project | Vercel Dashboard > Project Settings > General |
| `TURBO_TOKEN` | Turborepo remote cache auth | vercel.com/account/tokens |
| `TURBO_TEAM` | Turborepo team identifier | Your Vercel team slug |
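These can also be set from the command line with the GitHub CLI. A sketch, assuming `gh` is authenticated against the target repository (the placeholder values are yours to fill in):

```shell
# Set each required secret on the repository
gh secret set VERCEL_TOKEN --body "your-vercel-token"
gh secret set VERCEL_ORG_ID --body "your-org-id"
gh secret set VERCEL_PROJECT_ID --body "your-project-id"
gh secret set TURBO_TOKEN --body "your-turbo-token"
gh secret set TURBO_TEAM --body "your-team-slug"
```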
The deploy job checks for `VERCEL_TOKEN`, `VERCEL_ORG_ID`, and `VERCEL_PROJECT_ID` before attempting deployment. Missing any of these causes an explicit failure with instructions on how to set them.

PR Validation Workflow
The `test.yml` workflow runs on every pull request targeting master and on pushes to develop. It acts as a quality gate: PRs cannot merge until all checks pass.

PR Pipeline Structure
```
test (Single Job)
├── Install dependencies
├── Setup PostgreSQL + pgvector
├── Generate Prisma client
├── Push database schema
├── Type checking
├── Linting
├── Unit tests
├── Validate customer files
├── E2E smoke tests
└── Upload test artifacts
```
Test Artifact Upload
After every run, the workflow uploads test results with 30-day retention:
test.yml — Artifact Upload
```yaml
- name: Upload test results
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: test-results
    path: |
      coverage/
      playwright-report/
    retention-days: 30
```
Artifacts include:
- `coverage/`: Code coverage reports from Vitest (HTML, JSON, text)
- `playwright-report/`: E2E test results with screenshots and traces
Download these from the Actions > Artifacts section in GitHub when debugging failures.
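Artifacts can also be pulled from the command line with the GitHub CLI. A sketch; run IDs can be passed explicitly instead of querying for the latest one:

```shell
# Find the most recent test.yml run and download its test-results artifact
run_id=$(gh run list --workflow=test.yml --limit 1 --json databaseId --jq '.[0].databaseId')
gh run download "$run_id" --name test-results
```

The Playwright HTML report can then be opened locally with `npx playwright show-report playwright-report`.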
Nightly E2E Suite
The `e2e-full.yml` workflow runs the complete E2E test suite daily at 2:00 AM UTC. This catches regressions that smoke tests might miss.

What It Runs
e2e-full.yml — Schedule and Trigger
```yaml
name: Full E2E Test Suite
on:
  # Run daily at 2:00 AM UTC
  schedule:
    - cron: '0 2 * * *'
  # Allow manual trigger
  workflow_dispatch:
```
| Aspect | Value |
|---|---|
| Schedule | Daily at 2:00 AM UTC |
| Tests | Full suite on Chromium |
| Timeout | 30 minutes |
| Quality gates | Type check + lint + unit tests + full E2E |
Automatic Issue Creation
When the nightly suite fails, the workflow automatically creates a GitHub issue with the failure details:
e2e-full.yml — Auto-Create Issue on Failure
```yaml
- name: Create issue on failure
  if: failure()
  uses: actions/github-script@v7
  with:
    script: |
      const title = '🔴 Full E2E Test Suite Failed';
      const body = `
      The scheduled full E2E test suite has failed.
      **Workflow Run:** ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
      **Triggered by:** ${{ github.event_name }}
      **Branch:** ${{ github.ref_name }}
      **Commit:** ${{ github.sha }}
      Please review the test results and fix the failing tests.
      [View Playwright Report](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }})
      `;
      // Check if there's already an open issue
      const issues = await github.rest.issues.listForRepo({
        owner: context.repo.owner,
        repo: context.repo.repo,
        state: 'open',
        labels: 'e2e-test-failure'
      });
      if (issues.data.length === 0) {
        await github.rest.issues.create({
          owner: context.repo.owner,
          repo: context.repo.repo,
          title: title,
          body: body,
          labels: ['e2e-test-failure', 'automated']
        });
      }
```
The issue includes:
- Direct link to the failed workflow run
- Branch and commit SHA for debugging
- Link to the Playwright report artifact
- Labels (`e2e-test-failure`, `automated`) for filtering
Only one issue is created at a time: if an open issue with the `e2e-test-failure` label already exists, the workflow skips creation to avoid duplicates.

You can run the full E2E suite on demand from the Actions tab using the Run workflow button. This is useful before major releases or after significant refactoring.
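The same on-demand trigger is available from the command line via the GitHub CLI. A sketch; the freshly dispatched run may take a few seconds to appear in the run list:

```shell
# Dispatch the nightly suite manually, then follow its progress
gh workflow run e2e-full.yml
sleep 5
gh run watch "$(gh run list --workflow=e2e-full.yml --limit 1 --json databaseId --jq '.[0].databaseId')"
```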
Quality Gates Summary
Every code path must pass these gates before reaching production:
| Gate | Workflow | Blocks |
|---|---|---|
| TypeScript compilation | deploy.yml, test.yml | Type errors |
| ESLint | deploy.yml, test.yml | Code quality violations |
| Unit tests (800+) | deploy.yml, test.yml | Logic regressions |
| E2E smoke tests (19) | deploy.yml, test.yml | Critical flow breakage |
| Customer file validation | deploy.yml, test.yml | Out-of-date generated files |
| Full E2E suite (186) | e2e-full.yml | Comprehensive regressions |
Turborepo Remote Caching
All workflows use Turborepo remote caching to share build artifacts across CI runs. This is configured via two environment variables:
```yaml
env:
  TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
  TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
```
When a previous run already built the same code, Turborepo downloads the cached result instead of rebuilding. This provides the biggest savings for:
- Type checking — TypeScript compilation results are cached
- Linting — ESLint results are cached
- Building — Next.js build output is cached
Typical cache hit rates are 60-80% for PR validation workflows. Production deploys usually rebuild everything since they run after merges bring in new code.
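To see how the cache would behave without actually running anything, Turborepo's dry-run mode lists each task with its hash and cached state. A sketch; with `TURBO_TOKEN` and `TURBO_TEAM` exported, this also reflects the remote cache:

```shell
# Show which tasks would be replayed from cache rather than executed
pnpm turbo run build --dry-run
```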
Adding Custom Workflow Steps
To add your own steps to the pipeline, edit the workflow files in `.github/workflows/`. Common additions:

Add a Lighthouse audit
```yaml
- name: Run Lighthouse CI
  uses: treosh/lighthouse-ci-action@v11
  with:
    urls: |
      ${{ steps.deploy.outputs.deployment-url }}
    budgetPath: ./lighthouse-budget.json
    uploadArtifacts: true
```
Add database migration
```yaml
- name: Run database migrations
  run: pnpm --filter=@nextsaas/boilerplate prisma migrate deploy
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
```
Add Slack notification
```yaml
- name: Notify Slack
  if: success()
  uses: slackapi/slack-github-action@v1
  with:
    payload: |
      {"text": "Deployed to production: ${{ steps.deploy.outputs.deployment-url }}"}
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```
Troubleshooting
Tests pass locally but fail in CI
The most common causes are:
- Missing environment variables — CI uses `apps/boilerplate/.env.test` while local runs use `apps/boilerplate/.env.local`. Ensure all variables needed for tests are in `.env.test`
- Timing issues — CI runners are slower than local machines. If tests use hardcoded timeouts, increase them or use Playwright's built-in `waitFor` patterns
- Database state — CI pushes a fresh schema with `prisma db push`. Ensure your tests don't depend on seed data that only exists locally
Cache-related build failures
If a build fails with unexpected errors after a dependency update:
```yaml
# In your workflow, add before install:
- name: Clear Turborepo cache
  run: pnpm turbo run build --force
```
This bypasses the remote cache for a single run. If the build succeeds, the old cache was stale.
Deployment succeeds but site shows errors
- Check that all environment variables are set in Vercel's Production scope
- Verify database migrations have been applied: `prisma migrate deploy`
- Check Vercel Functions logs for runtime errors
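Migration state can be inspected without changing anything, since `prisma migrate status` is read-only. A sketch; `PRODUCTION_DATABASE_URL` is a placeholder for wherever you keep the production connection string:

```shell
# Report applied vs. pending migrations against the production database
DATABASE_URL="$PRODUCTION_DATABASE_URL" \
  pnpm --filter=@nextsaas/boilerplate prisma migrate status
```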