CI/CD Pipeline

GitHub Actions workflows for automated testing, quality gates, and production deployment

Kit includes three GitHub Actions workflows that automate the entire path from code change to production deployment. Every push runs type checking, linting, unit tests, and E2E smoke tests. Merges to master trigger a full deployment to Vercel.

Workflow Overview

| Workflow | File | Trigger | Purpose |
| --- | --- | --- | --- |
| Deploy to Production | deploy.yml | Push to master, manual | Full test suite + Vercel deployment |
| Test | test.yml | PRs to master, push to develop | Quality gate for pull requests |
| Full E2E Suite | e2e-full.yml | Daily at 2 AM UTC, manual | Comprehensive nightly regression testing |

All three workflows use Turborepo remote caching to speed up builds across runs. Cache hits can reduce build times by 50-70%.

Production Deployment Workflow

The deploy.yml workflow is your production pipeline. It runs on every push to master and can also be triggered manually from the GitHub Actions tab.
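
The trigger block for this behavior would look roughly like the following (a sketch based on the description above; the actual file may differ in detail):

```yaml
on:
  push:
    branches: [master]   # every push to master runs the full pipeline
  workflow_dispatch:      # manual "Run workflow" button in the Actions tab
```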

Pipeline Structure

The workflow has two sequential jobs:
```text
test (Job 1)                      deploy (Job 2)
  ├── Install dependencies          ├── Install dependencies
  ├── Validate customer files       ├── Check required secrets
  ├── Setup test database           ├── Pull Vercel environment
  ├── Generate Prisma client        ├── Build with Vercel CLI
  ├── Type checking                 ├── Deploy to production
  ├── Linting                       ├── Comment on commit
  ├── Unit tests                    └── Create deployment status
  ├── Build application
  └── E2E smoke tests
```
The deploy job only runs if all tests pass. This ensures broken code never reaches production.
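
Condensed, this ordering is expressed with a `needs` dependency between the two jobs, roughly:

```yaml
jobs:
  test:
    name: Run Tests
    runs-on: ubuntu-latest
    # ... quality-gate steps ...
  deploy:
    name: Deploy to Vercel
    needs: test          # skipped entirely unless the test job succeeds
    runs-on: ubuntu-latest
    # ... deployment steps ...
```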

Test Job

The test job runs against a real PostgreSQL database with the pgvector extension (matching your production Supabase setup):
deploy.yml — Test Job with PostgreSQL Service

```yaml
test:
    name: Run Tests
    runs-on: ubuntu-latest

    services:
      postgres:
        image: pgvector/pgvector:pg15
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: testdb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v3
        with:
          version: 10.15.1
```
The PostgreSQL service container uses pgvector/pgvector:pg15, which provides the same vector extension you use in production for RAG features. This ensures database queries involving vector operations are tested against real PostgreSQL behavior.
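
Tests reach the service container through a local connection string. The "Push database schema" step listed in the pipeline would look along these lines (a sketch; the exact step wording and flags in the workflow may differ):

```yaml
- name: Push database schema
  run: pnpm prisma db push --skip-generate
  env:
    # Credentials match the postgres service container defined above
    DATABASE_URL: postgresql://postgres:postgres@localhost:5432/testdb
```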

Customer File Validation

Before running tests, the workflow validates that auto-generated customer files are up-to-date:
deploy.yml — Customer File Validation (.env.test)

```yaml
- name: Validate customer .env.test is up-to-date
  run: |
    echo "Validating customer .env.test generation..."

    # Check if generated file exists
    if [ ! -f "apps/boilerplate/.env.test" ]; then
      echo "❌ Error: apps/boilerplate/.env.test does not exist"
      echo "This file should be generated and committed to git"
      echo "Run: cd apps/boilerplate && pnpm generate:env-test"
      exit 1
    fi

    # Compare files (must be identical - no transformations)
    if ! diff -q .env.test apps/boilerplate/.env.test > /dev/null 2>&1; then
      echo "❌ Error: apps/boilerplate/.env.test is outdated"
      echo ""
      echo "The root .env.test was updated but the generated customer version wasn't."
      echo "Please run: cd apps/boilerplate && pnpm generate:env-test"
      echo "Then commit the updated apps/boilerplate/.env.test"
      echo ""
      echo "Differences:"
      diff .env.test apps/boilerplate/.env.test || true
      exit 1
    fi

    echo "✅ Customer .env.test is up-to-date"
```
The validation checks four customer-critical file categories:
| Validation | What It Checks |
| --- | --- |
| .env.test | Test environment template matches root |
| CLAUDE.md | Generated customer CLAUDE.md is current |
| .gitignore | Monorepo patterns removed in standalone version |
| docs/ | Generated docs have no monorepo path references |

If any validation fails, the workflow exits with a clear error message explaining which file needs regeneration and the exact command to run.
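
For files compared verbatim, the same pattern generalizes; a minimal sketch of a reusable check (the `pnpm generate:*` command names passed to it are illustrative, not necessarily the kit's actual scripts):

```shell
#!/bin/sh
# Generic freshness check for a generated customer file.
# Prints a status line; returns non-zero when regeneration is needed.
check_generated() {
  src="$1"   # source of truth (root file)
  gen="$2"   # generated customer copy
  cmd="$3"   # command that regenerates it
  if [ ! -f "$gen" ]; then
    echo "missing: $gen (run: $cmd)"
    return 1
  fi
  if ! diff -q "$src" "$gen" > /dev/null 2>&1; then
    echo "outdated: $gen (run: $cmd)"
    return 1
  fi
  echo "up-to-date: $gen"
}
```

Usage, assuming a hypothetical regeneration script name: `check_generated CLAUDE.md apps/boilerplate/CLAUDE.md "pnpm generate:claude-md"`.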

Deploy Job

After tests pass, the deploy job builds and ships to Vercel:
deploy.yml — Vercel Build and Deploy

```yaml
deploy:
    name: Deploy to Vercel
    needs: test
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Check Required Secrets
        run: |
          echo "Checking for required secrets..."

          if [ -z "${{ secrets.VERCEL_TOKEN }}" ]; then
            echo "❌ Error: VERCEL_TOKEN secret is not set"
            echo "Please add VERCEL_TOKEN to your GitHub repository secrets"
            echo "See docs/DEPLOYMENT_SETUP.md for instructions"
            exit 1
          fi

          if [ -z "${{ secrets.VERCEL_ORG_ID }}" ]; then
            echo "❌ Error: VERCEL_ORG_ID secret is not set"
            echo "Please add VERCEL_ORG_ID to your GitHub repository secrets"
            echo "Required value: team_Bgh9zp308TcrlrYLqGmuGLHN"
            exit 1
          fi

          if [ -z "${{ secrets.VERCEL_PROJECT_ID }}" ]; then
            echo "❌ Error: VERCEL_PROJECT_ID secret is not set"
            echo "Please add VERCEL_PROJECT_ID to your GitHub repository secrets"
            echo "Required value: prj_jf8vKRszL6Xt7IF8EwqxzfnwRXsl"
            exit 1
          fi

          echo "✅ All required secrets are configured"

      - uses: pnpm/action-setup@v3
        with:
          version: 10.15.1

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'pnpm'

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Install Vercel CLI
        run: pnpm add -g vercel@latest

      - name: Pull Vercel Environment Information
        run: |
          echo "Pulling Vercel environment configuration..."
          vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}
          echo "✅ Successfully pulled Vercel configuration"

      - name: Build Project Artifacts
        run: |
          echo "Building boilerplate app for production..."
          vercel build --prod --token=${{ secrets.VERCEL_TOKEN }}
          echo "✅ Build completed successfully"

      - name: Deploy to Vercel
        id: deploy
        run: |
          echo "Deploying to Vercel production..."
          # --archive=tgz compresses files before upload to avoid rate limits (>5000 files)
          DEPLOYMENT_URL=$(vercel deploy --prebuilt --prod --archive=tgz --token=${{ secrets.VERCEL_TOKEN }})
          echo "deployment-url=$DEPLOYMENT_URL" >> $GITHUB_OUTPUT
          echo "✅ Successfully deployed to: $DEPLOYMENT_URL"
```
Key implementation details:
  • vercel pull downloads the environment configuration from Vercel, ensuring the build uses the same variables as production
  • vercel build --prod builds the application locally in CI (faster than building on Vercel's servers)
  • --archive=tgz compresses the build output before uploading. This is critical for monorepo projects with 5,000+ files — without it, Vercel's API rate limits can cause upload failures
  • Commit comments provide direct links to the deployment URL for quick verification
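
The commit-comment step itself is not reproduced above; a hedged sketch of what it might look like using actions/github-script (the step name and message text are assumptions):

```yaml
- name: Comment on commit
  if: success()
  uses: actions/github-script@v7
  with:
    script: |
      // Attach the deployment URL to the commit that triggered the run
      await github.rest.repos.createCommitComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        commit_sha: context.sha,
        body: `🚀 Deployed to production: ${{ steps.deploy.outputs.deployment-url }}`
      });
```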

Required GitHub Secrets

Configure these secrets in your GitHub repository under Settings > Secrets and variables > Actions:
| Secret | Purpose | Where to Find |
| --- | --- | --- |
| VERCEL_TOKEN | Authenticates the Vercel CLI | Vercel Dashboard > Settings > Tokens |
| VERCEL_ORG_ID | Identifies your Vercel team | Vercel Dashboard > Settings > General |
| VERCEL_PROJECT_ID | Identifies your Vercel project | Vercel Dashboard > Project Settings > General |
| TURBO_TOKEN | Turborepo remote cache auth | vercel.com/account/tokens |
| TURBO_TEAM | Turborepo team identifier | Your Vercel team slug |

PR Validation Workflow

The test.yml workflow runs on every pull request targeting master and on pushes to develop. It acts as a quality gate — PRs cannot merge until all checks pass.

PR Pipeline Structure

```text
test (Single Job)
  ├── Install dependencies
  ├── Setup PostgreSQL + pgvector
  ├── Generate Prisma client
  ├── Push database schema
  ├── Type checking
  ├── Linting
  ├── Unit tests
  ├── Validate customer files
  ├── E2E smoke tests
  └── Upload test artifacts
```
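
The trigger block for this gate would be along these lines (a sketch; branch names are taken from the description above):

```yaml
on:
  pull_request:
    branches: [master]   # quality gate for PRs targeting master
  push:
    branches: [develop]
```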

Test Artifact Upload

After every run, the workflow uploads test results with 30-day retention:
test.yml — Artifact Upload

```yaml
- name: Upload test results
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: test-results
    path: |
      coverage/
      playwright-report/
    retention-days: 30
```
Artifacts include:
  • coverage/ — Code coverage reports from Vitest (HTML, JSON, text)
  • playwright-report/ — E2E test results with screenshots and traces
Download these from the Actions > Artifacts section in GitHub when debugging failures.

Nightly E2E Suite

The e2e-full.yml workflow runs the complete E2E test suite daily at 2:00 AM UTC. This catches regressions that smoke tests might miss.

What It Runs

e2e-full.yml — Schedule and Trigger

```yaml
name: Full E2E Test Suite

on:
  # Run daily at 2:00 AM UTC
  schedule:
    - cron: '0 2 * * *'
  # Allow manual trigger
  workflow_dispatch:
```
| Aspect | Value |
| --- | --- |
| Schedule | Daily at 2:00 AM UTC |
| Tests | Full suite on Chromium |
| Timeout | 30 minutes |
| Quality gates | Type check + lint + unit tests + full E2E |
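
The 30-minute ceiling corresponds to a job-level `timeout-minutes` setting, roughly (the job name here is an assumption):

```yaml
jobs:
  e2e-full:
    runs-on: ubuntu-latest
    timeout-minutes: 30   # cancel the run if the full suite hangs
```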

Automatic Issue Creation

When the nightly suite fails, the workflow automatically creates a GitHub issue with the failure details:
e2e-full.yml — Auto-Create Issue on Failure

```yaml
- name: Create issue on failure
  if: failure()
  uses: actions/github-script@v7
  with:
    script: |
      const title = '🔴 Full E2E Test Suite Failed';
      const body = `
      The scheduled full E2E test suite has failed.

      **Workflow Run:** ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
      **Triggered by:** ${{ github.event_name }}
      **Branch:** ${{ github.ref_name }}
      **Commit:** ${{ github.sha }}

      Please review the test results and fix the failing tests.

      [View Playwright Report](${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }})
      `;

      // Check if there's already an open issue
      const issues = await github.rest.issues.listForRepo({
        owner: context.repo.owner,
        repo: context.repo.repo,
        state: 'open',
        labels: 'e2e-test-failure'
      });

      if (issues.data.length === 0) {
        await github.rest.issues.create({
          owner: context.repo.owner,
          repo: context.repo.repo,
          title: title,
          body: body,
          labels: ['e2e-test-failure', 'automated']
        });
      }
```
The issue includes:
  • Direct link to the failed workflow run
  • Branch and commit SHA for debugging
  • Link to the Playwright report artifact
  • Labels (e2e-test-failure, automated) for filtering
Only one issue is created — if an open issue with the e2e-test-failure label already exists, the workflow skips creation to avoid duplicates.

Quality Gates Summary

Every code path must pass these gates before reaching production:
| Gate | Workflow | Blocks |
| --- | --- | --- |
| TypeScript compilation | deploy.yml, test.yml | Type errors |
| ESLint | deploy.yml, test.yml | Code quality violations |
| Unit tests (800+) | deploy.yml, test.yml | Logic regressions |
| E2E smoke tests (19) | deploy.yml, test.yml | Critical flow breakage |
| Customer file validation | deploy.yml, test.yml | Out-of-date generated files |
| Full E2E suite (186) | e2e-full.yml | Comprehensive regressions |

Turborepo Remote Caching

All workflows use Turborepo remote caching to share build artifacts across CI runs. This is configured via two environment variables:
```yaml
env:
  TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
  TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
```
When a previous run already built the same code, Turborepo downloads the cached result instead of rebuilding. This provides the biggest savings for:
  • Type checking — TypeScript compilation results are cached
  • Linting — ESLint results are cached
  • Building — Next.js build output is cached

Adding Custom Workflow Steps

To add your own steps to the pipeline, edit the workflow files in .github/workflows/. Common additions:

Add a Lighthouse audit

```yaml
- name: Run Lighthouse CI
  uses: treosh/lighthouse-ci-action@v11
  with:
    urls: |
      ${{ steps.deploy.outputs.deployment-url }}
    budgetPath: ./lighthouse-budget.json
    uploadArtifacts: true
```

Add database migration

```yaml
- name: Run database migrations
  run: pnpm --filter=@nextsaas/boilerplate prisma migrate deploy
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
```

Add Slack notification

```yaml
- name: Notify Slack
  if: success()
  uses: slackapi/slack-github-action@v1
  with:
    payload: |
      {"text": "Deployed to production: ${{ steps.deploy.outputs.deployment-url }}"}
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```

Troubleshooting

Tests pass locally but fail in CI

The most common causes are:
  1. Missing environment variables — CI uses apps/boilerplate/.env.test while local uses apps/boilerplate/.env.local. Ensure all variables needed for tests are in .env.test
  2. Timing issues — CI runners are slower than local machines. If tests use hardcoded timeouts, increase them or use Playwright's built-in waitFor patterns
  3. Database state — CI pushes a fresh schema with prisma db push. Ensure your tests don't depend on seed data that only exists locally
If a build fails with unexpected errors after a dependency update:
```yaml
# In your workflow, add before install:
- name: Clear Turborepo cache
  run: pnpm turbo run build --force
```
This bypasses the remote cache for a single run. If the build succeeds, the old cache was stale.

Deployment succeeds but site shows errors

  1. Check that all environment variables are set in Vercel's Production scope
  2. Verify database migrations have been applied: prisma migrate deploy
  3. Check Vercel Functions logs for runtime errors