
How to Check Core Web Vitals in GitHub Actions

Performance regressions sneak in with every deploy. A CSS change that bumps CLS above 0.25, an unoptimized image that pushes LCP past 4 seconds. By the time you notice, Google has already re-scored the page. This guide sets up automated Core Web Vitals checks in your CI pipeline so you catch regressions before they hit production rankings.

The three metrics that matter

Google uses three Core Web Vitals as ranking signals. Each one measures a different aspect of user experience.

| Metric | Measures | Good | Poor |
|--------|----------|------|------|
| LCP | Largest Contentful Paint | < 2.5s | > 4.0s |
| CLS | Cumulative Layout Shift | < 0.1 | > 0.25 |
| INP | Interaction to Next Paint | < 200ms | > 500ms |

The approach

Google offers the PageSpeed Insights (PSI) API for free. It runs Lighthouse on any public URL and returns structured performance data, including all three Core Web Vitals. The plan: call the PSI API from a GitHub Actions workflow after each deploy, parse the results, and fail the job if any metric crosses its threshold. No npm packages needed. Just curl and jq.
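
The parsing half of that plan can be sketched in isolation. The jq paths below match the real PSI response shape; the JSON itself is a trimmed-down stand-in with made-up numbers, so you can see what the extraction step produces before wiring up the live API call:

```shell
# Trimmed-down PSI-style response (sample values for illustration only)
RESPONSE='{"lighthouseResult":{"audits":{"largest-contentful-paint":{"numericValue":1834.5},"cumulative-layout-shift":{"numericValue":0.04}},"categories":{"performance":{"score":0.97}}}}'

# Same jq paths the workflow uses against the real API response
LCP=$(echo "$RESPONSE" | jq '.lighthouseResult.audits["largest-contentful-paint"].numericValue')
CLS=$(echo "$RESPONSE" | jq '.lighthouseResult.audits["cumulative-layout-shift"].numericValue')
SCORE=$(echo "$RESPONSE" | jq '.lighthouseResult.categories.performance.score')

echo "LCP=${LCP}ms CLS=${CLS} score=${SCORE}"
```

Swapping the hardcoded `RESPONSE` for the curl call in the workflow below gives you the live version.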

Step 1: Get a PageSpeed Insights API key

The API works without a key at low volume, but adding one gives you higher rate limits and better error messages. Generate one in the Google Cloud Console under APIs & Services. Then add it as a repository secret:

# GitHub repo → Settings → Secrets and variables → Actions
# Add: PAGESPEED_API_KEY = your_key_here
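
If you prefer the command line, the GitHub CLI can set the same secret (this assumes `gh` is installed and authenticated, run from inside the repo):

```shell
# Equivalent to the Settings UI steps above
gh secret set PAGESPEED_API_KEY --body "your_key_here"
```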

Step 2: The GitHub Actions workflow

This workflow runs after your deploy step completes. It hits the PSI API, extracts the three Core Web Vitals, compares them against thresholds, and fails the job if any score is bad.

# .github/workflows/web-vitals.yml
name: Core Web Vitals Check

on:
  workflow_run:
    workflows: ["Deploy"]
    types: [completed]
  workflow_dispatch:
    inputs:
      url:
        description: "URL to test"
        required: true

env:
  # Default URL; override with the workflow_dispatch input
  TARGET_URL: ${{ github.event.inputs.url || 'https://yoursite.com' }}
  # Thresholds (Google "good" values)
  LCP_THRESHOLD: 2500    # milliseconds
  CLS_THRESHOLD: 0.1
  INP_THRESHOLD: 200     # milliseconds

jobs:
  check-vitals:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
    steps:
      - name: Run PageSpeed Insights
        id: psi
        run: |
          # -G with --data-urlencode keeps target URLs with query strings intact
          RESPONSE=$(curl -s -G "https://www.googleapis.com/pagespeedonline/v5/runPagespeed" \
            --data-urlencode "url=${TARGET_URL}" \
            -d "strategy=mobile" \
            -d "key=${{ secrets.PAGESPEED_API_KEY }}")

          # Extract lab metrics
          LCP=$(echo "$RESPONSE" | jq '.lighthouseResult.audits["largest-contentful-paint"].numericValue')
          CLS=$(echo "$RESPONSE" | jq '.lighthouseResult.audits["cumulative-layout-shift"].numericValue')
          INP=$(echo "$RESPONSE" | jq '.lighthouseResult.audits["interaction-to-next-paint"].numericValue // 0')
          PERF_SCORE=$(echo "$RESPONSE" | jq '.lighthouseResult.categories.performance.score')

          echo "lcp=$LCP" >> $GITHUB_OUTPUT
          echo "cls=$CLS" >> $GITHUB_OUTPUT
          echo "inp=$INP" >> $GITHUB_OUTPUT
          echo "score=$PERF_SCORE" >> $GITHUB_OUTPUT

          echo "## Core Web Vitals Results" >> $GITHUB_STEP_SUMMARY
          echo "| Metric | Value | Threshold | Status |" >> $GITHUB_STEP_SUMMARY
          echo "|--------|-------|-----------|--------|" >> $GITHUB_STEP_SUMMARY
          echo "| LCP | ${LCP}ms | ${LCP_THRESHOLD}ms | $([ $(echo "$LCP < $LCP_THRESHOLD" | bc -l) -eq 1 ] && echo 'Pass' || echo 'FAIL') |" >> $GITHUB_STEP_SUMMARY
          echo "| CLS | $CLS | $CLS_THRESHOLD | $([ $(echo "$CLS < $CLS_THRESHOLD" | bc -l) -eq 1 ] && echo 'Pass' || echo 'FAIL') |" >> $GITHUB_STEP_SUMMARY
          echo "| INP | ${INP}ms | ${INP_THRESHOLD}ms | $([ $(echo "$INP < $INP_THRESHOLD" | bc -l) -eq 1 ] && echo 'Pass' || echo 'FAIL') |" >> $GITHUB_STEP_SUMMARY
          echo "| Performance Score | $PERF_SCORE | – | – |" >> $GITHUB_STEP_SUMMARY

      - name: Check thresholds
        run: |
          LCP=${{ steps.psi.outputs.lcp }}
          CLS=${{ steps.psi.outputs.cls }}
          INP=${{ steps.psi.outputs.inp }}

          FAILED=0

          if [ $(echo "$LCP > $LCP_THRESHOLD" | bc -l) -eq 1 ]; then
            echo "::error::LCP is ${LCP}ms (threshold: ${LCP_THRESHOLD}ms)"
            FAILED=1
          fi

          if [ $(echo "$CLS > $CLS_THRESHOLD" | bc -l) -eq 1 ]; then
            echo "::error::CLS is $CLS (threshold: $CLS_THRESHOLD)"
            FAILED=1
          fi

          if [ $(echo "$INP > $INP_THRESHOLD" | bc -l) -eq 1 ]; then
            echo "::error::INP is ${INP}ms (threshold: ${INP_THRESHOLD}ms)"
            FAILED=1
          fi

          if [ $FAILED -eq 1 ]; then
            echo "::error::Core Web Vitals check failed. Fix the issues above before merging."
            exit 1
          fi

          echo "All Core Web Vitals within thresholds."

How this works

Trigger: The workflow fires automatically after your deploy workflow succeeds. You can also trigger it manually via workflow_dispatch with any URL.
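
A manual run can also come from the GitHub CLI (assuming `gh` is installed and the workflow file is named web-vitals.yml as above):

```shell
# Trigger the check by hand against any public URL
gh workflow run web-vitals.yml -f url=https://staging.yoursite.com
```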

PSI API call: A single curl request to the PageSpeed Insights API with strategy=mobile. Mobile is what Google uses for ranking, so that is what you should test.

Threshold check: Each metric is compared against Google's "good" thresholds. If any metric fails, the job exits with code 1, which marks the workflow run as failed in GitHub.
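
The comparisons go through bc because CLS is a decimal and plain shell arithmetic is integer-only. The pattern, with a sample value for illustration:

```shell
# bc -l prints 1 when the expression is true, 0 otherwise
CLS=0.18
CLS_THRESHOLD=0.1
if [ "$(echo "$CLS > $CLS_THRESHOLD" | bc -l)" -eq 1 ]; then
  echo "CLS over budget: $CLS"
fi
```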

Summary table: Results are written to $GITHUB_STEP_SUMMARY so you see a formatted table directly in the Actions run page. No need to parse logs.

Running on pull requests with preview URLs

If you use Vercel, every PR gets a preview deployment URL. You can pass this URL into the workflow to check Core Web Vitals before merging:

# Add this trigger to the workflow
on:
  deployment_status:

jobs:
  check-vitals:
    if: github.event.deployment_status.state == 'success'
    env:
      TARGET_URL: ${{ github.event.deployment_status.target_url }}
    # ... same steps as above

This catches regressions before they reach production. Every PR with a performance hit shows a red check in GitHub.

Limitations of this approach

Lab data only. The PSI API returns lab (simulated) data, not real user data from the Chrome User Experience Report (CrUX). Lab data is consistent and good for CI, but it may differ from what real users experience on slow connections.

Single page per run. This workflow checks one URL. If you need to check multiple pages, loop through an array of URLs or use a tool like SEOLint that scans multiple pages and aggregates results.
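
A multi-URL version can be sketched as a loop over the same check. Here `fetch_lcp` is a hypothetical stand-in for the real PSI call (curl + jq, as in the workflow above), returning made-up sample values so the control flow is visible:

```shell
LCP_THRESHOLD=2500

# Stand-in for: curl -s -G ".../runPagespeed" --data-urlencode "url=$1" ... | jq '...numericValue'
fetch_lcp() {
  case "$1" in
    */slow) echo 3400 ;;   # sample: a page over budget
    *)      echo 1800 ;;   # sample: a page within budget
  esac
}

FAILED=0
for URL in "https://yoursite.com/" "https://yoursite.com/slow"; do
  LCP=$(fetch_lcp "$URL")
  if [ "$(echo "$LCP > $LCP_THRESHOLD" | bc -l)" -eq 1 ]; then
    echo "FAIL: $URL LCP=${LCP}ms"
    FAILED=1
  fi
done
[ "$FAILED" -eq 0 ] && echo "all URLs within budget" || echo "at least one URL failed"
```

In the workflow, replacing the stub with the real curl call and ending the step with `exit $FAILED` fails the job when any page misses its budget.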

INP requires interaction. Lab-based INP testing can only simulate interactions. For real INP data, you need field data from CrUX or a real user monitoring tool.

Going further: full SEO checks in CI

Core Web Vitals are one part of the picture. A deploy can also break meta tags, remove structured data, or introduce broken links. SEOLint runs a full audit (performance, SEO, accessibility) and returns structured JSON you can parse in the same workflow. One API call replaces the PSI check above and adds 30+ additional checks.

FAQ

Is the PageSpeed Insights API free?

Yes. The PageSpeed Insights API is free to use with or without an API key. Without a key, you can make occasional requests, enough for light testing. Adding an API key via Google Cloud Console gives you a generous daily quota (on the order of 25,000 queries per day by default) plus monitoring in the Cloud Console.

What are good Core Web Vitals thresholds?

Google considers these scores 'good': LCP under 2.5 seconds, CLS under 0.1, and INP under 200 milliseconds. For CI checks, these are the thresholds you should use to fail builds.

Should I test mobile or desktop Core Web Vitals?

Test mobile. Google uses mobile-first indexing, so your mobile Core Web Vitals scores are what affect rankings. Mobile is also where performance issues surface first due to slower processors and network connections.

Can I check Core Web Vitals on localhost?

No. The PageSpeed Insights API requires a publicly accessible URL. To test a local build, run Lighthouse directly instead (for example via Chrome DevTools or the lighthouse CLI). For CI workflows, run the check against your staging or preview deployment URL after deploy, not during the build step.

Automate your full SEO audit in CI

Core Web Vitals, meta tags, structured data, accessibility. One API call.