Baseline Benchmark

Capture baseline metrics before you change anything. If you cannot reproduce a "before" state, you cannot prove an "after" win.

Rules (Keep It Comparable)

  • Use consistent test conditions (device profile, region, caching state).
  • Run repeated tests and use the median.
  • Benchmark representative URLs, not only the homepage.
Caution

If you change caching during the project, write it down. Comparing cached and uncached results without that context produces bad decisions.

What to Test

Pick 3-5 URLs:

  • Homepage
  • A content page (post/article)
  • A dynamic route (search, contact)
  • Cart/checkout and account pages (if applicable)
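
Once the list is set, you can time every URL in one pass. A minimal sketch, assuming the paths live in a plain-text urls.txt with one path per line (the filename and BASE variable are placeholders, not part of any tool):
URL list loop
BASE="https://example.com"
while read -r path; do
  # One timed request per URL; repeat whole runs to get medians.
  curl -o /dev/null -s -w "$path TTFB: %{time_starttransfer}s\n" "$BASE$path"
done < urls.txt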

What to Record

  • Lab runs: waterfall notes and repeatable numbers (LCP/INP/CLS indicators, TTFB).
  • Field signals: Search Console / RUM trend notes (if available).
  • Cache headers: CDN and origin cache signals.
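
To keep the cache-header evidence alongside the worksheet, you can dump each URL's response headers to a dated file. A sketch; the headers/ directory and the URL list are placeholders:
Header snapshot
mkdir -p headers
for path in / /some-post/ '/?s=test'; do
  # -I fetches headers only; the filename encodes the date and path.
  curl -sI "https://example.com$path" > "headers/$(date +%F)$(echo "$path" | tr '/?=' '___').txt"
done
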
Baseline worksheet template
baseline-worksheet.txt
# Baseline: example.com
# Date:
# Tester:
# Test conditions:

## URLs
- /
- /some-post/
- /?s=test

## Notes
- Main LCP element:
- Top third-party scripts:
- Render blockers:

## Cache headers
- cf-cache-status:
- origin cache header:

## Lab (median of repeated runs)
| URL | device profile | TTFB | notes |
|---|---|---|---|
| / | | | |

## Field (if available)
- CWV trend notes:
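
Keeping one dated copy of the filled-in worksheet per test session makes the "before" state reproducible later. A sketch, assuming the template above is saved as baseline-worksheet.txt:
Dated worksheet copy
mkdir -p baselines
cp baseline-worksheet.txt "baselines/$(date +%F)-example.com.txt"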

Quick Checks

TTFB quick check (repeat and take the median)
# time_starttransfer covers DNS, connect, TLS, and server think time, so it approximates TTFB.
for i in {1..5}; do
  curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com/
  sleep 2
done
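
To compute the median from those runs instead of eyeballing it, pipe the loop output through sort. A sketch, assuming five runs (an odd count, so the middle value is the median) and standard sort/awk:
Median TTFB from repeated runs
for i in {1..5}; do
  curl -o /dev/null -s -w "%{time_starttransfer}\n" https://example.com/
  sleep 2
done | sort -n | awk 'NR == 3 { print "median TTFB: " $1 "s" }'
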
Cache header sanity check
curl -sI https://example.com/ | grep -iE 'cf-cache-status|x-litespeed-cache|x-cache|cache-control'
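
Two back-to-back requests show whether the cache actually warms up: the first often reports MISS or EXPIRED, the second HIT. Header names vary by CDN (cf-cache-status is Cloudflare's), so adjust the pattern to what your stack emits:
Cache warm-up check
for i in 1 2; do
  # Same URL twice; compare the cache status between the two runs.
  curl -sI https://example.com/ | grep -iE '^(cf-cache-status|x-cache|age):'
  sleep 1
done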

What's Next