
TTFB Case Study

This case study walks through repairing a slow Time to First Byte (TTFB) on an international WooCommerce platform. With EU traffic hitting a US-based origin, TTFB spiked above 1.2s. We'll audit the stack and use a combination of server capacity, caching, and CDN edge delivery to bring global TTFB down to sub-100ms.

Quick Summary

A 1.2s TTFB bottleneck resulted from an underpowered VPS processing dynamic WooCommerce work for global traffic with minimal edge caching. By upgrading the server, enabling HTML caching, and using Cloudflare APO, international latency dropped substantially.

The Scenario

| Detail | Value |
| --- | --- |
| Site type | B2C WooCommerce (USA + EU distribution) |
| Problem | Site-wide sluggishness, markedly worse for European visitors |
| Initial TTFB | 1.2s (Poor); US average 450ms, EU average 1.3s |
| Initial LCP | 3.8s (Borderline Fail) |
| User impact | Elevated EU bounce rate and checkout abandonment |

Visible Symptoms

  • The browser shows a blank page for over a second before the first HTML bytes arrive.
  • Authenticated WooCommerce AJAX requests intermittently take 800ms or more.
  • Search Console reports "Reduce server response time" warnings.

Step-by-Step Diagnosis

Step 1: Multi-Region TTFB Profiling

We run synthetic WebPageTest measurements from nodes on multiple continents to establish a geographic baseline.

wpt-geographic.txt
WebPageTest TTFB Scans:
US (Virginia): 450ms
EU (Frankfurt): 1.3s
Asia (Tokyo): 1.5s

The 450ms TTFB near the US origin is already weak, and the 1.3s transoceanic figure confirms that physical distance to the uncached origin compounds the problem.
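The same comparison can be scripted. The sketch below hard-codes the scan results above and flags every region whose TTFB exceeds a budget; the 200ms budget and the region labels are illustrative assumptions, not part of the original test.

```shell
# Sketch: flag regions whose measured TTFB exceeds a latency budget.
# The sample values mirror the WebPageTest scan above; the 200ms
# budget is an illustrative assumption.
budget_ms=200
over_budget=$(printf 'US 450\nEU 1300\nAsia 1500\n' |
  awk -v b="$budget_ms" '$2 > b { printf "%s %dms over\n", $1, $2 - b }')
printf '%s\n' "$over_budget"
```

In a real audit, the hard-coded list would be replaced by measurements pulled from the WebPageTest API or a curl loop against regional probes.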

Step 2: Backend Capacity Analysis

server-metrics.txt
DevTools → Timing Trace:
Waiting Time (Raw TTFB) = 1.2s

Deep Server Logs:
- PHP-FPM worker queue backlogs even during modest traffic spikes
- MySQL slow query log shows 80+ uncached queries per product page load
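The slow-query count comes from scanning the slow log. A minimal sketch of that scan, using a hypothetical three-entry log excerpt (a real log lives at a path such as the one configured by `slow_query_log_file` and should be read from disk instead):

```shell
# Sketch: count slow-log entries above a 500ms threshold.
# The log excerpt is hypothetical; in practice, read the real
# MySQL slow query log file instead of this inline sample.
slow_log='# Query_time: 0.840  Lock_time: 0.001
SELECT * FROM wp_postmeta WHERE post_id = 101;
# Query_time: 1.320  Lock_time: 0.010
SELECT option_value FROM wp_options WHERE autoload = '\''yes'\'';
# Query_time: 0.050  Lock_time: 0.000
SELECT ID FROM wp_posts LIMIT 1;'
slow_count=$(printf '%s\n' "$slow_log" |
  awk '/^# Query_time:/ && $3 > 0.5 { n++ } END { print n + 0 }')
echo "queries over 500ms: $slow_count"
```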

Step 3: Identify Root Bottlenecks

  1. Underpowered Hardware: A 2-core / 2GB RAM VPS buckling under WordPress + WooCommerce dynamic queries.
  2. Missing Page Cache: With no LiteSpeed Cache in place, every anonymous hit forces a full PHP/MySQL page generation.
  3. No Edge Distribution: Every request from European users travels to a single overloaded server in New York.

Solution: Layer-by-Layer Fixes

| Layer | Bottleneck | Fix Applied | Expected Impact |
| --- | --- | --- | --- |
| Server | 2-core / 2GB limitation | Upgrade to 4 cores, 8GB RAM, NVMe; enable OPcache | Greater origin capacity |
| Cache (HTML) | Uncached dynamic hits | Enable LiteSpeed Cache to serve prebuilt HTML | Immediate TTFB drop for anonymous visitors |
| Cache (Origin) | Uncached logged-in sessions | Activate Redis object cache backend | Far fewer database queries |
| Edge (CDN) | Raw transoceanic transit | Route the domain through Cloudflare APO with tiered caching | Sub-100ms delivery globally |
| Network | Slow TLS handshake | Enforce TLS 1.3 | Lower connection overhead |
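Whether TLS 1.3 is actually negotiated can be confirmed with `openssl s_client`. The sketch below parses one line of its output; the line is a hypothetical capture, since a live handshake depends on your server.

```shell
# Sketch: detect the negotiated TLS version from openssl s_client output.
# A live check would capture real output, e.g.:
#   openssl s_client -connect yourdomain.com:443 < /dev/null 2>/dev/null
# The line below is a hypothetical capture of that output.
handshake='New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384'
case "$handshake" in
  *TLSv1.3*) tls_status="TLS 1.3 negotiated" ;;
  *)         tls_status="older TLS in use: consider enabling 1.3" ;;
esac
echo "$tls_status"
```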

Fix Execution Sequence

Deploying the fixes one at a time attributes each improvement to a specific layer of the stack.

execution-timeline.txt
1. Upgrade the VPS Infrastructure:
Move from a shared CPU to 4 dedicated cores on NVMe.
→ Raw US TTFB drops from 450ms to 280ms.

2. Activate HTML Page Caching (LSCache):
Skips PHP execution on repeat anonymous hits.
→ Cached US TTFB drops from 280ms to 80ms.

3. Distribute via the Edge (Cloudflare APO):
Pushes the cached HTML to global edge nodes.
→ EU TTFB drops from 1.3s to 95ms.

4. Implement Object Caching (Redis):
Relieves database load during logged-in WooCommerce cart activity.
→ Uncached logged-in TTFB improves from 280ms to 180ms.
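Whether steps 2 and 3 are actually serving from cache shows up in response headers: LiteSpeed sets `x-litespeed-cache` and Cloudflare sets `cf-cache-status`. A sketch against a hypothetical header capture:

```shell
# Sketch: verify both cache layers from response headers.
# In practice, capture headers with: curl -sI https://yourdomain.com
# The block below uses a hypothetical capture of that output.
headers='HTTP/2 200
x-litespeed-cache: hit
cf-cache-status: HIT
server: cloudflare'
origin_cache=$(printf '%s\n' "$headers" | grep -ic '^x-litespeed-cache: hit')
edge_cache=$(printf '%s\n' "$headers" | grep -ic '^cf-cache-status: hit')
[ "$origin_cache" -ge 1 ] && echo "origin HTML cache: HIT"
[ "$edge_cache" -ge 1 ] && echo "edge cache: HIT"
```

Testing twice in a row matters here: the first request after a purge is expected to miss, and only the second confirms the page was stored.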

Common Mistakes in TTFB Diagnosis

| Mistake | Explanation | Solution |
| --- | --- | --- |
| Testing only from your local network | You measure your own proximity to the server and miss international latency entirely. | Use global test nodes (WebPageTest, KeyCDN tools) to map TTFB worldwide. |
| Putting a weak server behind a CDN prematurely | A 1.5s origin TTFB behind a CDN still means 1.5s responses whenever the edge cache misses. | Bring raw origin TTFB under ~300ms before adding a CDN. |
| Ignoring logged-in traffic | WooCommerce carts intentionally bypass HTML page caching. | Run Redis object caching to shield the database from authenticated load. |

Hands-On Practice

Perform a Benchmark Verification

Task: Open an SSH terminal and run a direct cURL request to measure TTFB before and after purging your cache plugin.

curl-ttfb.sh
# Execute directly targeting your domain:
curl -o /dev/null -s -w "Raw TTFB: %{time_starttransfer}\n" https://yourdomain.com

Task: Use KeyCDN's performance testing tool to compare TTFB between nodes in Frankfurt, New York, and Singapore.
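Single cURL samples are noisy, so a median over several runs gives a steadier number. The sketch below computes the median from five hypothetical samples; in practice, populate the list by looping the curl command above.

```shell
# Sketch: median of five TTFB samples, in seconds.
# These values are hypothetical; collect real ones with a loop like:
#   for i in 1 2 3 4 5; do
#     curl -o /dev/null -s -w '%{time_starttransfer}\n' https://yourdomain.com
#   done
samples='0.412
0.455
0.431
0.478
0.440'
median=$(printf '%s\n' "$samples" | sort -n | awk 'NR == 3 { print $1 }')
echo "median TTFB: ${median}s"
```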

Results Summary

| Metric | Before | After | Improvement |
| --- | --- | --- | --- |
| TTFB (US) | 450ms | 80ms | −82% |
| TTFB (EU) | 1.3s | 95ms | −93% |
| LCP (Global) | 3.8s | 1.8s | −53% |
| INP | 400ms | 180ms | −55% |

Conclusion

TTFB is strongly influenced by infrastructure and caching strategy. By right-sizing the VPS, deploying HTML caching, and using Cloudflare APO to serve content closer to EU users, a 1.3s delay was reduced to sub-100ms.

What's Next