Why Use cURL for Performance Testing?
cURL is one of the most versatile command-line tools available on virtually every Linux, macOS, and Windows system. While browser developer tools and online speed test services are useful, cURL gives you precise, scriptable, and repeatable measurements of every phase of an HTTP request. You can isolate DNS resolution time from TLS negotiation, measure Time to First Byte (TTFB) independently from total transfer time, and automate these measurements across multiple endpoints or time periods.
For server administrators and DevOps engineers, cURL-based performance testing is invaluable for comparing response times before and after configuration changes, verifying CDN performance from different locations, monitoring TTFB trends, and diagnosing slow page loads. Unlike browser-based tools, cURL measurements are not affected by rendering, JavaScript execution, or browser extensions, giving you clean server-side timing data.
Understanding cURL Timing Variables
The -w (write-out) flag in cURL lets you output specific timing variables after a request completes. These variables correspond to different phases of the HTTP connection lifecycle:
- time_namelookup — Time from the start until DNS name resolution was completed. This measures how long it took to resolve the domain name to an IP address.
- time_connect — Time from the start until the TCP connection to the server was established. The difference between this and time_namelookup gives you the pure TCP connection time.
- time_appconnect — Time from the start until the TLS/SSL handshake was completed. The difference between this and time_connect gives you the TLS negotiation time. This is zero for plain HTTP requests.
- time_pretransfer — Time from the start until the file transfer was about to begin. This includes all setup work (DNS, TCP, TLS) plus any protocol-level negotiation.
- time_starttransfer — Time from the start until the first byte of the response body was received. This is your TTFB (Time to First Byte) and includes all connection setup plus server processing time.
- time_total — Total time for the entire operation, from start to finish including the complete response body transfer.
Basic Timing Command
The simplest way to see all timing information is with an inline format string:
curl -o /dev/null -s -w "\n\
DNS Lookup: %{time_namelookup}s\n\
TCP Connect: %{time_connect}s\n\
TLS Handshake: %{time_appconnect}s\n\
Pre-Transfer: %{time_pretransfer}s\n\
TTFB: %{time_starttransfer}s\n\
Total Time: %{time_total}s\n\
Download Size: %{size_download} bytes\n\
HTTP Code: %{http_code}\n" \
https://example.com
The flags explained:
- -o /dev/null — Discards the response body (we only care about timing)
- -s — Silent mode (hides the progress bar)
- -w — Specifies the format string with timing variables
Sample Output
DNS Lookup: 0.024s
TCP Connect: 0.058s
TLS Handshake: 0.131s
Pre-Transfer: 0.131s
TTFB: 0.247s
Total Time: 0.312s
Download Size: 48320 bytes
HTTP Code: 200
From this output, you can calculate each phase duration: DNS took 24ms, TCP connection took 34ms (58 - 24), TLS handshake took 73ms (131 - 58), server processing took 116ms (247 - 131), and the content download took 65ms (312 - 247).
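Those subtractions are easy to script so you never compute them by hand. A minimal sketch, using the sample values above so it runs without network access (for a live measurement, swap in the commented curl line):

```shell
#!/bin/bash
# phase-durations.sh - turn curl's cumulative timings into per-phase durations
# For a live measurement, replace the read line below with:
#   read dns tcp tls ttfb total <<< "$(curl -o /dev/null -s \
#     -w "%{time_namelookup} %{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}" \
#     https://example.com)"
read dns tcp tls ttfb total <<< "0.024 0.058 0.131 0.247 0.312"  # sample values from above

# Each phase is the difference between adjacent cumulative timings
awk -v d="$dns" -v c="$tcp" -v s="$tls" -v f="$ttfb" -v t="$total" 'BEGIN {
  printf "DNS lookup:        %.0fms\n", d * 1000
  printf "TCP connect:       %.0fms\n", (c - d) * 1000
  printf "TLS handshake:     %.0fms\n", (s - c) * 1000
  printf "Server processing: %.0fms\n", (f - s) * 1000
  printf "Content download:  %.0fms\n", (t - f) * 1000
}'
```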
Creating a Reusable Format File
For frequent use, create a format file that you can reference with -w @filename:
# Create the format file
cat > ~/.curl-timing.txt << 'EOF'
\n
url: %{url_effective}\n
redirect: %{redirect_url}\n
http_code: %{http_code}\n
remote_addr: %{remote_ip}:%{remote_port}\n
\n
DNS Lookup: %{time_namelookup}s\n
TCP Connect: %{time_connect}s\n
TLS Handshake: %{time_appconnect}s\n
Pre-Transfer: %{time_pretransfer}s\n
Redirect Time: %{time_redirect}s\n
TTFB: %{time_starttransfer}s\n
Total Time: %{time_total}s\n
\n
Download Size: %{size_download} bytes\n
Upload Size: %{size_upload} bytes\n
Average Speed: %{speed_download} bytes/s\n
\n
EOF
# Use the format file
curl -o /dev/null -s -w "@$HOME/.curl-timing.txt" https://example.com
This gives you a comprehensive view of every request, including the resolved IP address (useful for verifying CDN edge server selection) and redirect information.
Measuring TTFB (Time to First Byte)
TTFB is one of the most important web performance metrics. It measures the time from when the client sends the request to when it receives the first byte of the response. A high TTFB indicates server-side performance issues, slow database queries, or network latency problems:
# Quick TTFB measurement
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com
# TTFB with server processing time isolated
# (curl fills in the %{...} variables itself, so the subtraction must happen
# after the request completes, not inside the -w string)
read tls ttfb <<< "$(curl -o /dev/null -s -w "%{time_appconnect} %{time_starttransfer}" https://example.com)"
echo "Connection setup: ${tls}s"
echo "Server processing: $(echo "$ttfb - $tls" | bc)s"
echo "TTFB: ${ttfb}s"
General TTFB benchmarks:
- Under 200ms — Excellent. Typical for static content served from a CDN or well-optimized dynamic sites.
- 200-500ms — Good. Acceptable for most dynamic websites with database queries.
- 500ms-1s — Needs improvement. Investigate caching, database optimization, or server resources.
- Over 1s — Poor. Likely indicates server overload, unoptimized queries, or missing cache layers.
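These thresholds fold naturally into a quick check. A sketch, assuming example.com as the default test URL; the rate function simply mirrors the list above:

```shell
#!/bin/bash
# ttfb-check.sh - classify a TTFB measurement against common benchmarks
URL="${1:-https://example.com}"

rate() {
  awk -v t="$1" 'BEGIN {
    if (t < 0.2)      print "Excellent"
    else if (t < 0.5) print "Good"
    else if (t < 1.0) print "Needs improvement"
    else              print "Poor"
  }'
}

ttfb="$(curl -o /dev/null -s -w "%{time_starttransfer}" "$URL")"
echo "TTFB: ${ttfb}s ($(rate "$ttfb"))"
```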
Comparing Performance With and Without CDN
To verify that your CDN is actually improving performance, test the same URL both through the CDN and directly against your origin server:
# Test through CDN (normal resolution)
echo "=== Through CDN ==="
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s | Total: %{time_total}s | IP: %{remote_ip}\n" \
https://example.com
# Test directly against origin (bypass CDN using --resolve)
echo "=== Direct to Origin ==="
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s | Total: %{time_total}s | IP: %{remote_ip}\n" \
--resolve example.com:443:203.0.113.10 \
https://example.com
The --resolve flag overrides DNS resolution, letting you force the request to a specific IP without modifying your hosts file. This is invaluable for A/B testing different servers, CDN configurations, or load balancer backends. For a deeper understanding of CDN architecture, see our CDN overview.
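The same idea scales to several candidate backends in one pass. A sketch; the IP addresses here are documentation placeholders for your own origin or load-balancer addresses:

```shell
#!/bin/bash
# compare-backends.sh - measure the same hostname against different backend IPs
HOST="example.com"
IPS=("203.0.113.10" "203.0.113.20")   # placeholder backend IPs - use your own

for ip in "${IPS[@]}"; do
  # --resolve pins the hostname to this IP for this request only
  curl -o /dev/null -s \
    -w "$ip  TTFB: %{time_starttransfer}s | Total: %{time_total}s\n" \
    --resolve "$HOST:443:$ip" "https://$HOST" \
    || echo "$ip  request failed"
done
```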
Scripting Repeated Tests for Benchmarking
Single measurements can be misleading due to network variability, cold caches, and server load fluctuations. Running multiple tests and analyzing the results gives you much more reliable data:
#!/bin/bash
# benchmark.sh - Run repeated cURL timing tests
URL="${1:-https://example.com}"
RUNS="${2:-10}"
OUTPUT="curl-benchmark-$(date +%F-%H%M%S).csv"
echo "url,run,dns_lookup,tcp_connect,tls_handshake,ttfb,total_time,http_code,remote_ip" > "$OUTPUT"
for i in $(seq 1 "$RUNS"); do
curl -o /dev/null -s -w "%{url_effective},$i,%{time_namelookup},%{time_connect},%{time_appconnect},%{time_starttransfer},%{time_total},%{http_code},%{remote_ip}\n" \
"$URL" >> "$OUTPUT"
sleep 1 # Pause between requests to avoid rate limiting
done
echo "Results saved to $OUTPUT"
echo ""
echo "Summary:"
echo "--------"
awk -F',' 'NR>1 {
dns+=$3; tcp+=$4; tls+=$5; ttfb+=$6; total+=$7; n++
}
END {
printf " Avg DNS: %.3fs\n", dns/n
printf " Avg TCP: %.3fs\n", tcp/n
printf " Avg TLS: %.3fs\n", tls/n
printf " Avg TTFB: %.3fs\n", ttfb/n
printf " Avg Total: %.3fs\n", total/n
printf " Runs: %d\n", n
}' "$OUTPUT"
This script outputs a CSV file that you can import into a spreadsheet or graphing tool for visual analysis. The one-second pause between requests helps you avoid rate limiting and keeps consecutive measurements from overlapping.
Comparing Multiple URLs
#!/bin/bash
# compare-urls.sh - Compare TTFB across multiple URLs
URLS=(
"https://example.com"
"https://example.com/api/endpoint"
"https://cdn.example.com/static/main.css"
)
printf "%-45s %10s %10s %10s\n" "URL" "DNS" "TTFB" "Total"
printf "%-45s %10s %10s %10s\n" "---" "---" "----" "-----"
for url in "${URLS[@]}"; do
result=$(curl -o /dev/null -s -w "%{time_namelookup} %{time_starttransfer} %{time_total}" "$url")
dns=$(echo "$result" | awk '{print $1}')
ttfb=$(echo "$result" | awk '{print $2}')
total=$(echo "$result" | awk '{print $3}')
printf "%-45s %9ss %9ss %9ss\n" "$url" "$dns" "$ttfb" "$total"
done
Testing HTTP/2 and HTTP/3 Performance
Modern web servers and CDNs support HTTP/2 and HTTP/3. You can explicitly test different protocol versions with cURL:
# Force HTTP/1.1
curl -o /dev/null -s -w "HTTP/1.1 - TTFB: %{time_starttransfer}s | Total: %{time_total}s\n" \
--http1.1 https://example.com
# Force HTTP/2
curl -o /dev/null -s -w "HTTP/2 - TTFB: %{time_starttransfer}s | Total: %{time_total}s\n" \
--http2 https://example.com
# Test HTTP/3 (requires curl 7.66+ compiled with HTTP/3 support)
curl -o /dev/null -s -w "HTTP/3 - TTFB: %{time_starttransfer}s | Total: %{time_total}s\n" \
--http3 https://example.com
# Check which HTTP version was actually used
curl -o /dev/null -s -w "Protocol: %{http_version}\n" https://example.com
HTTP/2 typically shows the biggest improvement on pages with many resources, as it multiplexes multiple requests over a single TCP connection. For a single-resource measurement like these, the difference may be minimal. HTTP/3 uses QUIC (UDP-based) and can show significant improvements on high-latency or lossy networks: its handshake combines transport and TLS setup, and resumed connections can send application data with zero round trips (0-RTT).
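To avoid errors on builds without HTTP/3 support (stock curl packages often lack it), the per-protocol checks can be wrapped in a helper that consults curl --version first. A sketch:

```shell
#!/bin/bash
# compare-protocols.sh - measure TTFB under each HTTP version this curl build supports
URL="${1:-https://example.com}"
features="$(curl --version)"   # lists HTTP2/HTTP3 in the Features line when available

try() {  # try <flag> <feature-keyword>
  if ! grep -q "$2" <<< "$features"; then
    echo "$1: not supported by this curl build"
    return
  fi
  curl -o /dev/null -s "$1" \
    -w "$1 - protocol %{http_version} | TTFB: %{time_starttransfer}s\n" "$URL" \
    || echo "$1: request failed"
}

try --http1.1 "curl"    # HTTP/1.1 is always available
try --http2   "HTTP2"
try --http3   "HTTP3"
```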
Diagnosing Slow Responses
When a page is slow, the timing breakdown tells you exactly where the bottleneck is:
- High time_namelookup — DNS resolution is slow. Check your DNS resolver, consider switching to a faster one, or verify that DNS caching is working.
- High TCP connect time (time_connect minus time_namelookup) — Network latency between you and the server. This could indicate geographic distance or network congestion.
- High TLS time (time_appconnect minus time_connect) — TLS negotiation is slow. Check your TLS configuration, enable session resumption, or verify that OCSP stapling is configured.
- High server processing time (time_starttransfer minus time_appconnect) — The server is taking too long to generate the response. Investigate application code, database queries, or server resource constraints.
- High download time (time_total minus time_starttransfer) — The response body is large or the bandwidth is limited. Consider enabling compression or reducing response size.
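That checklist can be automated: measure once, compute every delta, and report the dominant phase. A sketch using the article's sample values so it runs offline (swap in the commented curl line for a live check):

```shell
#!/bin/bash
# find-bottleneck.sh - report which request phase dominates the total time
# For a live measurement, replace the read line below with:
#   read dns tcp tls ttfb total <<< "$(curl -o /dev/null -s \
#     -w "%{time_namelookup} %{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}" \
#     https://example.com)"
read dns tcp tls ttfb total <<< "0.024 0.058 0.131 0.247 0.312"  # sample values

bottleneck="$(awk -v d="$dns" -v c="$tcp" -v s="$tls" -v f="$ttfb" -v t="$total" 'BEGIN {
  phase["DNS"]      = d
  phase["TCP"]      = c - d
  phase["TLS"]      = s - c
  phase["Server"]   = f - s
  phase["Download"] = t - f
  for (p in phase) if (phase[p] > best) { best = phase[p]; name = p }
  printf "%s (%.0fms)", name, best * 1000
}')"
echo "Slowest phase: $bottleneck"
```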
Additional Useful cURL Options for Testing
# Follow redirects and include redirect time in measurements
curl -o /dev/null -s -L -w "Redirects: %{num_redirects} | Redirect time: %{time_redirect}s | Total: %{time_total}s\n" \
https://example.com
# Test with compression to see the real-world size
curl -o /dev/null -s --compressed -w "Size: %{size_download} bytes | Total: %{time_total}s\n" \
https://example.com
# Test a specific HTTP method
curl -o /dev/null -s -X POST -w "TTFB: %{time_starttransfer}s\n" \
-d '{"key":"value"}' -H "Content-Type: application/json" \
https://example.com/api/endpoint
# Include response headers in the output for debugging
curl -s -D - -o /dev/null -w "\nTTFB: %{time_starttransfer}s\n" https://example.com
Summary
cURL's timing variables give you precise, scriptable measurements of every phase of an HTTP request, from DNS lookup through content delivery. By creating reusable format files and benchmark scripts, you can systematically monitor performance, compare CDN effectiveness, diagnose bottlenecks, and track improvements over time. The ability to isolate DNS resolution, TLS negotiation, server processing, and transfer time makes cURL an essential tool in any web performance testing toolkit. For production monitoring, consider integrating cURL-based checks into your infrastructure monitoring alongside NOC.org CDN for optimized delivery.
Want to improve your website's performance? Explore NOC.org plans for CDN and monitoring solutions.