cURL is an amazing tool (available by default on macOS and Linux) that lets an administrator transfer data to and from remote servers, most commonly over HTTP/HTTPS against URLs (i.e., websites). Think of it as a terminal-based browser that doesn’t try to parse the HTML.
For example, if you want to get the HTML content for noc.org, all you would need to do is type:
curl noc.org
The response will be the site’s source code. You can also run it with -D - to dump the response headers to the terminal:
$ curl -s -D - https://noc.org | head
HTTP/2 200
server: nginx
date: Tue, 04 May 2021 17:09:58 GMT
content-type: text/html; charset=UTF-8
vary: Accept-Encoding
noc-cdn-location: cdn-edge-usa-west-la
noc-cdn-cachestatus: HIT
One of the cool and interesting features of curl is that it also lets you get timing details for an HTTP/HTTPS request.
For example, you can extract the time it took to do the DNS lookup, the connect time, the time to first byte, the total time, and quite a bit more. They are available via curl’s write-out variables:
Variable | Description |
---|---|
`%{time_namelookup}` | Time for DNS lookup |
`%{time_connect}` | Connect time |
`%{time_starttransfer}` | Time to first byte |
`%{time_total}` | Total time |
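These format strings get unwieldy on the command line. curl can also read the -w template from a file with the @ prefix, so a sketch like this (the file name curl-format.txt is arbitrary) keeps things readable:

```shell
# Store the write-out template once (the file name is just an example)
cat > curl-format.txt <<'EOF'
DNS Lookup: %{time_namelookup} sec
Time to connect: %{time_connect} sec
Time to first byte: %{time_starttransfer} sec
Total time: %{time_total} sec
EOF

# -w @file tells curl to read the format string from that file
curl -o /dev/null -s -w @curl-format.txt https://noc.org
```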
They can be formatted with the -w (--write-out) option. So if you want to get these four values for an HTTPS request to noc.org, you can do it with this command:
$ curl -o /dev/null -s -w "DNS Lookup: %{time_namelookup} sec\nTime to connect: %{time_connect} sec\nTime to first byte: %{time_starttransfer} sec\nTotal time: %{time_total} sec\n" https://noc.org
DNS Lookup: 0.001506 sec
Time to connect: 0.012004 sec
Time to first byte: 0.067635 sec
Total time: 0.089123 sec
It’s easy to see that the DNS lookup is pretty fast (likely cached), and that it took about 0.01 sec to establish the TCP connection, 0.06 sec to get the first byte of the page, and 0.08 sec to fetch all the content.
Or maybe try CNN.com to compare with your own site:
$ curl -o /dev/null -s -w "DNS Lookup: %{time_namelookup} sec\nTime to connect: %{time_connect} sec\nTime to first byte: %{time_starttransfer} sec\nTotal time: %{time_total} sec\n" https://cnn.com
DNS Lookup: 0.001483 sec
Time to connect: 0.010323 sec
Time to first byte: 0.055365 sec
Total time: 0.815822 sec
And you can see that it took 0.8 seconds to get all the content.
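A single measurement can be noisy (caches warm up, networks fluctuate), so it can help to run the same request a few times and average the results. Here is a small sketch that repeats the request five times and averages time_total with awk, using noc.org as the example host from above:

```shell
# Run the request 5 times, printing only time_total on each run,
# then average the values with awk.
for i in 1 2 3 4 5; do
  curl -o /dev/null -s -w '%{time_total}\n' https://noc.org
done | awk '{ sum += $1 } END { printf "average: %.6f sec over %d runs\n", sum / NR, NR }'
```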
And if you are ever having performance issues, you can use these variables to pinpoint where the time is going. Those are not the only options available, either: you can use %{size_download} to get the total number of bytes downloaded, %{response_code} to get the HTTP response code, and %{http_version} to get the HTTP version used.
Here is a fun example:
$ curl -o /dev/null -s -w "HTTP Version: %{http_version}\nPage size: %{size_download}\nResponse code: %{response_code}\nDNS Lookup: %{time_namelookup} sec\nTime to connect: %{time_connect} sec\nTime to first byte: %{time_starttransfer} sec\nTotal time: %{time_total} sec\n" https://www.cnn.com
HTTP Version: 2
Page size: 1109810
Response code: 200
DNS Lookup: 0.001340 sec
Time to connect: 0.010408 sec
Time to first byte: 0.081696 sec
Total time: 0.744870 sec
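And if your curl is recent enough (the %{json} write-out variable appeared around curl 7.70), you can dump every write-out variable at once as a JSON object and filter it with a tool like jq (assumed to be installed here):

```shell
# %{json} emits all write-out variables as one JSON object (curl 7.70+).
# jq then picks out just the fields of interest.
curl -o /dev/null -s -w '%{json}' https://noc.org |
  jq '{http_version, response_code, time_total}'
```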
Hopefully this helps you when troubleshooting performance issues, or just when testing how your site is loading.