Quick performance tips for static sites on nginx
22 May 2021
These are some simple things you can do when tuning a static site for performance and download size.
HTTP/2 and SSL
There was a time when bundling js and css into minified blobs was a necessity for a performant site, but HTTP/2 multiplexing has made this redundant.
While encryption is not explicitly a requirement for HTTP/2, agents such as Firefox and Chrome only support HTTP/2 over HTTPS. If you don't have HTTPS configured, your first step may be to set up letsencrypt.
Enabling HTTP/2 is then simply a matter of adding http2 to the listen directives in your static site config's server block:
listen 443 ssl http2;
listen [2a01:4f8:c2c:4783::c0:ffee]:443 ssl http2;
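For context, a minimal server block might look like the following sketch. The domain, certificate paths (letsencrypt defaults) and document root are placeholders for your own setup:

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name example.com;

    # Paths as created by certbot; adjust to your certificate locations.
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/html;
}
```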
Brotli
For text content, consider installing brotli alongside gzip. It offers higher compression ratios with similar CPU overhead to gzip. If you install nginx from FreeBSD ports, brotli should be a built-in config option. I also found a decent-looking guide for installing ngx-brotli on Ubuntu. It differs from other guides in that it does not recommend rebuilding the whole of nginx.
As with HTTP/2, there is nothing inherent to brotli which requires HTTPS, but user agents tend not to request it on plain HTTP connections.
You'll need to enable the brotli modules in the main nginx.conf:
load_module "modules/ngx_http_brotli_filter_module.so";
load_module "modules/ngx_http_brotli_static_module.so";
Then enable brotli compression for the set of static filetypes you intend to serve:
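A sketch of that configuration, using ngx_brotli's directives; the type list here is illustrative (text/html is compressed by default), and the compression level is a reasonable middle ground for on-the-fly use:

```nginx
brotli on;
brotli_comp_level 6;
brotli_types text/plain text/css application/javascript application/json image/svg+xml;
brotli_static on;
```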
If gzip is not yet enabled, you should consider configuring that as well. It's as above, but with the equivalent gzip directives.
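A minimal sketch, assuming the same set of filetypes (gzip_static requires nginx to be built with ngx_http_gzip_static_module, which most packages include):

```nginx
gzip on;
gzip_comp_level 6;
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
gzip_static on;
```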
You may have noticed the brotli_static on; line in the above config. This enables optional serving of pre-compressed assets from disk. That is, if you are serving index.html and create index.html.br containing a brotli-compressed version, the compressed file will be served instead. This has two advantages: nginx no longer has to compress content on the fly, and you can use much higher compression ratios without impacting client performance.
I can't provide specific examples for static site generators, but the idea is that each time you update a file you should invoke a hook which creates compressed versions of that file. If this is a command-line job, the invocation you need is something like:
gzip -k -9 -f $file
brotli -k -Z -f $file
The -k switch keeps the original; you'll need that for legacy agents. Maximum compression is set with -9 for gzip and -Z for brotli (apparently it goes to 11, but this may be a funny joke). The -f switch forces overwrite. I'm not sure how these tools operate from an atomicity perspective; that is, is there a risk you'll serve a partially compressed file? I don't worry about it.
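The per-file commands above can be wrapped into a simple hook script. This is a sketch, not tied to any particular static site generator; the function name and the filetype list are my own choices, and brotli is only invoked if it's actually installed:

```shell
# precompress DIR: write .gz (and .br, if brotli is available) siblings
# for every text asset under DIR, so gzip_static/brotli_static can serve them.
precompress() {
    find "$1" -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' -o -name '*.svg' \) |
    while IFS= read -r f; do
        gzip -k -9 -f "$f"              # -k keep original, -9 max compression, -f overwrite
        if command -v brotli >/dev/null 2>&1; then
            brotli -k -Z -f "$f"        # -Z is brotli's maximum quality (11)
        fi
    done
}
```

You would then call something like `precompress /var/www/html` from your site generator's post-build hook.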
The steps outlined here should help you serve a reasonably light and high-performance site from commodity hardware to modern user agents. If you want a starting point for further tuning, you can find out about configuring queue sizes, port ranges and other kernel-y bits in "Tuning FreeBSD for the highload" and "Tuning NGINX for Performance".