Note: this article I wrote originally appeared on the Humaan blog.
This article gets pretty techy! If that doesn’t sound like your bag, here’s a quick summary: the HTTP network protocol has existed since the early days of the web, and it’s about to be succeeded by HTTP/2 which will make communications between servers and browsers more efficient. It also means we need to change the way we optimise our websites to take advantage of the technology, so we don’t work against it.
The dawn of HTTP/2 is upon us. Since 1999, we’ve been using the Hypertext Transfer Protocol version 1.1 which isn’t particularly efficient. After many years of debating, HTTP/2 has been standardised, approved, and is now on its way to a browser near you. Before we see what HTTP/2 brings to the table, we should have a look at how it came to be.
A brief history
The year is 2009. Google, not satisfied with the speed of the web, developed an internal project known as SPDY. SPDY (not an acronym, pronounced ‘speedy’) aimed to reduce page load time by multiplexing resources over one connection between server and client, rather than opening a new connection for each resource required to load the page.
By early 2011, SPDY was running on all Google services and happily serving to millions of users. A year later, Facebook and Twitter both implemented SPDY on the majority of their services (and within 12 months, SPDY was on all their services).
Almost three years after the initial draft, HTTP/2, née HTTP 2.0, was approved as a standard by the Internet Engineering Steering Group (IESG) and proudly bears the moniker RFC 7540. Mum and dad would be so proud. Now that we know the history behind how HTTP/2 came to fruition, what does it actually mean for you and me?
Getting ready for HTTP/2
Thankfully, there’s no deadline for enabling HTTP/2 on your server, and it’s fully backwards compatible with HTTP 1.1 clients. Over cleartext, a client that supports HTTP/2 sends an Upgrade header with the token h2c (over TLS, the h2 protocol is negotiated during the handshake instead). If that header/token combination isn’t in the request, it’s safe to say the client doesn’t support HTTP/2, so serve them content over HTTP 1.1. Otherwise, the connection is upgraded and the settings from the HTTP2-Settings header are processed by the server.
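To make the negotiation concrete, here’s a minimal sketch of the server-side check. The header names come from RFC 7540; the function itself is a hypothetical illustration, not code from any real server.

```python
# Sketch of cleartext HTTP/2 upgrade detection (header names per RFC 7540).
# The negotiate_protocol function is hypothetical, for illustration only.

def negotiate_protocol(headers):
    """Return 'h2c' if the client offered a cleartext HTTP/2 upgrade,
    otherwise fall back to HTTP/1.1."""
    upgrade = headers.get("Upgrade", "")
    tokens = [t.strip() for t in upgrade.split(",")]
    # A valid h2c upgrade request must also carry HTTP2-Settings.
    if "h2c" in tokens and "HTTP2-Settings" in headers:
        return "h2c"
    return "http/1.1"

print(negotiate_protocol({
    "Upgrade": "h2c",
    "HTTP2-Settings": "AAMAAABkAARAAAAAAAIAAAAA",
}))  # → h2c
print(negotiate_protocol({}))  # → http/1.1
```

If the client never sent the header, nothing changes: the server simply keeps talking HTTP 1.1, which is what makes the rollout painless.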
For security and privacy reasons, SPDY was built with SSL as a requirement (that’s right, SPDY wouldn’t serve content in cleartext). When SPDY was first drafted into HTTP/2, a requirement for serving HTTP/2 over Transport Layer Security (TLS) 1.2 was requested. Given the significant hassles required to get SSL certificates configured correctly, not to mention the costs associated with purchasing and maintaining said certificate, this requirement was finally dropped and HTTP/2 can serve content over cleartext (when the Upgrade token h2c is specified). c for cleartext. Got it?
On a side note, the Let’s Encrypt project by the Internet Security Research Group (ISRG) that’s launching in September 2015 aims to take the pain and cost out of SSL certificates. I personally can’t wait until the project is publicly available to everyone in a few months.
Support for HTTP/2 is relatively good at this stage, given that the standard was only approved several months ago. All the common browsers either support HTTP/2 already or will in the coming months. Chrome, Firefox, and Internet Explorer currently only support HTTP/2 over TLS (so no cleartext HTTP/2 until full implementations are completed). Server-wise, IIS supports HTTP/2 in the Windows 10 beta, and Apache supports HTTP/2 with mod_h2 (and little hacks), though this should improve soon. Nginx is still working on a module, which should be released by the end of 2015.
HTTP/2, like SPDY, can keep the connection between the client and server open, continuously sending data upstream and downstream without opening a new connection for each request. There’s no ping pong match between the client and the server – everything travels through the existing open connection.
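The difference is easy to see with a toy model (not a benchmark). The assumption below – that an HTTP 1.1 browser opens up to six parallel connections per host – is a common browser heuristic, not part of either spec.

```python
# Toy model: TCP connections needed to fetch a page's resources.
# Assumes a typical HTTP/1.1 per-host limit of 6 parallel connections;
# HTTP/2 multiplexes every request over a single connection.

def connections_http1(num_resources, per_host_limit=6):
    # HTTP/1.1 with keep-alive still serialises requests per connection,
    # so browsers open several connections, capped at the per-host limit.
    return min(num_resources, per_host_limit)

def connections_http2(num_resources):
    # All streams share one multiplexed connection.
    return 1 if num_resources > 0 else 0

print(connections_http1(40))  # → 6
print(connections_http2(40))  # → 1
```

Forty resources means six connection setups (each with its own TCP handshake) under HTTP 1.1, versus a single one under HTTP/2 – that’s the overhead multiplexing removes.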
A change in process
To work around the overhead of downloading each resource separately, practices like image spriting and resource concatenation have become the de facto way to improve your site’s performance. Ironically, these practices will actually be harmful to websites that serve HTTP/2-compatible clients.
Which means… no more concatenating! You can serve the core CSS required across the whole site, plus a separate CSS file for page-specific styles. If the user only visits one page, they’ve only downloaded the core styles and the styles specific to that page. Depending on the size of your CSS, you could save tens or hundreds of kilobytes on each visit. Phenomenal stuff when you think about it.
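A sketch of what that split might look like on the server side – the page names and file names here are made up for illustration:

```python
# Hypothetical illustration of split stylesheets under HTTP/2: each page
# lists only the files it needs, and multiplexing keeps the extra
# requests cheap. Page and file names are invented for this example.

PAGE_STYLES = {
    "home":    ["core.css", "home.css"],
    "contact": ["core.css", "contact.css"],
}

def stylesheets_for(page):
    """Return the stylesheets for a page, falling back to core only."""
    return PAGE_STYLES.get(page, ["core.css"])

print(stylesheets_for("home"))     # → ['core.css', 'home.css']
print(stylesheets_for("unknown"))  # → ['core.css']
```

Under HTTP 1.1 those two files would have justified a concatenation step; under HTTP/2 the second request piggybacks on the connection that’s already open.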
The lower cost of sending data to the client doesn’t mean you can go back to serving uncompressed JPEGs, PNGs and the like. You still need to be smart about which assets are sent to the client, and about their size. JavaScript module loaders like RequireJS will more than likely see another rise in popularity, as setting up the r.js optimiser/combiner tool will no longer be required.
Ready for prime time?
Depending on the timeliness of full HTTP/2 support by browser and server vendors, we’re hopeful that by early 2016 the Humaan dev team will be developing with the HTTP/2-first mindset.
While the HTTP/2 standard isn’t perfect, it’s a heck of a lot better than HTTP 1.1 in many ways, and the future for fast websites looks rather bright indeed. To learn more about HTTP/2, I highly recommend reading “http2 explained” by Daniel Stenberg (the genius behind cURL).