Notes on HTTP/2
Someone at work today linked my team this talk by Brad Fitzpatrick which has a really great introduction to HTTP/2.
HTTP/2 is the most recent version of the HTTP specification. The original HTTP spec was released in 1991 and defined a single `GET` keyword. You’d open a TCP socket to a remote server that spoke HTTP, pass `GET` and the document address (like `GET /mydocument`), and the web server would helpfully return the document you requested. That was it.
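Just to make concrete how little there was to it, here’s a sketch of building an HTTP/0.9 request (the function name and path are just for illustration):

```python
def http09_request(path):
    # The whole request is a single line: no version, no headers, no body.
    return f"GET {path}\r\n".encode("ascii")

print(http09_request("/mydocument"))  # b'GET /mydocument\r\n'
```

Send those bytes over a TCP socket and the server would write the document straight back and hang up.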
This worked pretty well for a while, but people realized it might be nice to use this same protocol to actually send material data the other way, from the client to the server. The HTTP/1.0 spec was released in 1996 and with it came the first notion of HTTP verbs. Rather than just `GET` and a document, you could `POST` some data to an endpoint using the same syntax.
HTTP/1.0 also defined the notion of status codes for the first time. The server could start representing error conditions by returning a 400 for a client error or a 500 for a server error. This was, of course, the beginnings of the modern web, where we still use these primitives today! HTTP/1.0 also defined the concept of headers, which allow semantic key-value metadata to be attached to requests and responses.
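Put together, a 1.0-era request carries a verb, headers, and a body. A sketch of what those bytes look like (the host and path are hypothetical):

```python
def http10_post(host, path, body):
    # Verbs, headers, and a request body were all new in HTTP/1.0.
    lines = [
        f"POST {path} HTTP/1.0",
        f"Host: {host}",  # Host only became mandatory in HTTP/1.1
        "Content-Type: application/x-www-form-urlencoded",
        f"Content-Length: {len(body)}",
        "",  # a blank line separates the headers from the body
    ]
    return "\r\n".join(lines).encode("ascii") + b"\r\n" + body

print(http10_post("example.com", "/guestbook", b"name=alice").decode())
```

The server’s reply mirrors this shape, starting with a status line like `HTTP/1.0 200 OK` followed by its own headers and body.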
In both HTTP/0.9 and HTTP/1.0, each request/response pair would use a unique TCP connection. When the client would make a request, it would open a TCP connection to the server, and send some data. The server would then send some data back and close the connection.
This is pretty inefficient, because it means the client and server churn through lots of unique TCP connections, which are fairly expensive to set up. Each one requires a three-way handshake just to initialize the connection, and then bandwidth is limited for a while because of TCP slow start.
HTTP/1.1 came soon after, in 1999. It made two important optimizations to the way HTTP handled underlying connections. The first was the `Connection` header. By default in HTTP/1.1, TCP connections are kept open after each request/response cycle, unless the `Connection: close` header is set. This means clients don’t need to do the entire TCP handshake for each request they make, and they can avoid hitting slow start.
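You can watch keep-alive happen with nothing but the standard library. This sketch spins up a local HTTP/1.1 server that records each new TCP connection it accepts, then issues three requests from one client:

```python
import http.server
import socketserver
import threading
import http.client

connections = []  # one entry per accepted TCP connection

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

class CountingServer(socketserver.TCPServer):
    def get_request(self):
        request, addr = super().get_request()
        connections.append(addr)  # record each fresh TCP connection
        return request, addr

server = CountingServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
for _ in range(3):
    conn.request("GET", "/")
    conn.getresponse().read()
conn.close()
server.shutdown()

print(len(connections))  # prints 1: one connection served all three requests
```

Without keep-alive, that counter would read 3, with a full handshake (and a fresh slow-start window) paid for each one.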
HTTP/1.1 also introduces this concept of “pipelining” whereby multiple HTTP requests can be sent at once. The client fires off several HTTP requests after one another and the server responds in order. The responses are sequenced in the same order that the requests came in, but there’s no need for the client to block before issuing the next request it knows it needs.
I was a little surprised to hear Brad write HTTP/1.1 pipelining off as a nearly complete failure that’s seen very little adoption. As an assignment in a networking class in college, we had to write a pipelined web server, and I assumed this behavior was pretty standard on the internet. I thought it was pretty tough at the time, and it turns out I’m not the only one who finds pipelining confusing. Enough servers don’t implement the behavior correctly that most clients just opt not to take advantage of it at all. Google Chrome, for example, doesn’t bother pipelining requests, and instead achieves request concurrency by opening several (6) concurrent TCP connections to the server.
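The mechanics are simple to demonstrate, even if real-world servers botch them: fire off both requests before reading anything, then read the responses back in order. This sketch pipelines two requests to a local stdlib server:

```python
import socket
import threading
import http.server
import socketserver

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = self.path.encode()  # echo the path so we can see the order
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = socketserver.TCPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)
# Pipelining: both requests go out before we read a single response byte.
sock.sendall(
    b"GET /first HTTP/1.1\r\nHost: localhost\r\n\r\n"
    b"GET /second HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
)
data = b""
while chunk := sock.recv(4096):
    data += chunk
server.shutdown()

print(data.count(b"200 OK"))  # prints 2; /first always arrives before /second
```

Both responses come back on the one connection, strictly in request order, which is exactly the property that sets up head-of-line blocking.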
There are some other important features in HTTP/1.1 that we still use quite heavily. HTTP/1.1 brought virtual hosting via the `Host` header, which allowed multiple “virtual” domains to be hosted off the same IP. There’s also this feature called byte ranges that I didn’t know about, which lets the server send portions of data incrementally using the `206 Partial Content` status code! This can be useful for transferring large content like images or videos without sending all the data at once.
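A byte-range request is just an ordinary `GET` with a `Range` header. A sketch of what the client sends (host and path are made up):

```python
def range_request(host, path, start, end):
    # Ask for just bytes [start, end] of the resource. A server that
    # supports byte ranges replies "206 Partial Content" with a
    # Content-Range header like "bytes 0-499/1234".
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Range: bytes={start}-{end}\r\n"
        "\r\n"
    ).encode("ascii")

print(range_request("example.com", "/video.mp4", 0, 499).decode())
```

This is what lets a video player seek into the middle of a file, or a download resume where it left off.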
(Thanks @georgevreilly for the notes here!)
What’s new in HTTP/2
HTTP/2 is the next iteration of the HTTP protocol. It doesn’t actually change the semantics of the protocol: a `GET` request is still a `GET` request, and a 404 still means not found. HTTP/2 is simply an iteration on the wire protocol of HTTP. It’s based on the SPDY protocol that Google introduced in Chrome in 2009 for connecting to their own servers. One of the reasons it’s a little bit controversial is that it’s a binary protocol. This makes it a little harder to debug and implement, but it allows more complex multiplexing and chunking and has the potential to condense the amount of data on the wire at a protocol level.
One important thing about HTTP/2 is that we can multiplex data much more effectively over a single TCP connection. This solves a problem with HTTP/1.1 called “head of line” blocking. To illustrate, imagine a long line of people buying train tickets. A couple of people get serviced really quickly, but then some guy who doesn’t know what he’s doing gets up to the counter. He’s asking all kinds of questions, trying to figure out the payment system, not understanding the attendant’s answers, and generally holding everyone else up. Even though your purchase is maybe going to be quick, it’s held up behind those of everyone before you in the line. The same thing can happen with HTTP requests. If there’s one request that takes a long time to service, then all the other requests are stuck behind it. This is true even when requests are pipelined, because the order in which the server sends responses must match the order in which the client sent requests. Opening more connections (like Chrome does) helps, but it can be expensive for the server to maintain 6x more connections than it otherwise would.
HTTP/2 lets us multiplex multiple HTTP request/response pairs over the same TCP connection, by introducing its own framing standard and breaking HTTP data into chunks on top of the underlying TCP connection. This is a bit complicated, and I really suggest watching the talk, where Brad talks his way through implementing an HTTP/2 stack in pure Go.
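To give a flavor of that framing layer: every HTTP/2 frame begins with a fixed 9-octet header, and the 31-bit stream identifier is what lets frames from different request/response pairs interleave on one connection. A sketch of decoding one:

```python
def parse_frame_header(data):
    # HTTP/2 frame header (RFC 7540, section 4.1), 9 octets total:
    # 24-bit payload length, 8-bit type, 8-bit flags, then one
    # reserved bit plus a 31-bit stream identifier.
    length = int.from_bytes(data[0:3], "big")
    frame_type, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A hypothetical HEADERS frame (type 0x1, END_HEADERS flag 0x4) on
# stream 3, carrying a 16-byte payload:
raw = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (3).to_bytes(4, "big")
print(parse_frame_header(raw))  # (16, 1, 4, 3)
```

Because frames are small and tagged with their stream, a slow response just means its stream goes quiet for a while; frames for other streams keep flowing around it.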
I’m not convinced HTTP/2 being a binary protocol is really all that problematic. HTTP/1.1 is increasingly being delivered encrypted with TLS or gzipped, either of which effectively makes the protocol binary anyway, so it’s not clear we’ve really lost all that much by going all the way. Even though we can no longer `nc` to a server and watch packets flying by, we have other tools that allow us to introspect connection data and state.
One other interesting thing about HTTP/2 is that, in practice, it only really works over TLS. Similar to the way TLS negotiates a cipher suite, HTTP/2 uses a TLS extension called ALPN (Application-Layer Protocol Negotiation). This basically provides a way of letting the other end of the connection know which application protocols are or are not supported, so both sides can agree to speak HTTP/2 as part of the TLS handshake itself. There are other methods of upgrading connections to HTTP/2, but apparently this is the only one people actually use.
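From the client side, offering HTTP/2 via ALPN is a one-liner on the TLS context. A sketch using Python’s `ssl` module:

```python
import ssl

# Offer HTTP/2, falling back to HTTP/1.1, via ALPN in the TLS handshake.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # listed in preference order

# After wrapping a socket with ctx.wrap_socket(sock, server_hostname=...),
# ssock.selected_alpn_protocol() reports what the server picked --
# "h2" if it agreed to speak HTTP/2, "http/1.1" otherwise.
```

Because the negotiation rides along inside the handshake, agreeing on HTTP/2 costs no extra round trips.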
There are lots of other cool HTTP/2 features, like built-in Huffman coding of header data, cleaner ways of terminating connections, and server push, which allows the server to “force preload” content to the client.
There’s probably lots I’ve missed here. Let me know on twitter if I’ve missed any really important or cool features!