As of July 2012, the group developing SPDY has stated publicly that it is working toward standardisation (available as an Internet Draft). The first draft of HTTP 2.0 uses SPDY as the working base for its specification draft and editing.

Design

The goal of SPDY is to reduce web page load time. This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required. TLS encryption is nearly ubiquitous in SPDY implementations, and transmission headers are gzip- or DEFLATE-compressed by design (in contrast to HTTP, where headers are sent as human-readable text). Moreover, servers may hint or even push content instead of awaiting individual requests for each resource of a web page. SPDY requires the use of SSL/TLS (with the TLS extension NPN) and does not support operation over plain HTTP.
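The header-compression point above can be illustrated with a minimal sketch using Python's standard zlib module. This is not the actual SPDY framing or its shared compression dictionary; the header names and values below are invented for illustration. It only shows the idea that a DEFLATE round-trip turns the human-readable header block into a compact binary blob and back.

```python
import zlib

# Hypothetical header block, formatted as readable text the way plain
# HTTP would send it; SPDY instead DEFLATE-compresses such blocks.
headers = (
    "method: GET\r\n"
    "scheme: https\r\n"
    "host: example.com\r\n"
    "path: /index.html\r\n"
    "user-agent: demo/1.0\r\n"
).encode("ascii")

compressed = zlib.compress(headers)     # DEFLATE-compressed block on the wire
restored = zlib.decompress(compressed)  # receiver inflates it back

assert restored == headers
```

The win grows with repetition: real header blocks repeat the same names (and often values) request after request, which is exactly what DEFLATE's dictionary exploits.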
Stochastic Fair Blue (SFB) for the Linux kernel

Stochastic Fair Blue (SFB) is an active queue management algorithm for packet routers that attempts to simultaneously: bound queue length (and hence latency); minimise packet drops; maximise link utilisation; be fair to reactive aggregates; and reliably detect non-reactive aggregates (aggregates that don't respond to congestion-control indications) and put them into a so-called penalty box. SFB doesn't require much tuning, and uses a very small amount of memory for book-keeping.

The main issue with SFB is that the notion of fairness it pursues is not necessarily the one you want: SFB enforces resource fairness between flows, meaning that all flows get roughly the same amount of buffer space. It is not clear how well this translates into fairness in throughput between flows, notably with varying RTTs and different implementations of congestion control. In other words, SFB will not guarantee the same rate in bytes per second or packets per second between flows, but it will come pretty close.
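The mechanism behind the penalty box can be sketched as follows. This is a toy model, not the Linux implementation: the bin counts, threshold, and probability step are assumptions, and real SFB also rotates its hash functions over time. Each flow hashes into one bin per level; its drop probability is the minimum over its bins, so a non-reactive flow must saturate a bin at every level before it is punished, while reactive flows that merely share a bin with it escape via the other levels.

```python
import random

LEVELS, BINS = 2, 16   # hash levels x bins per level (assumed sizes)
BIN_LIMIT = 5          # per-bin queue threshold, in packets (assumed)
DELTA = 0.02           # Blue-style probability increment (assumed)

p = [[0.0] * BINS for _ in range(LEVELS)]    # marking probability per bin
qlen = [[0] * BINS for _ in range(LEVELS)]   # queued packets per bin

def bins_for(flow_id):
    # One bin per level, via independent hashes of the flow identifier.
    return [(lvl, hash((lvl, flow_id)) % BINS) for lvl in range(LEVELS)]

def enqueue(flow_id):
    """Return True if the packet is accepted, False if dropped."""
    chosen = bins_for(flow_id)
    for lvl, b in chosen:
        if qlen[lvl][b] >= BIN_LIMIT:        # bin over threshold: push p up
            p[lvl][b] = min(1.0, p[lvl][b] + DELTA)
    p_min = min(p[lvl][b] for lvl, b in chosen)
    if p_min >= 1.0:                         # penalty box: non-reactive flow
        return False
    if random.random() < p_min:              # probabilistic early drop
        return False
    for lvl, b in chosen:
        qlen[lvl][b] += 1
    return True

def dequeue(flow_id):
    for lvl, b in bins_for(flow_id):
        qlen[lvl][b] = max(0, qlen[lvl][b] - 1)
        if qlen[lvl][b] == 0:                # bin drained: ease p back down
            p[lvl][b] = max(0.0, p[lvl][b] - DELTA)
```

A flow that never slows down fills its bins, drives their probabilities to 1.0, and is then dropped outright; a reactive flow backs off before that, draining its bins and letting their probabilities decay. The per-bin state (two floats' worth here) is the "very small amount of memory for book-keeping".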
RED in a Different Light « jg's Ramblings

Update May 8, 2012: “Controlling Queue Delay” describes a new AQM algorithm by Kathie Nichols and Van Jacobson.

Congestion and queue management in Internet routers has been a topic since the early years of the Internet and its first congestion collapse. “Congestion collapse” does not have the same visceral feel for most people using the Internet today as it does for a few of us older people: large parts of the early Internet actually stopped functioning almost entirely, and a set of algorithms was added to TCP/IP to ensure collapse does not happen in the future. These include slow start, congestion avoidance, fast recovery and, at a later date, ECN (Explicit Congestion Notification), which has not so far seen wide use and is a subject of ongoing research to determine if it can be deployed. Bufferbloat much larger than the RTTs of the paths destroys the fundamental congestion avoidance of the TCP protocol’s servo system, as I documented. Read on for more detail…
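The two growth regimes named above can be sketched in a few lines. This is a hedged illustration of the classic congestion-window shape, not any particular TCP stack: the initial window and ssthresh value are assumptions, window units are segments, and one loop iteration stands in for one RTT.

```python
def cwnd_trace(rounds, ssthresh=16):
    """Congestion window per RTT: exponential, then linear growth."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2    # slow start: double the window each RTT
        else:
            cwnd += 1    # congestion avoidance: one segment per RTT
    return trace

print(cwnd_trace(8))  # → [1, 2, 4, 8, 16, 17, 18, 19]
```

The linear phase is the "servo": small additive probes met by timely loss or ECN signals. When a bloated buffer many RTTs deep absorbs those probes, the feedback arrives far too late, which is the failure the post describes.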