

SPDY

As of July 2012, the group developing SPDY has stated publicly that it is working toward standardisation (available as an Internet Draft).[3] The first draft of HTTP 2.0 uses SPDY as the working base for its specification draft and editing.[4]

The goal of SPDY is to reduce web page load time.[9] This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.[1][10] TLS encryption is nearly ubiquitous in SPDY implementations, and transmission headers are gzip- or DEFLATE-compressed by design[11] (in contrast to HTTP, where the headers are sent as human-readable text). Moreover, servers may hint at or even push content instead of awaiting individual requests for each resource of a web page.[12] SPDY requires the use of SSL/TLS (with the TLS extension NPN) and does not support operation over plain HTTP.

Stochastic Fair Blue (SFB) for the Linux kernel

Stochastic Fair Blue (SFB) is an active queue management algorithm for packet routers that attempts to simultaneously: bound queue length (and hence latency); minimise packet drops; maximise link utilisation; be fair to reactive aggregates; and reliably detect non-reactive aggregates (aggregates that don't respond to congestion control indications) and put them into a so-called penalty box. SFB doesn't require much tuning, and uses a very small amount of memory for book-keeping.

The main issue with SFB is that the notion of fairness it pursues is not necessarily the one you want: SFB enforces resource fairness between flows, meaning that all flows get roughly the same amount of buffer space. It is not clear how well this translates into fairness in throughput between flows, notably with varying RTT and different implementations of congestion control.
In other words, SFB does not guarantee the same rate in bytes per second or packets per second between flows, but it comes pretty close.
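As a rough sketch of what using SFB looks like in practice, the qdisc can be attached to an interface with the tc tool. The device name, handle, and parameter values below are illustrative assumptions, not taken from the text above; see tc-sfb(8) for the actual defaults.

```shell
# Illustrative only: attach Stochastic Fair Blue as the root qdisc of eth0
# (device name and parameter values are assumptions).
#   limit     - hard cap on the total queue length, in packets
#   target    - per-bucket queue length at which marking/dropping ramps up
#   increment - step by which the drop probability rises under congestion
#   decrement - step by which it decays when a bucket drains
tc qdisc add dev eth0 root handle 1: sfb \
    limit 3000 target 20 increment 0.005 decrement 0.0001

# Inspect the resulting queue discipline and its drop/mark statistics.
tc -s qdisc show dev eth0
```

The small tuning surface shown here reflects the claim above that SFB doesn't require much tuning; in many cases the defaults can simply be left alone.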

RED in a Different Light « jg's Ramblings

Update, May 8, 2012: “Controlling Queue Delay” describes a new AQM algorithm by Kathie Nichols and Van Jacobson.

Congestion and queue management in Internet routers has been a topic since the early years of the Internet and its first congestion collapse. “Congestion collapse” does not have the same visceral feel for most people using the Internet today as it does for a few of us older people. Large parts of the early Internet actually stopped functioning almost entirely, and a set of algorithms was added to TCP/IP to ensure collapse would not happen in the future. These include slow start, congestion avoidance, fast recovery and, at a later date, ECN (Explicit Congestion Notification), which has not so far seen wide use and is a subject of ongoing research to determine whether it can be deployed. Bufferbloat, with buffering much larger than the RTTs of the paths, destroys the fundamental congestion avoidance of the TCP protocol's servo system, as I documented. Read on for more detail…
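The “Controlling Queue Delay” paper mentioned in the update describes the CoDel algorithm, which later shipped as a Linux qdisc. As a hedged illustration of deploying it (device name and values are assumptions, not from the post; see tc-codel(8)):

```shell
# Illustrative only: replace the default root qdisc on eth0 with CoDel,
# the AQM from "Controlling Queue Delay". Values are assumptions.
#   target   - acceptable standing queue delay before dropping begins
#   interval - window over which the minimum standing delay is tracked
tc qdisc replace dev eth0 root codel target 5ms interval 100ms

# Check queue statistics to see how much standing delay remains.
tc -s qdisc show dev eth0
```

CoDel's design goal matches the bufferbloat argument above: it bounds standing queue delay directly rather than reacting to queue length, so oversized buffers no longer defeat TCP's congestion-avoidance feedback loop.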

HFSC Scheduling with Linux
© 2005 Klaus Rechert, Patrick McHardy
© 2006 Martin A. Brown (translation)

For complex traffic shaping scenarios, hierarchical algorithms are necessary. Current versions of Linux support the algorithms HTB and HFSC. While HTB basically rearranges the token bucket filter (TBF) into a hierarchical structure, thereby retaining the principal characteristics of TBF, HFSC allows proportional distribution of bandwidth as well as control and allocation of latencies. When network access is used by multiple entities or for different services, some sort of reasonable resource management is required. In one scenario, two users share a single Internet connection with a capacity of 1000 kbit; each user should have control of at least 500 kbit of that capacity at any given moment.

[Figure 1: Hierarchy of shared network access.]

Assume that all packets to be sent conform to a fixed size of 1500 bytes and all classes are sending at maximum rate. The HFSC qdisc is attached to the root of a device with:

tc qdisc add dev $dev root handle $ID: hfsc [default $classID ]
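A sketch of how the two-user hierarchy of Figure 1 might be expressed on top of that root qdisc. The device name, handle, class ids, and the choice of link-share (ls) plus upper-limit (ul) service curves are assumptions for illustration, not prescribed by the text:

```shell
# Illustrative sketch: a 1000 kbit link shared by two users, each
# guaranteed at least 500 kbit. Names and ids are assumptions.
tc qdisc add dev eth0 root handle 1: hfsc default 20

# Root class capped at the 1000 kbit link capacity.
tc class add dev eth0 parent 1: classid 1:1 hfsc ls rate 1000kbit ul rate 1000kbit

# One class per user: a 500 kbit link-share guarantee, with permission
# to borrow up to the full link capacity while the other user is idle.
tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls rate 500kbit ul rate 1000kbit
tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls rate 500kbit ul rate 1000kbit
```

With both classes backlogged and 1500-byte packets, the link-share curves divide the link evenly, 500 kbit each, which is exactly the proportional distribution of bandwidth the text attributes to HFSC.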