SPDY

As of July 2012, the group developing SPDY has stated publicly that it is working toward standardisation (available as an Internet Draft).[3] The first draft of HTTP 2.0 is using SPDY as the working base for its specification draft and editing.[4]

The goal of SPDY is to reduce web page load time.[9] This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.[1][10] TLS encryption is nearly ubiquitous in SPDY implementations, and transmission headers are gzip- or DEFLATE-compressed by design[11] (in contrast to HTTP, where the headers are sent as human-readable text). Moreover, servers may hint at or even push content instead of awaiting individual requests for each resource of a web page.[12] SPDY requires the use of SSL/TLS (with the TLS extension NPN) and does not support operation over plain HTTP.
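To make that concrete, here is a minimal sketch of a SPDY endpoint in Node.js using the third-party spdy package; the certificate paths and port are placeholders, and TLS is mandatory because the protocol does not run over plain HTTP:

    var spdy = require('spdy');
    var fs = require('fs');

    // SPDY requires TLS, so a key and certificate are mandatory
    // (the paths below are placeholders).
    var options = {
      key: fs.readFileSync(__dirname + '/keys/server.key'),
      cert: fs.readFileSync(__dirname + '/keys/server.crt')
    };

    spdy.createServer(options, function (req, res) {
      // All of a page's subresources can now share this one connection.
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('hello over SPDY\n');
    }).listen(443);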

http://en.wikipedia.org/wiki/SPDY

Related: nginx optimization

NGINX as a SPDY load balancer for Node.js Recently we wanted to integrate SPDY into our stack at SocialRadar to make requests to our API a bit more speedy (hurr hurr). Particularly for multiple subsequent requests in rapid succession, avoiding that TCP handshake on every request would be quite nice. Android has supported SPDY in its networking library for a little while, and iOS added SPDY support in iOS 8, so we could get some nice performance boosts on our two most used platforms. Previously, we had clients connecting via normal HTTPS on port 443 to an Elastic Load Balancer, which would handle the SSL negotiation and proxy requests into our backend running Node.js over standard HTTP. This was working nicely for us, and we didn’t have to handle any SSL certs in our Node.js codebase, which was beneficial both for cleanliness and for performance.
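The backend half of that setup looks roughly like the sketch below, assuming the load balancer terminates TLS and forwards plain HTTP; the port and header name are common conventions, not SocialRadar's actual configuration:

    var http = require('http');

    // The proxy in front (ELB or NGINX) terminates SSL/SPDY and forwards
    // plain HTTP here, so this process never touches a certificate.
    http.createServer(function (req, res) {
      // The original scheme typically arrives in a header set by the proxy.
      var scheme = req.headers['x-forwarded-proto'] || 'http';
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('served over ' + scheme + '\n');
    }).listen(8000, '127.0.0.1'); // internal port, never exposed directly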

Stochastic Fair Blue (SFB) for the Linux kernel Stochastic Fair Blue (SFB) is an active queue management algorithm for packet routers that attempts to simultaneously: bound queue length (and hence latency); minimise packet drops; maximise link utilisation; be fair to reactive aggregates; reliably detect non-reactive aggregates (aggregates that don't respond to congestion control indications) and put them into a so-called penalty box. SFB doesn't require much tuning, and uses a very small amount of memory for book-keeping. The main issue with SFB is that the notion of fairness it pursues is not necessarily the one you want: SFB enforces resource fairness between flows, meaning that all flows get roughly the same amount of buffer space. It is not clear how well this translates into fairness in throughput between flows, notably with varying RTT and different implementations of congestion control. In other words, SFB will guarantee neither the same rate in bytes per second nor the same rate in packets per second between flows, but it will come pretty close.
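For intuition only, here is a toy sketch of the Blue update that SFB runs in each of its hash bins; the constants are illustrative, not the kernel's, and the real implementation also handles hashing, multiple bin levels, and freeze times:

    // Toy model of one SFB bin running the Blue update (illustrative only).
    function makeBin() {
      return { p: 0 }; // marking/dropping probability for this bin
    }

    function onQueueOverflow(bin) {
      bin.p = Math.min(1, bin.p + 0.02); // congestion: mark more aggressively
    }

    function onLinkIdle(bin) {
      bin.p = Math.max(0, bin.p - 0.005); // idle link: back off
    }

    // SFB hashes each flow into one bin per level and marks its packets with
    // the minimum probability across those bins; a flow whose bins are all
    // saturated (p == 1) is treated as non-reactive and sent to the penalty box.
    function markingProbability(binsForFlow) {
      return Math.min.apply(null, binsForFlow.map(function (b) { return b.p; }));
    }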

Hardening node.js for production part 2: using nginx to avoid node.js load This is part 2 of a quasi-series on hardening node.js for production systems (e.g. the Silly Face Society). The previous article covered a process supervisor that creates multiple node.js processes, listening on different ports for load balancing. This article will focus on HTTP: how to lighten the incoming load on node.js processes. Update: I’ve also posted a part 3 on zero downtime deployments in this setup.
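A sketch of the idea (the series' own configuration differs): drop static-file middleware from the Node.js process entirely, on the assumption that NGINX in front serves /static/ from disk and handles gzip, so Node.js only sees application traffic:

    var express = require('express');
    var app = express();

    // No express.static() here: NGINX is assumed to serve /static/ directly
    // from disk and to compress responses, leaving Node.js only dynamic routes.
    app.get('/api/faces', function (req, res) {
      res.json({ faces: [] }); // placeholder application route
    });

    app.listen(3000, '127.0.0.1'); // NGINX proxies to this internal port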

RED in a Different Light « jg's Ramblings Update May 8, 2012: “Controlling Queue Delay” describes a new AQM algorithm by Kathie Nichols and Van Jacobson. Congestion and queue management in Internet routers has been a topic since the early years of the Internet and its first congestion collapse. “Congestion collapse” does not have the same visceral feel to most using the Internet today as it does for a few of us older people.

Using Node.js with NGINX on Debian Updated by Joseph Dooley Node.js is a JavaScript platform which can serve dynamic, responsive content. JavaScript is usually a client-side, browser language like HTML or CSS. However, Node.js is a server-side JavaScript platform, comparable to PHP.
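The guide's starting point is something along these lines: a bare Node.js HTTP server (port and reply text are placeholders) that NGINX can later be placed in front of:

    var http = require('http');

    // A minimal dynamic response; NGINX will later proxy requests here.
    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Node.js\n');
    }).listen(8080, '127.0.0.1');

    console.log('Server running at http://127.0.0.1:8080/');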

HFSC Scheduling with Linux © 2005 Klaus Rechert, Patrick McHardy © 2006 Martin A. Brown (translation) For complex traffic shaping scenarios, hierarchical algorithms are necessary. Current versions of Linux support the algorithms HTB and HFSC. While HTB basically rearranges token bucket filter (TBF) into a hierarchical structure, thereby retaining the principal characteristics of TBF, HFSC allows proportional distribution of bandwidth as well as control and allocation of latencies. This allows for better and more efficient use of a connection for situations in which both bandwidth-intensive data services and interactive services share a single network link.
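To make "proportional distribution of bandwidth" concrete, here is a toy weighted-share calculation over a class hierarchy; this illustrates the goal, not HFSC's actual service-curve mechanism, and the weights and link rate are invented:

    // Toy proportional-share split for a class hierarchy (illustrative only;
    // real HFSC allocates via service curves, not static weights).
    function allocate(node, bandwidth) {
      node.rate = bandwidth;
      if (!node.children) return;
      var total = node.children.reduce(function (s, c) { return s + c.weight; }, 0);
      node.children.forEach(function (c) {
        allocate(c, bandwidth * c.weight / total);
      });
    }

    var root = {
      children: [
        { weight: 7, children: [{ weight: 1 }, { weight: 1 }] }, // bulk data
        { weight: 3 }                                            // interactive
      ]
    };
    allocate(root, 1000); // 1000 kbit/s link -> 700/300 split, then 350/350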

Optimising NginX, Node.JS and networking for heavy workloads Used in conjunction, NginX and Node.JS are the perfect partnership for high-throughput web applications. They’re both built using event-driven design principles and are able to scale to levels far beyond the classic C10K limitations afflicting standard web servers such as Apache. Out-of-the-box configuration will get you pretty far, but when you need to start serving upwards of thousands of requests per second on commodity hardware, there’s some extra tweaking you must perform to squeeze every ounce of performance out of your servers. This article assumes you’re using NginX’s HttpProxyModule to proxy your traffic to one or more upstream node.js servers. We’ll cover tuning sysctl settings in Ubuntu 10.04 and above, as well as node.js application and NginX tuning.
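On the Node.js side, one commonly adjusted knob is the HTTP agent's socket pool for outbound requests; the sketch below uses arbitrary values and a placeholder hostname, and the right numbers depend on your workload:

    var http = require('http');

    // Raise the per-host cap on concurrent outbound sockets (older Node.js
    // releases defaulted to a very low value); 64 is arbitrary.
    http.globalAgent.maxSockets = 64;

    // A dedicated keep-alive agent reuses upstream connections instead of
    // paying the TCP handshake on every request.
    var agent = new http.Agent({ keepAlive: true, maxSockets: 64 });

    http.get({ host: 'upstream.example.com', path: '/', agent: agent }, function (res) {
      res.resume(); // drain the response so the socket returns to the pool
    });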

Configuring Nginx and SSL with Node.js Nginx is a high-performance HTTP server as well as a reverse proxy. Unlike traditional servers, Nginx follows an event-driven, asynchronous architecture. As a result, the memory footprint is low and performance is high. If you are running a Node.js-based web app, you should seriously consider using Nginx as a reverse proxy. Nginx can be very efficient in serving static assets. For all other requests it will talk to your Node.js backend and send the response to the client.
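A sketch of the Node.js side of that pairing (the port and route are placeholders): the app listens only on localhost, and because NGINX is assumed to terminate SSL on 443 and set the X-Forwarded-* headers, Express can be told to trust them:

    var express = require('express');
    var app = express();

    // NGINX terminates SSL on port 443 and is assumed to proxy_pass here,
    // setting X-Forwarded-* headers on the way through.
    app.enable('trust proxy');

    app.get('/', function (req, res) {
      // With 'trust proxy' enabled this reports 'https' for proxied requests.
      res.send('scheme as seen by the client: ' + req.protocol);
    });

    app.listen(3000, '127.0.0.1');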

Real-time Web Applications with WebSockets and NGINX - NGINX In the blog post NGINX as a WebSockets Proxy we discussed using NGINX to proxy WebSocket application servers. In this post we will discuss some of the architecture and infrastructure issues to consider when creating real-time applications with WebSockets, including the components you will need and how you can structure your systems. WebSockets adds interactivity to HTTP. HTTP works well for web applications that are request/response based, where the communications flow always has the client initiating a request and a backend server providing a response. If, however, a web application requires a more interactive, message-based interaction between the client and server, something beyond simple HTTP is needed.
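As a minimal sketch of the kind of backend such a proxy fronts, using the third-party ws package (the port and echo behaviour are placeholders):

    var WebSocket = require('ws');

    // NGINX is assumed to proxy HTTP Upgrade requests through to this port.
    var wss = new WebSocket.Server({ port: 8010 });

    wss.on('connection', function (socket) {
      socket.on('message', function (message) {
        // The server can also push at any time, which is the message-based
        // interactivity that plain request/response HTTP lacks.
        socket.send('echo: ' + message);
      });
    });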

Using NGINX with Node.js and WebSockets with Socket.IO In this post we’ll talk about using NGINX with Node.js and Socket.IO. Our post about building real-time web applications with WebSockets and NGINX has been quite popular, so in this post we’ll continue with documentation and best practices using Socket.IO. What Is Socket.IO? Socket.IO is a WebSocket API that’s become quite popular with the rise of Node.js applications. The API is well known because it makes building realtime apps, like online games or chat, simple.

Handle GET and POST Request in Express 4 As per the documentation, GET requests are meant to fetch data from a specified resource and POST requests are meant to submit data to a specified resource. Express allows you to handle GET and POST requests using the instance of express. Due to the deprecation of the Connect middleware, however, handling POST requests seems confusing to many people. GET requests: handling a GET request in Express seems so easy.
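A minimal Express 4 sketch of both verbs (route and field names are placeholders); the separate body-parser module fills the gap left by the deprecated Connect bundle for reading POST bodies:

    var express = require('express');
    var bodyParser = require('body-parser');
    var app = express();

    // Express 4 no longer bundles body parsing, so mount it explicitly.
    app.use(bodyParser.urlencoded({ extended: false }));

    // GET: fetch data from a specified resource.
    app.get('/user', function (req, res) {
      res.json({ name: 'placeholder' });
    });

    // POST: submit data to a specified resource; parsed fields land on req.body.
    app.post('/user', function (req, res) {
      res.send('created user ' + req.body.name);
    });

    app.listen(3000);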
