

As of July 2012, the group developing SPDY has stated publicly that it is working toward standardisation (available as an Internet Draft).[3] The first draft of HTTP 2.0 uses SPDY as the working base for its specification draft and editing.[4] The goal of SPDY is to reduce web page load time.[9] This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.[1][10] TLS encryption is nearly ubiquitous in SPDY implementations, and transmission headers are gzip- or DEFLATE-compressed by design[11] (in contrast to HTTP, where the headers are sent as human-readable text). Moreover, servers may hint at or even push content instead of awaiting individual requests for each resource of a web page.[12] SPDY requires the use of SSL/TLS (with the TLS extension NPN) and does not support operation over plain HTTP.

NGINX as a SPDY load balancer for Node.js Recently we wanted to integrate SPDY into our stack at SocialRadar to make requests to our API a bit more speedy (hurr hurr). Particularly for multiple subsequent requests in rapid succession, avoiding that TCP handshake on every request would be quite nice. Android has supported SPDY in its networking library for a little while, and iOS added SPDY support in iOS 8, so we could get some nice performance boosts on our two most used platforms. Previously, we had clients connecting via normal HTTPS on port 443 to an Elastic Load Balancer, which would handle the SSL negotiation and proxy requests into our backend running Node.js over standard HTTP. However, when we wanted to enable SPDY, we discovered that AWS Elastic Load Balancers don’t support SPDY. In order for SPDY to work optimally, it would need an end-to-end channel.[1] So, I set out to find alternatives. Ultimately I settled on using NGINX, as it has SPDY proxy support, it’s fast, and it’s relatively easy to configure.
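A setup like the one described — NGINX terminating SPDY/TLS and proxying plain HTTP to Node.js — might be sketched as follows. The hostname, certificate paths, and backend port are illustrative assumptions, not SocialRadar's actual config:

```nginx
# Terminate SPDY/TLS at NGINX and proxy plain HTTP to a local Node.js backend.
# The "spdy" listen flag applies to older NGINX releases (roughly 1.4-1.8);
# later versions replaced SPDY support with "http2".
server {
    listen 443 ssl spdy;
    server_name api.example.com;                     # assumed hostname

    ssl_certificate     /etc/nginx/ssl/server.crt;   # assumed paths
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        proxy_pass http://127.0.0.1:3000;            # assumed Node.js port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```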

Hardening node.js for production part 2: using nginx to avoid node.js load | Arg! Team Blog This is part 2 of a quasi-series on hardening node.js for production systems (e.g. the Silly Face Society). The previous article covered a process supervisor that creates multiple node.js processes, listening on different ports for load balancing. This article will focus on HTTP: how to lighten the incoming load on node.js processes. Our stack consists of nginx serving external traffic by proxying to upstream node.js processes running express.js. Too much talk. Also available as a gist. Perhaps this code dump isn’t particularly enlightening: I’ll try to step through the config and give pointers on how this balances the express.js code. The nginx <-> node.js link First things first: how can we get nginx to proxy / load balance traffic to our node.js instances? The upstream directive specifies that these two instances work in tandem as an upstream server for nginx. upstream alone is not sufficient – nginx needs to know how and when to route traffic to node. We are almost there.
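The upstream/proxy wiring described above can be sketched like this — the ports and server name are assumptions standing in for the original gist, which uses two node.js processes on different ports behind one `upstream` pool:

```nginx
# Two express.js processes on localhost act as one upstream pool for nginx.
upstream node_backend {
    server 127.0.0.1:3000;    # assumed ports for the two node.js instances
    server 127.0.0.1:3001;
}

server {
    listen 80;
    server_name example.com;  # assumed

    location / {
        # Route traffic to the pool; nginx round-robins across the
        # upstream servers by default.
        proxy_pass http://node_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```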

Using Node.js with NGINX on Debian Updated by Joseph Dooley Node.js is a JavaScript platform which can serve dynamic, responsive content. JavaScript is usually a client-side, browser language like HTML or CSS. Install and Configure NGINX This guide can be started immediately after terminal login on a new Linode, it’s written for the root user. Install: Start NGINX: Change the working directory to the NGINX sites-available directory: Create a new sites-available file, replacing with your domain or IP address: /etc/nginx/sites-available/ Change the working directory to the NGINX sites-enabled directory: Create a symlink to the new example sites-available file: Remove the default symlink: Load the new NGINX configuration: Create the Directories and HTML Index File NGINX is now configured. Create the /var/www and /var/www/ directories: Change the working directory: Create the HTML index file: /var/www/ Install Node.js and Write a Web Server
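The commands behind those steps were stripped during extraction; on Debian as root they would look roughly like the following sketch (`example.com` is a stand-in for your domain or IP address):

```sh
apt-get install nginx            # Install
service nginx start              # Start NGINX
cd /etc/nginx/sites-available    # Change to the sites-available directory
#   ...create the new sites-available file for your domain here...
cd /etc/nginx/sites-enabled      # Change to the sites-enabled directory
ln -s /etc/nginx/sites-available/example.com .   # Symlink the new site file
rm default                       # Remove the default symlink
service nginx reload             # Load the new NGINX configuration
mkdir -p /var/www/example.com    # Create the directories for the HTML index
```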

Optimising NginX, Node.JS and networking for heavy workloads | GoSquared Engineering Used in conjunction, NginX and Node.JS are the perfect partnership for high-throughput web applications. They’re both built using event-driven design principles and are able to scale to levels far beyond the classic C10K limitations afflicting standard web servers such as Apache. Out-of-the-box configuration will get you pretty far, but when you need to start serving upwards of thousands of requests per second on commodity hardware, there’s some extra tweaking you must perform to squeeze every ounce of performance out of your servers. This article assumes you’re using NginX’s HttpProxyModule to proxy your traffic to one or more upstream node.js servers. Tuning the network Meticulous configuration of Nginx and Node.js would be futile without first understanding and optimising the transport mechanism over which traffic data is sent. Your system imposes a variety of thresholds and limits on TCP traffic, dictated by its kernel parameter configuration. Highlighting a few of the important ones…
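Kernel thresholds like these are typically raised via sysctl; a sketch of the kind of parameters involved (the values are illustrative, not the article's specific recommendations):

```nginx
# /etc/sysctl.conf additions for a busy proxy host (illustrative values).
net.core.somaxconn = 4096                    # larger accept backlog for listen()
net.ipv4.tcp_max_syn_backlog = 4096          # allow more half-open connections
net.ipv4.ip_local_port_range = 10240 65535   # more ephemeral ports for upstream conns
net.ipv4.tcp_fin_timeout = 15                # reclaim FIN_WAIT sockets sooner
# Apply without rebooting: sysctl -p
```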

Configuring Nginx and SSL with Node.js Nginx is a high-performance HTTP server as well as a reverse proxy. Unlike traditional servers, Nginx follows an event-driven, asynchronous architecture. As a result, the memory footprint is low and performance is high. If you are running a Node.js-based web app you should seriously consider using Nginx as a reverse proxy, as it can be very efficient at serving static assets. Installing Nginx Assuming you already have Node.js installed on your machine, let’s see how to install Nginx. Installation on Mac If you are on a Mac you can use Homebrew to install Nginx easily. Homebrew needs the directory /usr/local to be chown‘d to your username. Once the installation is complete you can start Nginx, and the Nginx config file can be found here: /usr/local/etc/nginx/nginx.conf. Installation on Ubuntu If you are running Ubuntu you can use your package manager to install Nginx.
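The SSL reverse-proxy setup the post builds toward can be sketched like this; the certificate paths, hostname, asset root, and Node port are assumptions for illustration:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                         # assumed

    ssl_certificate     /etc/nginx/ssl/example.crt;  # assumed cert paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # Let Nginx serve static assets directly; it is very efficient at this.
    location /static/ {
        root /var/www/example;                       # assumed asset root
    }

    # Everything else is proxied to the Node.js app over plain HTTP.
    location / {
        proxy_pass http://127.0.0.1:3000;            # assumed Node.js port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```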

Real-time Web Applications with WebSockets and NGINX - NGINX In the blog post NGINX as a WebSockets Proxy we discussed using NGINX to proxy WebSocket application servers. In this post we will discuss some of the architecture and infrastructure issues to consider when creating real-time applications with WebSockets, including the components you will need and how you can structure your systems. WebSockets adds interactivity to HTTP. HTTP works well for web applications that are request/response based, where the communications flow always has the client initiating a request and a backend server providing a response. The ability to create a full-duplex socket connection between the client and server allows for the development of real-time, event-driven web applications that utilize push, poll, or streaming communications — for example, online games, chat, stock tracking, and sports scores. Another area where WebSockets can be used is in developing WebRTC-based applications. Technical requirements for a WebSockets application. Scaling WebSockets to high traffic volumes.
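Proxying WebSockets through NGINX hinges on forwarding the HTTP/1.1 Upgrade handshake to the backend; a minimal sketch (the upstream address and path are assumptions):

```nginx
# WebSocket-aware proxying: pass the Upgrade/Connection headers through so
# the HTTP/1.1 upgrade handshake reaches the application server intact.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;       # plain requests get a normal Connection header
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://127.0.0.1:8010;              # assumed WS backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 3600s;                      # keep idle sockets open
    }
}
```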

Using NGINX with Node.js and WebSockets with Socket.IO In this post we’ll talk about using NGINX with Node.js and Socket.IO. Our post about building real-time web applications with WebSockets and NGINX has been quite popular, so in this post we’ll continue with documentation and best practices using Socket.IO. What Is Socket.IO? Socket.IO is a WebSocket API that’s become quite popular with the rise of Node.js applications. Why Use NGINX for Node.js and Socket.IO? When your application is in production it likely needs to be running on port 80, 443, or both. Socket.IO Configuration After installing Node.js following these instructions, you can install Socket.IO by running npm install socket.io For this example, we will assume that the Socket.IO server is running on port 5000 for your realtime app. In the file that is delivered to your client, such as index.html, add JavaScript such as the following: <script src="/socket.io/socket.io.js"></script><script> var socket = io(); // your initialization code here. </script> NGINX Configuration
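For the port-5000 setup above, the NGINX side might look like this sketch (the server name is an assumption; the backend port comes from the example):

```nginx
server {
    listen 80;
    server_name app.example.com;              # assumed

    location / {
        proxy_pass http://127.0.0.1:5000;     # the Socket.IO server from the example
        proxy_http_version 1.1;
        # Socket.IO can fall back to polling, but its websocket transport
        # needs the upgrade handshake forwarded to the backend.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```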

Handle GET and POST Request in Express 4 As per the documentation, GET requests are meant to fetch data from a specified resource and POST requests are meant to submit data to a specified resource. Express allows you to handle GET and POST requests using the instance of express. Due to the deprecation of Connect middleware, however, handling POST requests seems confusing to many people. GET request: Handling a GET request in Express is easy: var express = require("express"); var app = express(); app.get('/handle', function(request, response){ /* code to perform particular action */ }); GET requests can be cached and remain in browser history. POST request: Express version 4 and above requires an extra middleware layer to handle POST requests. You can install body-parser like so: sudo npm install --save body-parser You then have to import this package in your project and tell Express to use it as middleware. Once configured, you can use the Express router to handle POST requests. In this way you can handle GET and POST requests in Express 4.