
Hardening node.js for production part 2: using nginx to avoid node.js load

This is part 2 of a quasi-series on hardening node.js for production systems (e.g. the Silly Face Society). The previous article covered a process supervisor that creates multiple node.js processes listening on different ports for load balancing. This article focuses on HTTP: how to lighten the incoming load on node.js processes. Update: I’ve also posted a part 3 on zero-downtime deployments in this setup. Our stack consists of nginx serving external traffic by proxying to upstream node.js processes running express.js. As I’ll explain, nginx is used for almost everything: gzip encoding, static file serving, HTTP caching, SSL handling, load balancing and spoon-feeding slow clients. Too much talk: the full configuration is also available as a gist. A config dump on its own isn’t particularly enlightening, so I’ll step through the config and give pointers on how this balances the express.js code. The upstream directive specifies that two node.js instances work in tandem as an upstream server for nginx.
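The gist itself isn’t reproduced here, but the shape of the config described above (an upstream block naming two node.js ports, gzip, static file serving and proxying) looks roughly like the following sketch; the upstream name, ports, paths and domain are illustrative assumptions, not the article’s exact values:

```nginx
# Two node.js processes working in tandem as one upstream.
# Ports 3000/3001 and all names below are assumptions.
upstream silly_face_society_upstream {
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
  keepalive 64;
}

server {
  listen 80;
  server_name example.com;

  # Let nginx, not node.js, do gzip encoding.
  gzip on;
  gzip_types text/plain text/css application/json application/javascript;

  # Serve static assets directly, bypassing node.js entirely.
  location ~ ^/(images/|css/|js/|favicon\.ico) {
    root /var/www/static;
    expires max;
  }

  # Everything else is proxied to the express.js processes.
  location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://silly_face_society_upstream;
  }
}
```

The `keepalive` directive (together with `proxy_http_version 1.1` and a cleared `Connection` header) lets nginx reuse connections to the node.js backends instead of opening a new one per request.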

SPDY As of July 2012, the group developing SPDY has stated publicly that it is working toward standardisation (available as an Internet Draft).[3] The first draft of HTTP 2.0 uses SPDY as the working base for its specification draft and editing.[4] The goal of SPDY is to reduce web page load time.[9] This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.[1][10] TLS encryption is nearly ubiquitous in SPDY implementations, and transmission headers are gzip- or DEFLATE-compressed by design[11] (in contrast to HTTP, where the headers are sent as human-readable text). Moreover, servers may hint at or even push content instead of awaiting individual requests for each resource of a web page.[12] SPDY requires the use of SSL/TLS (with the TLS extension NPN) and does not support operation over plain HTTP.

Relative Media Blog | Getting sailsjs and ghost to play nice on DigitalOcean through nginx So my front-end website uses SailsJs 0.9.3 and my blog uses Ghost. Since both are nodejs based I didn't want to have to host them on separate droplets. I had never tinkered with nginx or reverse proxying before, but decided it was time. For those who aren't aware of what reverse proxying is, Wikipedia explains it quite horribly: In computer networks, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. Basically, nginx can listen on port 80 for incoming requests and then forward them to some other server:port without the "client" knowing. This allows me to forward one domain to localhost:1337 and another to localhost:2368. It's very easy to set up, and these steps assume a few prerequisites. First, install nginx and the required dependencies: sudo apt-get install nginx; Now that nginx is installed, it's time to write up some configuration to reverse proxy stuff out. sudo service nginx restart;
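A minimal sketch of the two reverse-proxy server blocks this describes; the hostnames below are placeholders (the real domains aren't given above), while the ports come from the text: Sails on 1337, Ghost on 2368.

```nginx
# Placeholder hostnames; only the ports (1337, 2368) come from the text.
server {
  listen 80;
  server_name example.com;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://127.0.0.1:1337;  # Sails.js
  }
}

server {
  listen 80;
  server_name blog.example.com;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://127.0.0.1:2368;  # Ghost
  }
}
```

After dropping a file like this into /etc/nginx/sites-enabled/ and running `sudo service nginx restart;`, nginx routes each hostname to its own node process.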

NGINX as a SPDY load balancer for Node.js Recently we wanted to integrate SPDY into our stack at SocialRadar to make requests to our API a bit more speedy (hurr hurr). Particularly for multiple subsequent requests in rapid succession, avoiding that TCP handshake on every request would be quite nice. Android has supported SPDY in its networking library for a little while, and iOS added SPDY support in iOS 8, so we could get some nice performance boosts on our two most used platforms. Previously, we had clients connecting via normal HTTPS on port 443 to an Elastic Load Balancer, which would handle the SSL negotiation and proxy requests into our backend running Node.js over standard HTTP. However, when we wanted to enable SPDY, we discovered that AWS Elastic Load Balancers don't support SPDY; in order for SPDY to work optimally, it would need an end-to-end channel.[1] So, I set out to find alternatives. Ultimately I settled on using NGINX as it had SPDY proxy support, it's fast, and it's relatively easy to configure.
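A sketch of what such an NGINX SPDY terminator could look like, assuming an NGINX build with SPDY support (the `spdy` listen parameter requires nginx built with the SPDY module); the hostname, certificate paths and backend port are all assumptions:

```nginx
# NGINX terminates SSL/SPDY and proxies plain HTTP to node.js.
# Hostname, cert paths and port 8000 are placeholders.
server {
  listen 443 ssl spdy;   # requires an nginx build with the SPDY module
  server_name api.example.com;

  ssl_certificate     /etc/nginx/ssl/api.crt;
  ssl_certificate_key /etc/nginx/ssl/api.key;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8000;  # node.js backend over plain HTTP
  }
}
```

Clients that negotiate SPDY get a single multiplexed TLS connection to NGINX, while the node.js backend keeps speaking ordinary HTTP.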

Nginx Load Balancer and Reverse Proxy for Node.js Applications On Digital Ocean Joe McCann Digital Ocean is rad. A modern VPS with SSD servers for super cheap. Easy to spin up or down. I recently moved a bunch of my static sites to one machine on Digital Ocean. However, to make these sites highly available, I needed to reconfigure my infrastructure a bit: Machine 1 with Nginx installed (to act as load balancer and reverse proxy), Machine 2 with Node.js installed (to serve up the static sites), and Machine 3, which is an exact clone of Machine 2. I created these "droplets", all running Ubuntu 13.04 x64, on Digital Ocean pretty easily and installed Nginx on machine 1 and node.js on machines 2 and 3. For all seven of the websites, I updated their respective A records to point to the load balancer's (machine 1) IP address. Machines 2 and 3 have their own respective IP addresses to which they are referred in the Nginx configuration files. For every site, I have a similar Nginx config file. And voilà, there you have it.
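The per-site config files themselves aren't reproduced above; a minimal sketch of what one could look like, assuming the node.js machines serve HTTP on port 8080 at private IPs 10.0.0.2 and 10.0.0.3 (all placeholders):

```nginx
# Machine 1: balance one site across machines 2 and 3.
# IPs, port and hostname are placeholders.
upstream site_one_backend {
  server 10.0.0.2:8080;
  server 10.0.0.3:8080;
}

server {
  listen 80;
  server_name example.com www.example.com;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://site_one_backend;
  }
}
```

With one such file per site, the A records all point at machine 1, and nginx round-robins requests between the two identical backends.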

Using Node.js with NGINX on Debian Updated by Joseph Dooley Node.js is a JavaScript platform which can serve dynamic, responsive content. JavaScript is usually a client-side, browser language, like HTML or CSS. However, Node.js is a server-side JavaScript platform, comparable to PHP. Node.js often works with other popular server applications like NGINX or Apache. In this guide, NGINX is configured to handle front-end, static file requests, and Node.js is configured to handle back-end file requests. Install and Configure NGINX This guide can be started immediately after terminal login on a new Linode; it is written for the root user. Create the Directories and HTML Index File NGINX is now configured. Create the /var/www and /var/www/ directories: Change the working directory: Create the HTML index file: /var/www/ Install Node.js and Write a Web Server NGINX is now listening on port 80 and serving content. Install the Node Version Manager: Close and reopen your terminal.

Deploy multiple Node applications on one web server in subdomains with Nginx Earlier I wrote about deploying multiple Node applications on one web server in subfolders with Nginx. Even though this approach is fully viable, you should not use it unless there are some really important reasons forcing you to go for it. Given that the application is mounted to a subfolder, you should use relative URLs only in pages. Otherwise the application location must be configured in both Nginx and the application itself. Use of relative URLs has a couple of major drawbacks: if you want to move a page within the hierarchy, you need to update its content. To minimize the maintenance complexity and avoid a performance downgrade, I decided to deploy Node applications in subdomains. The Nginx configuration for this setup is very similar to the one with subfolders, and even a little simpler. Finally, you need to configure a DNS record for pet-project.myhost pointing to your server.
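A minimal sketch of such a subdomain server block; the hostname pet-project.myhost comes from the text, while the application's port (3000) is an assumption:

```nginx
# One server block per subdomain; no subfolder rewriting needed.
# Port 3000 is a placeholder for the app's listening port.
server {
  listen 80;
  server_name pet-project.myhost;

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://127.0.0.1:3000;
  }
}
```

Because the application sits at the root of its own hostname, it can use absolute URLs freely, which is exactly the maintenance headache the subfolder approach creates.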