Nginx optimization

Handle GET and POST Request in Express 4. As per the documentation, GET requests are meant to fetch data from a specified resource, and POST requests are meant to submit data to a specified resource.

Handle GET and POST Request in Express 4

Express allows you to handle GET and POST requests using an instance of express. Due to the deprecation of the Connect middleware, however, handling POST requests seems confusing to many people. GET requests: Handling a GET request in Express is easy. You create an instance of express and call its get method: var express = require("express"); var app = express(); app.get('/handle', function(request, response){ /* code to perform a particular action */ });. GET requests can be cached and remain in the browser history. POST requests: Express version 4 and above requires an extra middleware layer, body-parser, to handle POST requests. You can install body-parser in two ways; one is: sudo npm install --save body-parser.
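Putting the pieces together, here is a minimal sketch of an Express 4 app that handles both verbs; the route path /handle, the port, and the response text are illustrative placeholders rather than anything from the original article.

// Minimal Express 4 app handling GET and POST (illustrative sketch).
var express = require('express');
var bodyParser = require('body-parser');
var app = express();

// body-parser populates request.body for POST requests.
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

// GET: fetch data from the resource.
app.get('/handle', function (request, response) {
  response.send('GET received, query: ' + JSON.stringify(request.query));
});

// POST: submit data to the resource.
app.post('/handle', function (request, response) {
  response.send('POST received, body: ' + JSON.stringify(request.body));
});

app.listen(3000);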

Using NGINX with Node.js and WebSockets with Socket.IO. In this post we’ll talk about using NGINX with Node.js and Socket.IO.

Using NGINX with Node.js and WebSockets with Socket.IO

Our post about building real-time web applications with WebSockets and NGINX has been quite popular, so in this post we’ll continue with documentation and best practices for using Socket.IO. What is Socket.IO? Socket.IO is a WebSocket API that has become quite popular with the rise of Node.js applications. The API is well known because it makes building real-time apps, like online games or chat, simple.
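To make the Socket.IO side concrete, here is a minimal server of the kind that would sit behind NGINX; the port and the 'chat message' event name are illustrative assumptions, not details from the post.

// Minimal Socket.IO server (illustrative sketch).
var app = require('express')();
var http = require('http').createServer(app);
var io = require('socket.io')(http);

io.on('connection', function (socket) {
  // Relay each incoming chat message to every connected client.
  socket.on('chat message', function (msg) {
    io.emit('chat message', msg);
  });
});

// NGINX would typically proxy WebSocket traffic to this port.
http.listen(3000);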

Real-time Web Applications with WebSockets and NGINX - NGINX. In the blog post NGINX as a WebSockets Proxy we discussed using NGINX to proxy WebSocket application servers.

Real-time Web Applications with WebSockets and NGINX - NGINX

In this post we will discuss some of the architecture and infrastructure issues to consider when creating real-time applications with WebSockets, including the components you will need and how you can structure your systems.

WebSockets add interactivity to HTTP

HTTP works well for web applications that are request/response based, where the communications flow always has the client initiating a request and a backend server providing a response. If, however, a web application requires a more interactive, message-based interaction between the client and server, something beyond simple HTTP is needed. In the past there have been techniques used to simulate full-duplex communications over HTTP, but these are complicated and have many drawbacks. Typical real-time use cases include online games, chat, stock tracking, and sports scores. Another area where WebSockets can be used is when developing WebRTC-based applications.
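As a sketch of the proxying piece this architecture relies on, an NGINX server block that upgrades client connections to WebSocket and passes them to a Node.js backend might look like the following; the hostname, port, and upstream name are placeholders.

# Illustrative NGINX reverse proxy for WebSocket traffic (names and ports are placeholders).
upstream node_backend {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;
        # These two headers perform the HTTP Upgrade handshake WebSocket requires.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}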

Configuring Nginx and SSL with Node.js. Nginx is a high-performance HTTP server as well as a reverse proxy.

Configuring Nginx and SSL with Node.js

Unlike traditional servers, Nginx follows an event-driven, asynchronous architecture.
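A minimal sketch of what terminating SSL in Nginx in front of a Node.js app can look like; the certificate paths, hostname, and port are assumptions for illustration, not values from the guide.

# Illustrative TLS termination in front of a Node.js app (paths, names, and ports are placeholders).
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Node.js itself speaks plain HTTP; Nginx handles the TLS handshake.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}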

Using Node.js with NGINX on Debian. Updated by Joseph Dooley.

Using Node.js with NGINX on Debian

Hardening node.js for production part 2: using nginx to avoid node.js load. This is part 2 of a quasi-series on hardening node.js for production systems (e.g. the Silly Face Society).

Hardening node.js for production part 2: using nginx to avoid node.js load

The previous article covered a process supervisor that creates multiple node.js processes, listening on different ports for load balancing. This article will focus on HTTP: how to lighten the incoming load on node.js processes. Update: I’ve also posted a part 3 on zero-downtime deployments in this setup.
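One way to picture the approach is a configuration along these lines, in which nginx balances across the node.js processes, serves static assets itself, and compresses responses; the ports, paths, and domain are illustrative rather than the article's own values.

# Illustrative: nginx absorbs work that would otherwise hit the node.js processes.
upstream node_app {
    # The supervisor from part 1 would start one node.js process per port.
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    server_name example.com;

    # Compress responses in nginx instead of in node.js.
    gzip on;
    gzip_types text/css application/javascript application/json;

    # Serve static assets directly from disk, bypassing node.js entirely.
    location /static/ {
        root /var/www/app;
        expires 7d;
    }

    # Everything else is balanced across the node.js processes.
    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}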

SPDY. As of July 2012, the group developing SPDY has stated publicly that it is working toward standardisation (available as an Internet Draft).[3] The first draft of HTTP 2.0 is using SPDY as the working base for its specification draft and editing.[4] The goal of SPDY is to reduce web page load time.[9] This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.[1][10] TLS encryption is nearly ubiquitous in SPDY implementations, and transmission headers are gzip- or DEFLATE-compressed by design[11] (in contrast to HTTP, where the headers are sent as human-readable text).

SPDY

Moreover, servers may hint or even push content instead of awaiting individual requests for each resource of a web page.[12] SPDY requires the use of SSL/TLS (with the TLS extension NPN) and does not support operation over plain HTTP. NGINX as a SPDY load balancer for Node.js. Recently we wanted to integrate SPDY into our stack at SocialRadar to make requests to our API a bit more speedy (hurr hurr).

NGINX as a SPDY load balancer for Node.js

Particularly for multiple subsequent requests in rapid succession, avoiding a fresh TCP handshake on every request would be quite nice. Android has supported SPDY in its networking library for a little while, and iOS added SPDY support in iOS 8, so we could get some nice performance boosts on our two most used platforms. Previously, we had clients connecting via normal HTTPS on port 443 to an Elastic Load Balancer, which would handle the SSL negotiation and proxy requests into our backend running Node.js over standard HTTP.

This was working nicely for us, and we didn’t have to handle any SSL certs in our Node.js codebase, which was beneficial both for cleanliness and for performance. However, when we wanted to enable SPDY, we discovered that AWS Elastic Load Balancers don’t support SPDY. In order for SPDY to work optimally, it would need an end-to-end channel,[1] so NGINX took over the load-balancing role, terminating SPDY and forwarding requests to the Node.js backend with proxy_pass.
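A sketch of the kind of NGINX configuration this setup implies, assuming an NGINX build that includes the SPDY module (roughly versions 1.5 through 1.9, after which SPDY was superseded by HTTP/2); the certificate paths, hostname, and backend ports are placeholders.

# Illustrative SPDY-terminating load balancer in front of node.js (placeholder names, paths, ports).
upstream api_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    # 'spdy' requires an NGINX build compiled with the SPDY module.
    listen 443 ssl spdy;
    server_name api.example.com;

    ssl_certificate     /etc/nginx/ssl/api.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;

    location / {
        # The backend still speaks plain HTTP; SPDY and TLS end at NGINX.
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}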

Optimising NginX, Node.JS and networking for heavy workloads. Used in conjunction, NginX and Node.JS are the perfect partnership for high-throughput web applications.

Optimising NginX, Node.JS and networking for heavy workloads

They’re both built using event-driven design principles and are able to scale to levels far beyond the classic C10K limitations afflicting standard web servers such as Apache. Out-of-the-box configuration will get you pretty far, but when you need to start serving upwards of thousands of requests per second on commodity hardware, there’s some extra tweaking you must perform to squeeze every ounce of performance out of your servers. This article assumes you’re using NginX’s HttpProxyModule to proxy your traffic to one or more upstream node.js servers. We’ll cover tuning sysctl settings in Ubuntu 10.04 and above, as well as node.js application and NginX tuning. You may be able to achieve similar results if you’re using a Debian Linux distribution, but YMMV if you’re using something else.

Tuning the network

Highlighting a few of the important ones…
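The article's own list of settings is truncated here; purely as an illustration of the kind of sysctl keys commonly raised for this sort of proxying workload, a sketch might look like the following (the values are examples, not the article's recommendations).

# Example /etc/sysctl.conf entries for high-throughput proxying (illustrative values only).

# Allow a larger backlog of pending connections.
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 4096

# Widen the ephemeral port range for many concurrent upstream connections.
net.ipv4.ip_local_port_range = 10240 65535

# Reuse sockets in TIME_WAIT state for new outbound connections.
net.ipv4.tcp_tw_reuse = 1

# Raise the system-wide file descriptor limit.
fs.file-max = 200000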