Scaling & Performance

Increase Ghost and Express.js performance with some nginx tuning. The Ghost blogging platform is shiny and new, and the Markdown editor is GREAT! As a web-performance addict, I wondered how well my Ghost blog, running on a small DigitalOcean instance, was performing.

Load test

I ran a load test using blitz.io, a very simple load-testing tool. I selected the Ireland location, which makes sense since my server is in Europe. I used the advanced preferences to add the Accept-Encoding: gzip,deflate,sdch header to the load test, so that my nginx server sends gzip rather than plain text (plain text would not be a realistic use case).
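Before running the full test, you can sanity-check that nginx actually honours that header; a quick sketch with curl (the hostname is a placeholder):

    # Send the gzip-accepting header, discard the body, dump response headers
    curl -s -o /dev/null -D - \
      -H "Accept-Encoding: gzip,deflate,sdch" \
      http://my-ghost-blog.example.com/ | grep -i "content-encoding"
    # A "Content-Encoding: gzip" line confirms compression is enabled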

Let's see the results. The test dispatched 1,000 users making requests over one minute; here is the resulting graph. Yes, blitz.io has really nice graphs!

What it shows:

- everything looks good until we reach 150 hits/s
- at 200 hits/s we get more timeouts than hits
- at 500 users, we get more errors and timeouts than hits

That's not very good: if this blog post were linked from a big news website, I'd be in trouble. Stats:

- 152 ms response time
- 206 hits/s
- 12.61 MB transferred

Tune nginx

Optimising NginX, Node.JS and networking for heavy workloads | GoSquared Engineering. Used in conjunction, NginX and Node.JS are the perfect partnership for high-throughput web applications.

They're both built using event-driven design principles and are able to scale to levels far beyond the classic C10K limitations afflicting standard web servers such as Apache. Out-of-the-box configuration will get you pretty far, but when you need to start serving upwards of thousands of requests per second on commodity hardware, there's some extra tweaking you must perform to squeeze every ounce of performance out of your servers. This article assumes you're using NginX's HttpProxyModule to proxy your traffic to one or more upstream node.js servers. We'll cover tuning sysctl settings in Ubuntu 10.04 and above, as well as node.js application and NginX tuning.

Tuning the network

Meticulous configuration of Nginx and Node.js would be futile without first understanding and optimising the transport mechanism over which traffic data is sent.

Highlighting a few of the important ones… You can inspect a summary of current socket usage with ss -s.
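For illustration, a sketch of the kind of sysctl settings this sort of tuning touches; the values below are placeholders to benchmark against, not the article's recommendations:

    # /etc/sysctl.conf (excerpt) -- illustrative values, tune for your workload
    # Widen the ephemeral port range for proxy-to-upstream connections
    net.ipv4.ip_local_port_range = 10240 65535
    # Reuse sockets stuck in TIME_WAIT for new outbound connections
    net.ipv4.tcp_tw_reuse = 1
    # Lengthen the accept queue to absorb connection bursts
    net.core.somaxconn = 4096
    net.core.netdev_max_backlog = 4096

    # Apply without a reboot, then inspect the socket summary
    sudo sysctl -p
    ss -s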

Multi-CPU

Blazing fast node.js: 10 performance tips from LinkedIn Mobile. In a previous post, we discussed how we test LinkedIn's mobile stack, including our Node.js mobile server. Today, we'll tell you how we make this mobile server fast. Here are our top 10 performance takeaways for working with Node.js:

1. Avoid synchronous code. By design, Node.js is single threaded. To allow a single thread to handle many concurrent requests, you can never allow the thread to wait on a blocking, synchronous, or long-running operation. A distinguishing feature of Node.js is that it was designed and implemented from top to bottom to be asynchronous. Unfortunately, it is still possible to make synchronous/blocking calls. Our initial logging implementation accidentally included a synchronous call to write to disk.

2. The Node.js http client automatically uses socket pooling: by default, this limits you to 5 sockets per host.
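Both takeaways are easy to demonstrate; a minimal sketch (the log path and internal hostname are made up):

    var fs   = require('fs');
    var http = require('http');

    // Takeaway 1: keep the event loop free. The sync variant
    // (fs.readFileSync) would stall every concurrent request;
    // the callback version lets the single thread keep serving.
    fs.readFile('/var/log/app.log', 'utf8', function (err, data) {
      if (err) throw err;
      console.log('read %d chars without blocking', data.length);
    });

    // Takeaway 2: the default agent pools 5 sockets per host.
    // Raise the cap, or pass agent: false to skip pooling entirely.
    http.globalAgent.maxSockets = 20;
    http.get({ host: 'mobile-api.internal', path: '/ping', agent: false },
      function (res) {
        console.log('status %d, pooling bypassed', res.statusCode);
      });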

3. For static assets, such as CSS and images, use a standard web server instead of Node.js. 4. 5. 6. 7. 8. 9. 10. Try it out.

Multithreading - NodeJS in MultiCore System. Node.js on multi-core machines.

Hardening node.js for production part 2: using nginx to avoid node.js load | Arg! Team Blog. This is part 2 of a quasi-series on hardening node.js for production systems (e.g. the Silly Face Society). The previous article covered a process supervisor that creates multiple node.js processes, listening on different ports for load balancing. This article will focus on HTTP: how to lighten the incoming load on node.js processes. Update: I've also posted part 3, on zero-downtime deployments in this setup. Our stack consists of nginx serving external traffic by proxying to upstream node.js processes running express.js.
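On the multi-core side, node's built-in cluster module achieves an effect similar to that process supervisor from within node itself; a minimal sketch (the port and response body are made up):

    var cluster = require('cluster');
    var http    = require('http');
    var os      = require('os');

    if (cluster.isMaster) {
      // Fork one worker per CPU so the single-threaded
      // runtime doesn't leave cores idle.
      os.cpus().forEach(function () { cluster.fork(); });

      // A crude supervisor: replace any worker that dies.
      cluster.on('exit', function (worker) {
        console.log('worker %d died, forking a new one', worker.process.pid);
        cluster.fork();
      });
    } else {
      // Workers share the same listening socket.
      http.createServer(function (req, res) {
        res.end('served by pid ' + process.pid + '\n');
      }).listen(3000);
    }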

As I'll explain, nginx is used for almost everything: gzip encoding, static file serving, HTTP caching, SSL handling, load balancing and spoon-feeding clients. The idea is to use nginx to prevent unnecessary traffic from hitting our node.js processes; furthermore, we remove as much overhead as possible for the traffic that has to hit node.js. Too much talk: the configuration is also available as a gist. upstream alone is not sufficient: nginx needs to know how and when to route traffic to node.
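The gist itself isn't reproduced in this excerpt, so here is a condensed sketch of that style of configuration; the upstream name, ports, paths and server name are all placeholders, not the article's actual gist:

    # nginx.conf (excerpt) -- illustrative only
    upstream node_backend {
      server 127.0.0.1:3000;
      server 127.0.0.1:3001;
      keepalive 64;
    }

    server {
      listen 80;
      server_name example.com;

      # gzip in nginx so node.js never spends cycles compressing
      gzip on;
      gzip_types text/plain text/css application/json application/javascript;

      # static files come straight off disk, bypassing node.js
      location /static/ {
        root /var/www/app;
        expires 1d;
      }

      # the location block is what actually routes traffic to the upstream
      location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://node_backend;
      }
    }

With this shape, the node processes only ever see dynamic requests, which is exactly the "lighten the incoming load" goal described above.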