
Why Use Node.js? A Comprehensive Tutorial with Examples
Introduction: JavaScript's rising popularity has brought with it a lot of changes, and the face of web development today is dramatically different. The things we can do on the web nowadays with JavaScript running on the server, as well as in the browser, were hard to imagine just a few years ago, or were encapsulated within sandboxed environments like Flash or Java Applets. Before digging into Node.js, you might want to read up on the benefits of using JavaScript across the stack, which unifies the language and the data format (JSON), allowing you to reuse developer resources optimally. As Wikipedia states: "Node.js is a packaged compilation of Google's V8 JavaScript engine, the libuv platform abstraction layer, and a core library, which is itself primarily written in JavaScript." That's a mouthful. More importantly, after over twenty years of the web being built on the stateless request-response paradigm, we finally have web applications with real-time, two-way connections.

Full Stack JavaScript: Backbone, Node, Express & More. The Story: So, you and your co-founder have this great idea for a business, right? You've been adding features in your mind. Frequently, you ask potential customers for their opinions, and they all love it. Ok, so people want it. So finally, you sit down one day and say, "Let's do it!" "Ok, let's create the site," you say. And then you realize the truth: you need to choose a programming language; you need to choose a (modern) platform; you need to choose some (modern) frameworks; you need to configure (and purchase) storage, databases, and hosting providers; you need an admin interface; you need a permissions system; you need a content manager. You want to be lean, you want to be agile. You have tens upon tens of architectural decisions to make. "I'm overwhelmed," you say, and you are. Your proof of concept slowly withers and dies. The Proposal: After abandoning tons of ideas myself in this way, I decided to engineer a solution.

What Makes Node.js Faster Than Java? Every few weeks someone posts a Java vs. Node benchmark, like PayPal's or Joey Whelan's. As maintainers of Node core and contributors to many npm modules, we at StrongLoop are happy to see Node winning lately. Everyone knows benchmarks measure a specific scenario and don't account for all cases. But there is one thing we can all agree on: high concurrency matters. At high levels of concurrency (thousands of connections), your server needs to go asynchronous and non-blocking. While Java or Node or something else may win a given benchmark, no server platform has the non-blocking ecosystem of Node.js today. Until Java or another language ecosystem reaches this level of support for the async pattern (a level Node got to because of async JavaScript in the browser), it won't matter whether raw NIO performance beats Node in any particular benchmark: projects that need big concurrency will choose Node (and put up with its warts) because it's the best way to get the job done.
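To make the concurrency point concrete, here is a minimal sketch (not taken from any of the benchmarks above) of the non-blocking style that lets a single Node process juggle thousands of open connections; the port and the 100 ms simulated I/O delay are placeholders.

```
var http = require('http');

http.createServer(function (req, res) {
  // Simulate slow I/O (a database query, an upstream API call) with a timer.
  // While this request "waits", the event loop keeps accepting and serving
  // other connections; no thread sits blocked.
  setTimeout(function () {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ ok: true }));
  }, 100);
}).listen(3000, function () {
  console.log('listening on :3000');
});
```

Because the wait happens on a timer standing in for real I/O, the event loop simply picks the response back up when the work completes, which is what lets one process hold thousands of concurrent connections.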

10 steps to nodejs nirvana in production | Thruput | Garbage In, Garbage Out: We have been using node.js in production environments for a few years now, since back when it was still at 0.4. We have used node for ecommerce, for ad-serving, as an API server, and for just about everything else, short of calculating the nth Fibonacci number (we use Go for that sort of thing, no kidding). When you run stuff in production, and at scale, there are lessons to be learned and insights to be gleaned, sometimes the hard way. For the impatient, here is the tl;dr: don't reinvent the wheel; follow the Unix way of doing things. As with any high-availability system, you need to make sure that your node process is up all the time and that it starts at boot. Upstart is not available everywhere, but our production environments have always been Ubuntu, where it is available by default. Using upstart is fairly simple: just place the config file in /etc/init. The -u switch sets the uid of the node process, and -l and -e redirect stdout/stderr to files.
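The post's own upstart job (and the tool that takes those -u/-l/-e switches) isn't shown in the excerpt, but a minimal config dropped into /etc/init might look roughly like this on a reasonably recent Upstart (Ubuntu 12.04 or later); the job name, user, and paths are placeholders.

```
# /etc/init/myapp.conf  (hypothetical name and paths)
description "node.js app"

start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]

# restart the process if it ever dies
respawn

# run as an unprivileged user (same spirit as the -u switch above)
setuid nodeuser

# stdout/stderr land in /var/log/upstart/myapp.log
console log

chdir /srv/myapp
exec /usr/bin/node server.js
```

With the job in place, `start myapp`, `stop myapp`, and `restart myapp` manage the process, and it comes up automatically at boot.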

code.tutsplus: Koa.js is an expressive next-generation web framework written for Node.js by the people behind the Express and Connect frameworks. Koa.js leverages generators, a bleeding-edge feature of JavaScript that has not yet made it into the stable versions of Node.js. Koa aims to use generators to save developers from the spaghetti of callbacks, making code less error-prone and thus more manageable. With just 550 lines of code, Koa is an extremely light framework. Before we begin, you will need at least Node version 0.11.x. You can install the latest version of Node using the n module, use other community modules like nvm, or build it from source. To run a JS file that makes use of generators, you need to pass the --harmony flag, or create a shell alias to save yourself from entering the flag every time. Example: sum.next(5);
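For context, a minimal generator-based Koa hello world (Koa 1.x style) looks roughly like the sketch below. This is not the tutorial's own listing; the middleware and port are placeholders, and on the 0.11.x builds mentioned above it needs the --harmony flag.

```
// app.js — run with: node --harmony app.js
var koa = require('koa');
var app = koa();

// Timing middleware: `yield next` hands control to the next middleware
// and resumes here once it finishes, with no nested callbacks.
app.use(function* (next) {
  var start = Date.now();
  yield next;
  this.set('X-Response-Time', (Date.now() - start) + 'ms');
});

// Response middleware.
app.use(function* () {
  this.body = 'Hello from Koa';
});

app.listen(3000);
```

As for the sum.next(5) call at the end of the excerpt: calling next(value) on a paused generator resumes it and makes value the result of the yield expression it was paused on, which is the mechanism Koa builds its middleware chain around.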

Node.js in Production: When running a node application in production, you need to keep stability, performance, security, and maintainability in mind. Outlined here are what I think are the best practices for putting node.js into production. By the end of this guide, the setup will include three servers: a load balancer (lb) and two app servers (app1 and app2). The load balancer will health-check the app servers and balance traffic between them. The app servers will use a combination of systemd and node cluster to load balance and route traffic across multiple node processes on each server. (The original post includes an architecture diagram; photo credit: Digital Ocean.) This article is targeted at readers with beginning operations experience. The final app is hosted here: For this guide I will be using Digital Ocean and Fedora, working off of vanilla Digital Ocean Fedora 20 servers. Later we'll see how to automate the Node.js installation with Ansible.
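The cluster side of that setup is not shown in the excerpt, but the idea is roughly the following sketch; the port and log messages are placeholders.

```
// cluster.js — the master forks one worker per CPU core, and the workers
// all share port 3000; incoming connections are spread across them.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  // Replace any worker that dies so capacity stays constant.
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died; forking a new one');
    cluster.fork();
  });
} else {
  http.createServer(function (req, res) {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(3000);
}
```

systemd then supervises this single master process (starting it at boot and restarting it on failure), while the load balancer health-checks each app server from the outside.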

Fully Loaded Node – A Node.JS Holiday Season, part 2: Episode 2 in the A Node.JS Holiday Season series from Mozilla's Identity team searches for an optimal server application architecture for computation-heavy workloads. This is a prose version of a short talk of the same title given by Lloyd Hilaiel at Node Philly 2012. A Node.JS process runs almost completely on a single processing core, so building scalable servers requires special care. With the ability to write native extensions and a robust set of APIs for managing processes, there are many different ways to design a Node.JS application that executes code in parallel; in this post we'll evaluate these possible designs. The post also introduces the compute-cluster module: a small Node.JS library that makes it easy to manage a collection of processes that distribute computation. The Problem: We chose Node.JS for Mozilla Persona, where we built a server that could handle a large number of requests with mixed characteristics. Approach 1: Just do it on the main thread.
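For a sense of what the compute-cluster approach looks like in use, here is a rough sketch of the enqueue-style API the post describes; the option and method names are from memory of the module's documented interface and may differ slightly, and the message shape and worker computation are placeholders.

```
// parent.js — farm expensive jobs out to a pool of child processes
var computecluster = require('compute-cluster');

// allocate a cluster that runs worker.js in separate processes
var cc = new computecluster({ module: './worker.js' });

// enqueue work; an idle child receives the message, and the callback
// gets whatever the child sends back
cc.enqueue({ input: 'some expensive job' }, function (err, result) {
  if (err) return console.error('job failed:', err);
  console.log('job done:', result);
});
```

```
// worker.js — runs in its own process, so blocking the event loop here
// is fine; it only ever works on one job at a time
process.on('message', function (msg) {
  var output = doExpensiveComputation(msg.input); // placeholder function
  process.send(output);
});
```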

lloyd/node-compute-cluster. Why you should use Node.js for CPU-bound tasks - Neil Kandalgaonkar: In the first part of this discussion, I covered the algorithms behind the Node.js-based Letterpress solver I wrote, called LetterPwn (source). But I wasn't just interested in making a cheater for a word game; I wanted to explore what it would be like to write a computation-heavy service in Node.js. Many programmers, even Node.js aficionados, would say that's a ridiculous thing to want to do. They would say it goes "against the grain" of the platform, which is just a way for inexperienced programmers to whip up simple servers that perform pretty well. I've got more experience than many, but I was interested in Node.js because, for all its infelicities, I still like JavaScript: it has just enough "good parts" to support functional programming, and modern implementations are blazing fast. After doing this project, my tentative conclusion is that Node.js can be a great choice for computation-heavy services.
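One standard way to keep a CPU-bound Node service responsive, in the spirit of what the article explores (though not necessarily LetterPwn's exact approach), is to slice the work and yield back to the event loop between slices. A rough sketch, with arbitrary numbers:

```
// Cooperative chunking: process a big job in slices so the event loop can
// serve other requests between slices. The chunk size and the workload
// (sum of squares) are arbitrary stand-ins for real computation.
function sumOfSquares(values, done) {
  var CHUNK = 10000;
  var total = 0;
  var i = 0;

  (function nextChunk() {
    var end = Math.min(i + CHUNK, values.length);
    for (; i < end; i++) {
      total += values[i] * values[i];
    }
    if (i < values.length) {
      setImmediate(nextChunk); // give the event loop a turn
    } else {
      done(null, total);
    }
  })();
}

// usage
var big = [];
for (var n = 0; n < 1e6; n++) big.push(n);
sumOfSquares(big, function (err, total) {
  console.log('total:', total);
});
```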

Node OS. Node.js in Flames: We've been busy building our next-generation Netflix.com web application using Node.js. You can learn more about our approach from the presentation we delivered at NodeConf.eu a few months ago. Today, I want to share some recent learnings from performance tuning this new application stack. Flames Rising: We were first clued in to a possible issue when we noticed that request latencies to our Node.js application would increase progressively with time. (The accompanying graph plots request latency in ms for each region against time.) Dousing the Fire: Initially we hypothesized that something faulty in our own request handlers, such as a memory leak, was causing the rising latencies. But our request handlers' latencies stayed constant at 1 ms across the lifetime of the process; something else was taking an additional 60 ms to service the request. [a, b, c, c, c, c, d, e, f, g, h]
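The bracketed array above is the excerpt's illustration of a route-handler list in which the same handler appears over and over. A hypothetical way to reproduce that kind of growth (not Netflix's actual code) is to re-register the same Express route periodically:

```
// Re-adding an identical route does not replace the old one; Express appends
// another layer, so every request walks a longer handler list as the process
// ages. The route path and interval are made up.
var express = require('express');
var app = express();

function handler(req, res) {
  res.send('ok');
}

app.get('/route', handler);

setInterval(function () {
  app.get('/route', handler); // another duplicate layer each time
  // _router.stack is an Express 4 internal, logged here only to show growth
  console.log('router layers:', app._router.stack.length);
}, 1000);

app.listen(3000);
```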

JXcore – A Node.JS Distribution with Multi-threading, by Krzysztof Trzeciak: JXcore is a fork of Node.JS 0.12 which is 100% compatible with Node.JS v0.12 and with all the projects and modules around it. JXcore multithreading shouldn't be confused with multiple threads running on top of the same JavaScript code, and it does not introduce any thread-safety problems into your existing project: it creates a V8 thread pool and issues your tasks separately on different threads. Using the new multithreaded tasks API, you can customize the multithreading feature for your own use. Note: mt-keep:4 means that the process will scale up to 4 threads. JXcore also includes a proprietary packaging system that puts all the sources, assets, and dependencies into a single file. Why We Created JXcore: Early in 2013, we decided to extend our proprietary .NET-based messaging technology and add features to turn it into a light-weight, easy-to-deploy mobile Backend-as-a-Service (mBaaS) platform.

BDD with MEAN – The Server Part 1 | attackOfZach: As with any new endeavor, it pays to spend some time trying out various solutions and sometimes failing miserably. This is especially true for those of us progressive nerds who like to live on the bleeding edge, without things like Stack Overflow to constantly save our ass. What I'd like to do is help you avoid the pain of figuring out what works and what doesn't. As I mentioned in my previous post, I already have a project that serves as a working example if you wish to jump straight into the code. The first step on our journey to effective BDD testing with the MEAN stack is to wire up the various tools we'll need for a working environment. Let's start by reviewing our toolbox: Grunt is used for general task running. The next most logical step is to establish our folder structure, the configuration file (config/default.yml), and the Gruntfile.
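Since the excerpt names Grunt as the task runner but doesn't show the Gruntfile itself, here is a hypothetical minimal one wired for BDD-style server specs; grunt-mocha-test and the test glob are assumptions, not necessarily the post's actual choices.

```
// Gruntfile.js — a hypothetical minimal setup for running BDD-style server
// specs; the plugin and the spec file pattern are placeholders.
module.exports = function (grunt) {
  grunt.initConfig({
    mochaTest: {
      spec: {
        options: { reporter: 'spec' },   // BDD-friendly output
        src: ['test/**/*.spec.js']
      }
    }
  });

  grunt.loadNpmTasks('grunt-mocha-test');
  grunt.registerTask('default', ['mochaTest']);
};
```

Running `grunt` would then execute the spec suite with mocha's spec reporter.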
