BrowserQuest/server at master · mozilla/BrowserQuest.

Introduction to npm.

This was the third in a series of posts leading up to Node.js Knockout on how to use node.js. npm is a Node.js package manager.
As its name would imply, you can use it to install node programs. Also, if you use it in development, it makes it easier to specify and link dependencies.

Installing npm

First of all, install Node.js. To install npm in one command, you can do this: curl | sh

Of course, if you're more paranoid than lazy, you can also get the latest code, check it all out, and, when you're happy there's nothing in there to pwn your machine, issue a make install or make dev.

What, no sudo?
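The "specify and link dependencies" part usually happens through a package.json file in your project. Here is a minimal sketch (the package name and version range are made up for illustration):

```json
{
  "name": "my-app",
  "version": "0.1.0",
  "dependencies": {
    "underscore": "1.x"
  }
}
```

With that file in place, running npm install in the project directory fetches the listed dependencies into node_modules.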
I strongly encourage you not to do package management with sudo! Instead, I recommend doing this once: sudo chown -R $USER /usr/local. That sets your user account as the owner of the /usr/local directory, so that you can issue normal commands in there without elevated privileges.
Why MongoDB?

While looking into storage solutions, we decided that using a NoSQL engine was the best fit for our needs. We ended up choosing MongoDB for a variety of reasons, among them commercial support and speed.

Getting Started with MongoDB and Node.js.

Introduction to Node.js: perspectives from a Drupal dev.

Node.js: uses and ideas?

Pixel Ping: A node.js Stats Tracker.
Since the day we launched, ProPublica has encouraged people to republish our stories for free.
We even license our stories under Creative Commons (CC). However, in the past we've had trouble knowing precisely which stories had been republished where, and we had no way of knowing how many people were reading our stories on sites that republished them under our CC license. Shortly after the redesign of our site, we started working on a system to solve this problem. When we found out that Jeremy Ashkenas, a developer at DocumentCloud, was working on a similar problem, we joined forces and finished work on a lightweight stats tracker, which we are open-sourcing today.
" Well, I just gathered 20 more projects that I had done (or did recently) and pushed them all to GitHub. Quick note on GitHub - GitHub is the best invention ever for programmers. Nothing stimulates you more than pushing more and more projects to GitHub and seeing people forking them, following them, finding and fixing bugs for you. I wouldn't be doing so much coding if there wasn't GitHub. If you like my projects, I'd love if you followed me on github! Right, so here are the new projects: How To Node - NodeJS. Parsing file uploads at 500 mb/s with node.js » Debuggable Ltd. A few weeks ago I set out to create a new multipart/form-data parser for node.js.
We need this parser for the new version of Transloadit, which we have been working on since our setback last month. The result is a new library called formidable, which, at a high level, makes receiving file uploads with node.js as easy as:

var formidable = require('formidable')
  , http = require('http')
  , sys = require('sys');

Essentially this works similarly to other platforms where file uploads are saved to disk before your script is invoked with a path to the uploaded file. What's nice, however, is that you can hook into the whole thing at a lower level: we use that interface for processing HTML5 multi-file uploads as they come in, rather than waiting for the entire upload to finish. You could even overwrite the onPart handler, which gives you direct access to the raw data stream. All of this is possible thanks to the underlying multipart parser, which makes heavy use of node.js buffers.
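To give a flavor of that buffer-based approach, here is a toy sketch of scanning a multipart body for its boundary markers. This is not formidable's actual parser: a real one works incrementally on streamed chunks and handles boundaries split across chunk borders, while this version assumes the whole body is already in one Buffer, purely for illustration.

```javascript
// Split a raw multipart body on its boundary marker using
// Buffer operations, returning the raw segments between markers.
function splitOnBoundary(body, boundary) {
  var marker = Buffer.from('--' + boundary);
  var parts = [];
  var start = 0;
  var idx;
  while ((idx = body.indexOf(marker, start)) !== -1) {
    if (idx > start) parts.push(body.slice(start, idx));
    start = idx + marker.length;
  }
  if (start < body.length) parts.push(body.slice(start));
  return parts;
}

// Example: two fields separated by the boundary "XYZ".
var body = Buffer.from('--XYZ\r\nfield one--XYZ\r\nfield two--XYZ--');
var parts = splitOnBoundary(body, 'XYZ');
// parts[0].toString() === '\r\nfield one'
// parts[1].toString() === '\r\nfield two'
// parts[2] is the trailing '--' of the closing delimiter
```

Working on Buffers like this avoids converting the raw bytes to strings, which is where much of the throughput in a parser like formidable's comes from.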
--fg. Nodejitsu/node-http-proxy - GitHub.