How to Program MapReduce Jobs in Hadoop with R
MapReduce is a powerful programming framework for efficiently processing very large amounts of data stored in the Hadoop distributed filesystem. But while several programming frameworks for Hadoop exist, few are tuned to the needs of data analysts, who typically work in the R environment rather than in general-purpose languages like Java. That's why the dev team at Revolution Analytics created the RHadoop project: to give R programmers powerful open-source tools to analyze data stored in Hadoop. RHadoop provides a new R package called rmr, whose stated goal is to provide map-reduce programmers the easiest, most productive, most elegant way to write map-reduce jobs.
The NSA Submits Its Hadoop Database to the Apache Foundation :: Data Management
A gentle summer breeze is still blowing over Hadoop. This open-source Java framework for developing distributed file systems and managing data at scale gained a heavyweight new ally this week: the NSA (National Security Agency). The American national security agency, one of whose key missions is to collect, analyze, and monitor military, governmental, commercial, and personal communications for the United States, has submitted its NoSQL database project Accumulo to the highly popular Apache foundation. The project, now placed in the open-source foundation's incubator, must next build its community. Started in 2008, Accumulo, comprising some 200,000 lines of code (mostly Java), is the result of three years of development initiated by the NSA, the agency explains on the foundation's site.
Java development 2.0: Big data analysis with Hadoop MapReduce
When Google launched its image search feature in 2001, it had 250 million indexed images. Less than a decade later, the search giant has indexed over 10 billion images. Thirty-five hours of content are uploaded to YouTube every minute. Twitter is said to handle, on average, 55 million tweets per day. Earlier this year, its search feature was logging 600 million queries daily.
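For a concrete sense of the programming model the article works through, here is a minimal sketch of the canonical word-count job against the org.apache.hadoop.mapreduce API; class names and paths are illustrative, not the article's own code.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map: emit (word, 1) for every token in the current input split.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }

      // Reduce: sum the partial counts collected for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-aggregate map output before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Reusing the reducer as a combiner is the standard trick for shrinking map output locally before it crosses the network.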
Why Europe’s Largest Ad Targeting Platform Uses Hadoop
Richard Hutton, CTO of nugg.ad, authored the following post about how and why his company uses Apache Hadoop. nugg.ad operates Europe’s largest targeting platform. The company’s core business is to derive targeting recommendations from clicks and surveys. We measure these, store them in log files and later make sense of them all.
Running Hadoop On Ubuntu Linux (Single-Node Cluster) @ Michael G. Noll
In this tutorial I will describe the required steps for setting up a pseudo-distributed, single-node Hadoop cluster backed by the Hadoop Distributed File System (HDFS), running on Ubuntu Linux. Hadoop is a framework written in Java for running applications on large clusters of commodity hardware; it incorporates features similar to those of the Google File System (GFS) and of the MapReduce computing paradigm. HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets.
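Once the daemons are up, one way to sanity-check the cluster from application code is HDFS's Java FileSystem API. A minimal sketch, assuming the fs.default.name of hdfs://localhost:54310 used in this tutorial's configuration (adjust the URI if you chose a different port; the file path is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal smoke test for a freshly started single-node HDFS.
    public class HdfsSmokeTest {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Matches the fs.default.name set in this tutorial; an assumption
        // if your core-site.xml uses a different host or port.
        conf.set("fs.default.name", "hdfs://localhost:54310");
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/tmp/hello.txt");        // illustrative path
        FSDataOutputStream out = fs.create(path, true); // overwrite if present
        out.writeUTF("hello from HDFS");
        out.close();

        System.out.println("exists: " + fs.exists(path));
        fs.close();
      }
    }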


Running Hadoop On Ubuntu Linux (Multi-Node Cluster) @ Michael G. Noll
In this tutorial I will describe the required steps for setting up a distributed, multi-node Apache Hadoop cluster backed by the Hadoop Distributed File System (HDFS), running on Ubuntu Linux. Hadoop is a framework written in Java for running applications on large clusters of commodity hardware; it incorporates features similar to those of the Google File System (GFS) and of the MapReduce computing paradigm. HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, is designed to be deployed on low-cost hardware. It provides high-throughput access to application data. In a previous tutorial, I described how to set up a Hadoop single-node cluster on an Ubuntu box.
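Beyond the single-node configuration, the step that makes the cluster multi-node is listing the participating machines in Hadoop's plain-text conf/masters and conf/slaves files on the master. A sketch using this tutorial's example hostnames, master and slave:

    # conf/masters on the master machine: host that runs the SecondaryNameNode
    master

    # conf/slaves on the master machine: hosts that run the DataNode and TaskTracker daemons
    master
    slave

Despite its name, the masters file controls only where the SecondaryNameNode starts; the NameNode and JobTracker run wherever you invoke the start scripts.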
Hadoop Tutorial
Introduction: HDFS, the Hadoop Distributed File System, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes) and to provide high-throughput access to this information. Files are stored redundantly across multiple machines to ensure their durability in the face of failures and their high availability to highly parallel applications.
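That redundancy is exposed to clients as a per-file replication factor (3 by default). A minimal sketch of inspecting and raising it through the Java FileSystem API, assuming a reachable cluster configured on the classpath and an illustrative /data/input.txt already in HDFS:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Inspect and adjust the replication factor behind HDFS durability.
    public class ReplicationDemo {
      public static void main(String[] args) throws Exception {
        // Picks up fs.default.name from the Hadoop config on the classpath.
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/data/input.txt"); // illustrative path

        FileStatus status = fs.getFileStatus(file);
        System.out.println("current replication: " + status.getReplication());

        // Ask HDFS to keep four copies of each block of this file;
        // the NameNode re-replicates in the background.
        fs.setReplication(file, (short) 4);
      }
    }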