Map-reduce-hadoop

Your first Hadoop installation

This article is designed to help you face the baptism of fire: installing the platform.

Which distribution should you choose? The first question to ask when choosing a Hadoop distribution is that of support. With the version packaged by Apache itself, it is difficult to obtain efficient support worthy of the name. The main contributors to the Hadoop project are all employees of companies offering commercial support, but only for their own distributions.

The three main players in this market are: Cloudera, with the Cloudera Hadoop Distribution, currently in version 4 (CDH4), which packages Hadoop 2.0; HortonWorks, which packages Hadoop 1.0.3; and MapR, which also offers a distribution built around Hadoop 2. Beyond access to support, these distributions all represent a significant packaging effort around the Hadoop ecosystem, that is, Hadoop itself but also its satellites such as HBase, Hive and Pig.

The article also covers choosing the machines and monitoring.

The 7 most common Hadoop and Spark projects

There's an old axiom that goes something like this: if you offer someone your full support and financial backing to do something different and innovative, they'll end up doing what everyone else is doing.

So it goes with Hadoop, Spark, and Storm. Everyone thinks they're doing something special with these new big data technologies, but it doesn't take long to encounter the same patterns over and over. Specific implementations may differ somewhat, but based on my experience, here are the seven most common projects. Project No. 1: Data consolidation. Call it an "enterprise data hub" or "data lake."

18 essential Hadoop tools for crunching big data

Getting Started with Hadoop

Mongodb hadoop tutorial

Introduction to Hadoop and MapReduce

A Udacity course on writing your first big data program.

Big Data processing with Apache Spark - Part 1: Introduction

Spark SQL and DataFrames - Spark 1.4.0 Documentation

Spark SQL is a Spark module for structured data processing.

It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine. (For how to enable Hive support, please refer to the Hive Tables section.) A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources, such as structured data files, tables in Hive, external databases, or existing RDDs.

The DataFrame API is available in Scala, Java, Python, and R.
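
As a hedged sketch of that API, in the Spark 1.4-era Python interface this page links to (the file path, sample data, and table name are hypothetical):

# pySpark sketch of building DataFrames, following the Spark 1.4 docs
# linked above. File path, sample data, and table name are hypothetical.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="dataframe-example")
sqlContext = SQLContext(sc)

# From a structured data file (one JSON object per line)
df = sqlContext.read.json("people.json")

# From an existing RDD, naming the columns explicitly
rdd = sc.parallelize([("Alice", 34), ("Bob", 45)])
df2 = sqlContext.createDataFrame(rdd, ["name", "age"])

# Acting as a distributed SQL query engine
df2.registerTempTable("people")
sqlContext.sql("SELECT name FROM people WHERE age > 40").show()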

VirtualBox tutorial: installing an Ubuntu virtual machine

How to download and install VirtualBox on Windows, Mac, or Linux.

VirtualBox tutorial

Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing

We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner.

RDDs are motivated by two types of applications that current computing frameworks handle inefficiently: iterative algorithms and interactive data mining tools. In both cases, keeping data in memory can improve performance by an order of magnitude. To achieve fault tolerance efficiently, RDDs provide a restricted form of shared memory, based on coarse-grained transformations rather than fine-grained updates to shared state. However, we show that RDDs are expressive enough to capture a wide class of computations, including recent specialized programming models for iterative jobs, such as Pregel, and new applications that these models do not capture.
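
A short pySpark sketch of those two ideas, echoing the paper's log-mining scenario (the file name and queries are hypothetical): coarse-grained transformations are recorded as lineage, so a lost partition can be recomputed, and persist() keeps the derived data in memory for interactive reuse.

# pySpark sketch of the RDD idea; the log file and queries are hypothetical.
from pyspark import SparkContext

sc = SparkContext(appName="rdd-example")

# A coarse-grained transformation: Spark records the lineage
# (textFile -> filter) instead of replicating data, so a lost
# partition can be recomputed from the source.
errors = sc.textFile("server.log").filter(lambda line: "ERROR" in line)

# Keep the derived dataset in memory for the interactive-mining case.
errors.persist()

print(errors.count())                                   # first action materializes the RDD
print(errors.filter(lambda l: "timeout" in l).count())  # reuses the in-memory data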

Introduction and setup of Vagrant

Vagrant, enlarge your VM

Vagrant, in case you don't know it yet, provides reproducible development environments that are easy to configure and can be shared among team members.

Basically, you can describe and configure virtual machines (VMs) from a single text file, the Vagrantfile. Pretty handy for having a dev environment equivalent to the one in production, and all of that with a process simplified to the extreme. Vagrant is aimed primarily at you, but also at developers who want to quickly set up a dev environment with a virtual machine (think Apache-PHP-SQL) without spending too much time on it.

Vagrant is also for devops who want to test the setup and provisioning of their infrastructure. Overall, the Vagrant workflow boils down to two or three commands: vagrant init at the start of the project, then vagrant up to launch the VM, and vagrant halt to stop it. The article then walks through the box configuration and the box provisioning.
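
Both of those pieces live in the Vagrantfile. A minimal, hypothetical sketch (the box name and package list are illustrative assumptions, echoing the Apache-PHP-SQL example above):

# Hypothetical minimal Vagrantfile; box name and packages are illustrative.
Vagrant.configure("2") do |config|
  # Box config: the base image the VM is built from
  config.vm.box = "ubuntu/trusty64"

  # Box provisioning: shell script run on the first `vagrant up`
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    DEBIAN_FRONTEND=noninteractive apt-get install -y apache2 php5 mysql-server
  SHELL
end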

spark_tutorial_student

During this tutorial we will cover:
Part 1: basic notebook usage and Python integration
Part 2: an introduction to using Apache Spark with the Python pySpark API running in the browser
Part 3: using RDDs and chaining together transformations and actions

An introduction to MapReduce with Apache Spark

In the previous post, we used the Map operation, which transforms values with a transformation function.
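
As a hedged pySpark sketch of the full pattern (the input file is hypothetical): the map phase transforms each element into key-value pairs, and the reduce phase merges the values per key.

# pySpark sketch of MapReduce: map emits (key, value) pairs,
# reduceByKey merges values per key. The input file is hypothetical.
from pyspark import SparkContext

sc = SparkContext(appName="wordcount")

counts = (sc.textFile("input.txt")
            .flatMap(lambda line: line.split())   # map phase: split into words
            .map(lambda word: (word, 1))          # emit (word, 1) pairs
            .reduceByKey(lambda a, b: a + b))     # reduce phase: sum per word

print(counts.take(10))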

Présentation "Manifeste de Chris Date sur modèle « Objet Relationnel » (pour données structurées/SQL) Professeur Serge Miranda serge.miranda Directeur Master." M2 MBDS - Hadoop / Big Data. RDF Schema 1.1. RDF 1.1 Concepts and Abstract Syntax. Abstract The Resource Description Framework (RDF) is a framework for representing information in the Web.

RDF 1.1 Concepts and Abstract Syntax

This document defines an abstract syntax (a data model) which serves to link all RDF-based languages and specifications. The abstract syntax has two key data structures; RDF graphs are sets of subject-predicate-object triples, where the elements may be IRIs, blank nodes, or datatyped literals.
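
As a small illustration of the triple model, a hypothetical sketch using the third-party rdflib Python library (not part of the spec; the IRIs and literal are made up):

# Python sketch of a subject-predicate-object triple, using the
# third-party rdflib library. IRIs and the literal are hypothetical.
from rdflib import Graph, URIRef, Literal

g = Graph()
subject = URIRef("http://example.org/alice")            # IRI
predicate = URIRef("http://xmlns.com/foaf/0.1/name")    # IRI
obj = Literal("Alice")                                  # literal

g.add((subject, predicate, obj))  # an RDF graph is a set of such triples

for s, p, o in g:
    print(s, p, o)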

Bib

Map Reduce - A really simple introduction « Kaushik Sathupadi

Ever since Google published its research paper on MapReduce, you have been hearing about it here and there. If you have until now considered map-reduce a mysterious buzzword and ignored it, know that it's not.

Map / Reduce – A visual explanation

Map/Reduce is a term commonly thrown about these days; in essence, it is just a way to take a big task and divide it into discrete tasks that can be done in parallel. A common use case for Map/Reduce is in document databases, which is why I found myself thinking deeply about this. Let us say that we have a set of documents with the following form:

Data-Intensive Text Processing with MapReduce