Welcome to Apache HBase™

Apache HBase™ is the Hadoop database: a distributed, scalable, big data store. When would I use Apache HBase? Use Apache HBase when you need random, real-time read/write access to your Big Data. The project's goal is the hosting of very large tables, billions of rows by millions of columns, atop clusters of commodity hardware.
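
As a concrete taste of that random, real-time access, here is a minimal sketch of a put/get round trip with the HBase Java client API; the table name "t1" and column family "cf" are hypothetical and assumed to already exist.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseRoundTrip {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("t1"))) { // hypothetical table
                // write one cell: row "row1", family "cf", qualifier "q"
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
                table.put(put);
                // read it back by row key; HBase serves this as a random, low-latency lookup
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"))));
            }
        }
    }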

PoweredBy Hadoop

This page documents an alphabetical list of institutions that are using Hadoop for educational or production purposes. Companies that offer services on or based around Hadoop are listed in Distributions and Commercial Support. Please include details about your cluster hardware and size; entries without this may be mistaken for spam references and deleted. To add entries you need write permission to the wiki, which you can get by subscribing to the common-dev@hadoop.apache.org mailing list and asking for permissions on the wiki account username you've registered. If you are using Hadoop in production you ought to consider getting involved in the development process anyway: file bugs, test beta releases, review the code, and turn your notes into shared documentation.
All Things Distributed

We launched DynamoDB last year to address the need for a cloud database that provides seamless scalability, whether you are doing ten transactions or ten million transactions, while providing rock-solid durability and availability. Our vision from the day we conceived DynamoDB was to fulfill this need without limiting the query functionality that people have come to expect from a database. However, we also knew that building a distributed database that has unlimited scale and maintains predictably high performance while providing rich and flexible query capabilities is one of the hardest problems in database development, and that it would take a lot of effort and invention from our team of distributed database engineers to solve.

Hadoop

What Is Apache Hadoop? The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.
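
Because every node offers local storage, applications address the cluster's storage through one file-system API. A minimal sketch of writing and reading a file via Hadoop's FileSystem API; the path /tmp/hello.txt is hypothetical, and the configuration is assumed to point at a running cluster (it falls back to the local file system otherwise).

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsHello {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration(); // picks up core-site.xml if present
            FileSystem fs = FileSystem.get(conf);     // HDFS, or the local FS by default
            Path path = new Path("/tmp/hello.txt");   // hypothetical path
            try (FSDataOutputStream out = fs.create(path, true)) { // true = overwrite
                out.write("hello, hadoop".getBytes(StandardCharsets.UTF_8));
            }
            try (FSDataInputStream in = fs.open(path)) {
                IOUtils.copyBytes(in, System.out, 4096, false); // stream contents to stdout
            }
        }
    }
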
Big Data: how do you model your data with Hadoop-HBase? (1/3)

Traditional SQL databases, such as Oracle, DB2, and SQL Server, should be used for what they do best: managing a consistent, integrated transactional model. Their designs suit application systems that require a high level of consistency and guaranteed inter-table integrity. Consequently, they are not well suited to large queries over very large volumes of data. For 20 years, people have wanted to do data warehousing with kinds of databases that were never designed for that purpose. Vendors have done their best to convince users that their systems were up to the task, but it soon becomes clear that their monolithic design gives up the ghost once there are more than a hundred million records to explore. Hadoop/HBase: a new era. This is where the power of Hadoop/HBase-style NoSQL and Big Data systems comes into play. These systems are designed to distribute data and processing across several machines.
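
To make "designed to distribute data" concrete, here is a sketch of creating a table with the HBase 2.x Admin API; the table name "events", the single column family "d", and the row-key scheme in the comment are all hypothetical, illustrating the common pattern of modeling around the row key rather than around joins.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateEventsTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // one wide table, one column family; rows are spread across region servers by key
                TableDescriptor desc = TableDescriptorBuilder
                        .newBuilder(TableName.valueOf("events"))                // hypothetical table
                        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("d")) // short family names keep storage compact
                        .build();
                admin.createTable(desc);
                // a typical row key for time-series data is <entityId>#<reversedTimestamp>,
                // so a scan over one entity's recent events reads a contiguous range
            }
        }
    }
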
Apache Mahout: Scalable machine learning and data mining

The Apache Mahout™ machine learning library's goal is to build scalable machine learning libraries. Mahout currently has: user- and item-based recommenders; matrix factorization based recommenders; k-means and fuzzy k-means clustering; Latent Dirichlet Allocation; singular value decomposition; a logistic regression based classifier; a complementary naive Bayes classifier; a random forest decision tree based classifier; high performance Java collections (previously Colt collections); and a vibrant community.
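
As a small taste of the recommender side, here is a minimal sketch of a user-based recommender built from Mahout's Taste classes (the non-distributed API of the 0.x releases); the file name ratings.csv and its userID,itemID,preference layout are hypothetical.

    import java.io.File;
    import java.util.List;
    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
    import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
    import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.recommender.Recommender;
    import org.apache.mahout.cf.taste.similarity.UserSimilarity;

    public class RecommenderSketch {
        public static void main(String[] args) throws Exception {
            // each line of ratings.csv: userID,itemID,preference (hypothetical data file)
            DataModel model = new FileDataModel(new File("ratings.csv"));
            UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
            UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
            Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
            List<RecommendedItem> recs = recommender.recommend(1, 3); // top 3 items for user 1
            for (RecommendedItem item : recs) {
                System.out.println(item);
            }
        }
    }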

What is Hadoop?

What is Hadoop? Data warehousing with distributed storage and processing. Definition of Hadoop: Hadoop is an open-source project managed by the Apache Software Foundation, based on the MapReduce principle and on the Google File System, two Google Corp. products.
What is Hadoop? What are other big data terms like MapReduce?
Why Facebook Uses Apache Hadoop and HBase

Dhruba Borthakur, a Hadoop engineer at Facebook, has published part of a paper he co-authored with several of his engineering co-workers on Apache Hadoop. The first part of the paper explains Facebook's requirements and non-requirements for a data store for its revamped Facebook Messages application, and the reasons it chose Apache Hadoop to power it. The paper will be published at SIGMOD 2011.
MapReduce

From Wikipedia, the free encyclopedia. The terms "map" and "reduce", and the underlying concepts, are borrowed from the functional programming languages used in their construction (map and reduce from functional programming and from array programming languages). MapReduce makes it possible to manipulate large quantities of data by distributing them across a cluster of machines for processing. The model is very popular with companies that operate large data centers, such as Amazon and Facebook. It is also beginning to be used in cloud computing. Many frameworks have emerged to implement MapReduce.
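
The canonical illustration of the model is counting word occurrences: the map step emits a (word, 1) pair for every token, and the reduce step sums the pairs per word. A sketch closely following the standard Hadoop WordCount example, with input and output paths passed on the command line:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE); // emit (word, 1) for each token
                }
            }
        }

        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get(); // all counts for one word arrive at the same reducer
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class); // pre-aggregate on each mapper node
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }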