Apache CouchDB: The CouchDB Project
A Database for the Web
CouchDB is a database that completely embraces the web. Store your data with JSON documents. Access your documents and query your indexes with your web browser, via HTTP. Index, combine, and transform your documents with JavaScript. CouchDB works well with modern web and mobile apps.

http://couchdb.apache.org/
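
Since CouchDB's whole interface is plain HTTP and JSON, a minimal sketch of what that looks like from Python might be the following (it assumes a local CouchDB at the default http://localhost:5984 that accepts unauthenticated writes, and a made-up database name; the snippet is illustrative, not taken from the CouchDB site):

    import requests

    BASE = "http://localhost:5984"   # assumed default CouchDB address
    DB = "recipes"                   # hypothetical database name

    # Create the database (CouchDB answers 201 on success, 412 if it already exists).
    requests.put(f"{BASE}/{DB}")

    # Store a JSON document under an explicit id.
    doc = {"title": "Spaghetti", "servings": 4, "tags": ["pasta", "dinner"]}
    resp = requests.put(f"{BASE}/{DB}/spaghetti", json=doc)
    print(resp.json())               # e.g. {'ok': True, 'id': 'spaghetti', 'rev': '1-...'}

    # Read it back over plain HTTP, exactly as a browser would.
    print(requests.get(f"{BASE}/{DB}/spaghetti").json())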

Understanding HBase and BigTable - Jimbojw.com
The hardest part about learning HBase (the open-source implementation of Google's BigTable) is just wrapping your mind around the concept of what it actually is. I find it rather unfortunate that these two great systems contain the words table and base in their names, which tend to cause confusion among RDBMS-indoctrinated individuals (like myself). This article aims to describe these distributed data storage systems from a conceptual standpoint. After reading it, you should be better able to make an educated decision regarding when you might want to use HBase versus when you'd be better off with a "traditional" database.
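
The article itself explains the data model far better, but as a rough mental picture (my own illustration, not a quote from the post): HBase and BigTable behave less like relational tables and more like a sparse, sorted, multidimensional map, keyed by row key, then column, then timestamp. In toy Python form:

    # Toy illustration only: conceptually a sparse, sorted map of
    # row key -> column -> timestamp -> value, not a fixed-schema table.
    table = {
        "com.cnn.www": {                         # row key (reversed-domain style)
            "contents:": {                       # column family "contents", empty qualifier
                1234567895: "<html>new</html>",
                1234567890: "<html>old</html>",  # older versions kept by timestamp
            },
            "anchor:cnnsi.com": {                # column family "anchor", qualifier "cnnsi.com"
                1234567891: "CNN",
            },
        },
    }

    # "Reading a cell" is just nested lookups plus picking the newest timestamp.
    versions = table["com.cnn.www"]["contents:"]
    print(versions[max(versions)])               # <html>new</html>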

K Means Clustering with Tf-idf Weights
Unsupervised learning algorithms in machine learning impose structure on unlabeled datasets. In Prof. Andrew Ng's inaugural ml-class from the pre-Coursera days, the first unsupervised learning algorithm introduced was k-means, which I implemented in Octave for programming exercise 7. Now, after the fact but with a fresh perspective and more experience, I will revisit the k-means algorithm in Java to implement text clustering. Concretely!

Introduction to Information Retrieval
This is the companion website for the following book: Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. You can order this book at CUP, at your local bookstore, or on the internet. The best search term to use is the ISBN: 0521865719.
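
To make the k-means-on-tf-idf idea from the post above concrete (the author works in Java; this is only a rough Python sketch with a made-up toy corpus, not the author's code), the algorithm alternates between assigning each tf-idf vector to its nearest centroid and recomputing each centroid as the mean of its cluster:

    import math, random
    from collections import Counter

    docs = ["the cat sat on the mat",
            "the dog sat on the log",
            "stocks fell as markets slid",
            "markets rallied and stocks rose"]        # toy corpus (made up)

    # --- tf-idf vectors as dicts of term -> weight ---
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    N = len(docs)

    def tfidf(doc):
        tf = Counter(doc)
        return {t: (tf[t] / len(doc)) * math.log(N / df[t]) for t in tf}

    vectors = [tfidf(doc) for doc in tokenized]

    def dist(a, b):
        # squared Euclidean distance over the union of terms
        keys = set(a) | set(b)
        return sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys)

    def mean(vecs):
        total = Counter()
        for v in vecs:
            total.update(v)
        return {k: val / len(vecs) for k, val in total.items()}

    # --- plain k-means: assign, then update, for a fixed number of rounds ---
    k = 2
    random.seed(0)
    centroids = random.sample(vectors, k)
    for _ in range(10):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda i: dist(v, centroids[i]))
            clusters[nearest].append(v)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]

    for i, c in enumerate(clusters):
        print("cluster", i, "size", len(c))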

Cloudera Professional Services Hadoop for the Enterprise Cloudera Enterprise helps you become information-driven by leveraging the best of the open source community with the enterprise capabilities you need to succeed with Apache Hadoop in your organization. Designed specifically for mission-critical environments, Cloudera Enterprise includes CDH, the world’s most popular open source Hadoop-based platform, as well as advanced system management and data management tools plus dedicated support and community advocacy from our world-class team of Hadoop developers and experts. Cloudera is your partner on the path to big data.

simple web crawler / scraper tutorial using requests module in python
Let me show you how to use the Python Requests module to write a simple web crawler / scraper. So, let's define our problem first: on this page I am publishing some programming problems, so now I shall write a script to get the links (URLs) of the problems. So, let's start. First make sure you can get the content of the page.

How to make a web crawler in under 50 lines of Python code - 'Net Instructions (September 24, 2011)
Interested to learn how Google, Bing, or Yahoo work? Wondering what it takes to crawl the web, and what a simple web crawler looks like?

Google's BigTable - Andrew’s Website
Today Jeff Dean gave a talk at the University of Washington about BigTable, their system for storing large amounts of data in a semi-structured manner. I was unable to find much information about BigTable on the internet, so I decided to take notes and write about it myself. First, an overview: BigTable has been in development since early 2004 and has been in active use for about eight months (since about February 2005). There are currently around 100 cells for services such as Print, Search History, Maps, and Orkut. Following Google's philosophy, BigTable was an in-house development designed to run on commodity hardware.
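
As a rough illustration of the "fetch a page and collect its links" step the two Python crawler tutorials above revolve around (placeholder URL, standard-library HTML parsing; a sketch, not the original posts' code):

    import requests
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkParser(HTMLParser):
        """Collects the href of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def get_links(url):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        parser = LinkParser()
        parser.feed(resp.text)
        # Resolve relative hrefs against the page they came from.
        return [urljoin(url, href) for href in parser.links]

    if __name__ == "__main__":
        for link in get_links("https://example.com/problems"):   # placeholder URL
            print(link)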

How to write a multi-threaded webcrawler in Java
Here you can learn how to write a multithreaded Java application, learn how to write a webcrawler, along the way learn how to write code that is object-oriented and reusable, or simply use the provided webcrawler more or less off-the-shelf. More or less in this case means that you have to be able to make minor adjustments to the Java source code yourself and compile it. You will need the Sun Java 2 SDK for this.
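
That tutorial works in Java; purely to illustrate the same multithreading idea in Python terms (a queue of URLs, a lock-protected visited set, a few worker threads, with a made-up seed URL and crawl cap), a sketch might be:

    import re, threading, queue
    import requests

    # Crude href extraction; a real crawler would use a proper HTML parser.
    HREF = re.compile(r'href="(http[^"]+)"')

    to_visit = queue.Queue()
    visited = set()
    visited_lock = threading.Lock()

    def worker():
        while True:
            url = to_visit.get()
            try:
                html = requests.get(url, timeout=10).text
                for link in HREF.findall(html):
                    with visited_lock:
                        if link in visited or len(visited) >= 50:  # small crawl cap
                            continue
                        visited.add(link)
                    to_visit.put(link)
                print(url)
            except requests.RequestException:
                pass
            finally:
                to_visit.task_done()

    seed = "https://example.com/"        # placeholder seed URL
    visited.add(seed)
    to_visit.put(seed)
    for _ in range(4):                   # four worker threads
        threading.Thread(target=worker, daemon=True).start()
    to_visit.join()                      # returns once all queued pages are handled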

How to make a Web crawler using Java?
There is a lot of useful information on the Internet. How can we automatically collect that information? Yes, with a Web crawler. In this post, I will show you how to make a prototype of a Web crawler step by step using Java.

Create intelligent Web spiders
This article demonstrates how to create an intelligent Web spider based on standard Java network objects. The heart of this spider is a recursive routine that can perform depth-first Web searches based on keyword/phrase criteria and Web-page characteristics. Search progress displays graphically using a JTree structure. I address issues such as resolving relative URLs, avoiding reference loops, and monitoring memory/stack usage.
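
The spider article mentions a recursive depth-first search, relative-URL resolution, and loop avoidance; setting aside the Java and JTree specifics, the core recursion it describes could be sketched in Python like this (keyword, depth limit, and start page are illustrative guesses, not the article's code):

    import re
    import requests
    from urllib.parse import urljoin

    HREF = re.compile(r'href="([^"]+)"')
    visited = set()

    def spider(url, keyword, depth):
        """Depth-first search for pages whose HTML contains `keyword`."""
        if depth == 0 or url in visited:
            return                       # depth limit and reference-loop avoidance
        visited.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            return
        if keyword.lower() in html.lower():
            print("match:", url)
        for href in HREF.findall(html):
            # Resolve relative links against the current page before recursing.
            spider(urljoin(url, href), keyword, depth - 1)

    spider("https://example.com/", "crawler", depth=2)   # placeholder start page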

Implementing a Java Web Crawler
Implementing a Java web crawler is a fun and challenging task often given in university programming classes. You may also actually need a Java web crawler in your own applications from time to time. You can also learn a lot about Java networking and multi-threading while implementing one. This tutorial will go through the challenges and design decisions you face when implementing a Java web crawler.

Clustering text documents using k-means
This is an example showing how scikit-learn can be used to cluster documents by topic using a bag-of-words approach. The example uses a scipy.sparse matrix to store the features instead of standard numpy arrays. Two feature extraction methods can be used: TfidfVectorizer uses an in-memory vocabulary (a Python dict) to map the most frequent words to feature indices and hence compute a word occurrence frequency (sparse) matrix; the word frequencies are then reweighted using the Inverse Document Frequency (IDF) vector collected feature-wise over the corpus. HashingVectorizer hashes word occurrences into a fixed-dimensional space, possibly with collisions.
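
As a pocket-sized companion to the scikit-learn example described above (a minimal sketch with made-up documents and k=2, not the project's actual example script):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [
        "the cat sat on the mat",
        "dogs and cats make good pets",
        "stocks fell as markets slid",
        "markets rallied and stocks rose",
    ]                                           # toy corpus (made up)

    # Bag-of-words counts reweighted by inverse document frequency.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(docs)          # scipy.sparse matrix, one row per document

    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = km.fit_predict(X)
    print(labels)                               # cluster id per document, e.g. [0 0 1 1]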

Related: db in cloud, db clustering, nosql, web services, Document Oriented, NoSQL, Computer knowledge