


Accelerating Large Graph Algorithms on the GPU Using CUDA

Grafight/ at master · ashwinraghav/Grafight

ScholarBank@NUS: Terms of Use

ScholarBank@NUS is available both to users seeking to deposit their works online and to users who would like to access and reference the works of others. Users who deposit a work into ScholarBank@NUS must be the original author of that work and must also ensure that the work does not infringe the rights, including copyright, of any third party. If third-party material is incorporated into such a work, the user must ensure that permission from the owner of that material has been obtained for its incorporation and for its submission and use for the purposes of ScholarBank@NUS. Users who deposit their work into ScholarBank@NUS retain the copyright to their work but are required to grant the University a royalty-free, irrevocable, perpetual and non-exclusive licence to use and reproduce the work.

16-899D: Big Data Approaches

CS267 Spring 2007: Applications of Parallel Computing


12 PageRank in MapReduce and Pregel

Apache Spark Quick Guide

Industries use Hadoop extensively to analyze their data sets because the Hadoop framework is based on a simple programming model (MapReduce) that enables a computing solution that is scalable, flexible, fault-tolerant and cost-effective. The main concern, however, is maintaining speed when processing large datasets, in terms of both the waiting time between queries and the waiting time to run a program. Spark was introduced by the Apache Software Foundation to speed up Hadoop's computational processing. Contrary to common belief, Spark is not a modified version of Hadoop, nor does it really depend on Hadoop, because it has its own cluster management; Hadoop is just one way to implement Spark. Spark uses Hadoop in two ways: one is storage and the second is processing. Apache Spark itself is a lightning-fast cluster-computing technology, designed for fast computation.

Slides
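The "PageRank in MapReduce and Pregel" material above treats PageRank as a repeated map/reduce round. A minimal plain-Python sketch of one such round (map emits each vertex's rank divided among its out-edges; reduce sums the incoming shares and applies the damping factor) might look like this; the example graph, vertex names and damping constant are illustrative assumptions, not taken from the slides:

```python
# One MapReduce-style PageRank iteration, sketched in plain Python.
# Map phase: each vertex emits its rank split evenly along its out-edges.
# Reduce phase: shares are summed per target vertex, with damping applied.
# (Illustrative only -- a real MapReduce job distributes this over a cluster.)

DAMPING = 0.85

def pagerank_iteration(graph, ranks):
    """graph: {vertex: [out-neighbours]}, ranks: {vertex: float}."""
    # Map: emit (neighbour, share) pairs.
    emitted = []
    for v, out_edges in graph.items():
        if out_edges:
            share = ranks[v] / len(out_edges)
            for w in out_edges:
                emitted.append((w, share))
    # Reduce: sum the shares arriving at each vertex.
    n = len(graph)
    new_ranks = {v: (1 - DAMPING) / n for v in graph}
    for w, share in emitted:
        new_ranks[w] += DAMPING * share
    return new_ranks

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {v: 1.0 / len(graph) for v in graph}
for _ in range(20):           # iterate until the ranks settle
    ranks = pagerank_iteration(graph, ranks)
```

Pregel expresses the same iteration vertex-centrically: each vertex sends its shares as messages and sums its inbox at the next superstep, avoiding the full shuffle that MapReduce performs every round.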


Landmark Algorithm Breaks 30-Year Impasse

A theoretical computer scientist has presented an algorithm that is being hailed as a breakthrough in mapping the obscure terrain of complexity theory, which explores how hard computational problems are to solve. Last month, László Babai, of the University of Chicago, announced that he had come up with a new algorithm for the "graph isomorphism" problem, one of the most tantalizing mysteries in computer science. The new algorithm appears to be vastly more efficient than the previous best algorithm, which had held the record for more than 30 years. His paper became available today on a scientific preprint site, and he has also submitted it to the Association for Computing Machinery's 48th Symposium on Theory of Computing. For decades, the graph isomorphism problem has held a special status within complexity theory.
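Concretely, the graph isomorphism problem asks whether two graphs are identical up to a relabelling of their vertices. The naive approach tries all n! relabellings, which illustrates why the problem is hard; the sketch below (my own illustration, with made-up example graphs) does exactly that exhaustive search:

```python
# Brute-force graph isomorphism: try every vertex relabelling.
# Exponential in the number of vertices -- fine as an illustration,
# hopeless in practice, which is what makes faster algorithms newsworthy.
from itertools import permutations

def are_isomorphic(edges_a, edges_b, n):
    """Undirected graphs on vertices 0..n-1, given as edge lists."""
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    if len(a) != len(b):
        return False
    for perm in permutations(range(n)):
        # Relabel graph A's edges through the candidate permutation.
        mapped = {frozenset((perm[u], perm[v])) for u, v in a}
        if mapped == b:
            return True
    return False

# A triangle matches any relabelled triangle...
print(are_isomorphic([(0, 1), (1, 2), (2, 0)],
                     [(2, 0), (0, 1), (1, 2)], 3))   # True
# ...but not a path on three vertices.
print(are_isomorphic([(0, 1), (1, 2), (2, 0)],
                     [(0, 1), (1, 2)], 3))           # False
```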

While thousands of other computational problems have meekly succumbed to categorization as either hard or easy, graph isomorphism has defied classification.

Making Spark Work for Next Generation Workflows

You know that you are dealing with "Big" data when you can no longer use general-purpose, off-the-shelf solutions for your problems.

DBMS Musings: Hadoop's tremendous inefficiency on graph data management (and how to avoid it)

Hadoop is great. It seems clear that it will serve as the basis of the vast majority of analytical data management within five years. Already today it is extremely popular for unstructured and polystructured data analysis and processing, since it is hard to find other options that are superior from a price/performance perspective. The reader should not take the following as me blasting Hadoop; I believe that Hadoop (with its ecosystem) is going to take over the world.

Graph Processing versus Graph Databases

There has recently been a great deal of discussion on the subject of graph processing. For those of us in the graph database space, this is an exciting development, since it reinforces the utility of graphs as both a storage and a computational model. Confusingly, however, processing graph-like data is often mistakenly conflated with graph databases because the two share the same data model, yet each tool addresses a fundamentally different problem. For example, graph processing platforms like Google's Pregel achieve high aggregate computational throughput by adopting the Bulk Synchronous Parallel (BSP) model from the parallel computing community. Pregel supports large-scale graph processing by partitioning a graph across many machines and allowing those machines to compute efficiently at vertices using localised data. Only during synchronisation phases is localised information exchanged (cf. the BSP model).
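The superstep structure described above can be sketched in a few lines. The toy Python example below (my own illustration, not Pregel's actual API) runs a vertex-centric computation in BSP supersteps: active vertices send messages along their edges, messages are delivered only at the superstep barrier, and a vertex reactivates when an incoming message improves its local value. Here every vertex ends up knowing the maximum value in the graph:

```python
# Toy vertex-centric BSP computation in the spirit of Pregel:
# compute locally, exchange messages only at superstep boundaries,
# halt when no vertex is active.
def bsp_max(graph, values):
    """graph: {vertex: [neighbours]}, values: {vertex: number}."""
    values = dict(values)
    active = set(graph)                 # every vertex starts active
    while active:
        # Compute phase: each active vertex sends its current value
        # along its edges. Nothing is delivered until the barrier.
        outbox = {v: [] for v in graph}
        for v in active:
            for w in graph[v]:
                outbox[w].append(values[v])
        # Synchronisation barrier: deliver messages; a vertex stays
        # active only if a message improved its value.
        active = set()
        for v, msgs in outbox.items():
            if msgs and max(msgs) > values[v]:
                values[v] = max(msgs)
                active.add(v)
    return values

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
result = bsp_max(graph, {"a": 3, "b": 1, "c": 2})
print(result)    # {'a': 3, 'b': 3, 'c': 3}
```

Partitioning the vertex dictionary across machines, with the barrier implemented as a cluster-wide synchronisation, is what turns this toy loop into the distributed model the paragraph describes.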

Data-graph computation

GraphDB

Mcollina/levelgraph

Example Dataset

Graph Engine

Dato Core Open Source

SFrame™, the fast, scalable engine of GraphLab Create™, is now open source. The SFrame project provides the complete implementation of the following:

- SFrame, SArray, SGraph
- The C++ SDK surface area (gl_sframe, gl_sarray, gl_sgraph)
- Support for strictly typed columns (int, float, str, datetime), weakly typed columns (schema-free lists, dictionaries), as well as specialized types such as Image
- Uniform support for missing data
- Query optimization and lazy evaluation
- A Python API (SArray, SFrame, SGraph) with indirect access via an interprocess layer

SFrame is available under a BSD license.
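Two of the features listed, strictly typed columns with uniform missing-data support and lazy evaluation, can be illustrated with a small plain-Python sketch. This is my own toy code, not the SFrame/SArray API; it only shows the ideas of enforcing a column's element type, letting None stand for a missing value, and deferring transformations until the result is actually needed:

```python
# Toy typed column with missing-data support and lazy evaluation.
# Not the SFrame API -- an illustration of the concepts it lists.
class LazyColumn:
    def __init__(self, dtype, data):
        # Strict typing: every non-missing element must match dtype;
        # None uniformly represents a missing value.
        assert all(x is None or isinstance(x, dtype) for x in data)
        self.dtype = dtype
        self._data = list(data)
        self._pending = []              # deferred transformations

    def apply(self, fn):
        self._pending.append(fn)        # record the step, don't run it
        return self

    def materialize(self):
        # Run the deferred pipeline; missing values pass through untouched.
        out = self._data
        for fn in self._pending:
            out = [None if x is None else fn(x) for x in out]
        self._data, self._pending = out, []   # cache the result
        return out

col = LazyColumn(int, [1, None, 3])
col.apply(lambda x: x * 10).apply(lambda x: x + 1)
print(col.materialize())    # [11, None, 31]
```

A real columnar engine would also fuse the pending steps into one pass and push work down to compiled code, which is where the "fast, scalable" part comes from.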