
Build a Cluster Computing Environment in Under 10 Minutes.

We've created a new video tutorial that describes how to set up a cluster of high performance compute nodes in under 10 minutes. Follow along with the tutorial to get a feel for how to provision high performance systems with Amazon EC2 - we'll even cover the cost of the resources you use through a $20 free service credit.

Why HPC? Data is at the heart of many modern businesses. The tools and products we create in turn generate complex datasets that are increasing in size, scope, and importance. Whether we are looking for meaning within the bases of our genomes, performing risk assessments on the markets, or reporting on click-through traffic from our websites, these data hold valuable information that can drive the state of the art forward. Constraints are everywhere when dealing with data and its associated analysis, but few are as restrictive as the time and effort it takes to procure, provision, and maintain the high performance compute servers that drive that analysis.
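The tutorial itself isn't reproduced here, but as a rough sketch of what provisioning a small cluster can look like programmatically, here is a minimal example using the boto3 Python SDK. This is an illustrative assumption, not content from the tutorial: the region, AMI ID, instance type, and group name below are all placeholders. It creates a cluster placement group and launches four nodes into it:

```python
import boto3

# Minimal provisioning sketch; the region, AMI ID, and instance type
# below are placeholders - substitute values valid for your account.
ec2 = boto3.client("ec2", region_name="us-east-1")

# A "cluster" placement group co-locates instances on a low-latency,
# high-bandwidth network segment, which tightly coupled HPC jobs need.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

# Launch four identical compute nodes into the placement group.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: an HPC-ready AMI
    InstanceType="cc2.8xlarge",  # placeholder: a cluster compute type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-demo"},
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```

Terminating the instances as soon as a job finishes (ec2.terminate_instances with the instance IDs above) is what keeps an exercise like this within a small credit such as the $20 one mentioned.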

Pystream - Project Hosting on Google Code.

Python Snakes Its Way Into HPC.

New Amazon EC2 Instance Type - The Cluster Compute Instance. A number of AWS users have been using Amazon EC2 to solve a variety of computationally intensive problems. Here's a sampling:

- Atbrox and Lingit use Elastic MapReduce to build data sets that help individuals with dyslexia to improve their reading and writing skills.
- Systems integrator Cycle Computing helps Varian to run compute-intensive Monte Carlo simulations.
- Harvard Medical School's Laboratory for Personalized Medicine creates innovative genetic testing models.
- Pathwork Diagnostics runs tens of thousands of models to help oncologists to diagnose hard-to-identify cancer tumors.
- Razorfish processes huge datasets on a very compressed timescale.
- The Server Labs helps the European Space Agency to build the operations infrastructure for the Gaia project.

Some of these problems are examples of what is known as "embarrassingly parallel" computing. Others leverage the Hadoop framework for data-intensive computing, spreading the workload across a large number of EC2 instances. -- Jeff;
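To make the term concrete: an "embarrassingly parallel" workload splits into tasks that never need to communicate with one another, so it scales almost linearly as cores or instances are added. Here is a small illustrative Python sketch (not from the post above) that parallelizes a Monte Carlo estimate of pi, in the spirit of the Monte Carlo simulations mentioned earlier, across local processes; on EC2 the same shape of job is simply fanned out across many instances:

```python
import random
from multiprocessing import Pool

def count_hits(n):
    # Count random points in the unit square that land inside the
    # quarter circle of radius 1; each task is fully independent.
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    samples_per_task = 1_000_000
    tasks = 8  # independent tasks; no communication between them
    with Pool() as pool:
        hits = pool.map(count_hits, [samples_per_task] * tasks)
    # The quarter circle covers pi/4 of the unit square.
    print("pi ~=", 4 * sum(hits) / (samples_per_task * tasks))
```

Because each worker draws its samples independently, nothing has to be shared until the final sum - exactly the property that lets this kind of job be scattered across a fleet of EC2 instances and collected at the end.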