Multicore/parallel

Message Passing Interface (MPI)

The Message Passing Interface Standard (MPI) is a message passing library standard based on the consensus of the MPI Forum, which has over 40 participating organizations, including vendors, researchers, software library developers, and users.

The goal of the Message Passing Interface is to establish a portable, efficient, and flexible standard for message passing that will be widely used for writing message passing programs. As such, MPI is the first standardized, vendor-independent message passing library. The advantages of developing message passing software using MPI closely match these design goals of portability, efficiency, and flexibility. MPI is not an IEEE or ISO standard, but it has, in fact, become the "industry standard" for writing message passing programs on HPC platforms.

The goal of this tutorial is to teach those unfamiliar with MPI how to develop and run parallel programs according to the MPI standard.
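To make the programming model concrete, here is a minimal hello-world sketch in C. It is not taken from the tutorial, and it assumes an MPI implementation (such as MPICH or Open MPI) with its mpicc/mpirun wrappers installed.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut the runtime down */
    return 0;
}

Built with mpicc hello.c -o hello and launched with, say, mpirun -np 4 ./hello, each of the four processes prints its own rank.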

Message Passing Interface

MPI is a library specification for message-passing, proposed as a standard by a broadly based committee of vendors, implementors, and users.

How can I learn about MPI? Materials for learning MPI and papers discussing the design of MPI and its implementations are available, and there are meetings on MPI such as EuroMPI 2012.

What libraries and applications are available in MPI? A number of libraries and applications that use MPI are available.

Where is MPI going? The MPI Forum has completed an effort to extend MPI.

What tools related to MPI are available? A number of tools for an MPI environment exist.

What papers have been published about MPI? A list of papers that either discuss MPI or use MPI in applications is available.

How does MPI compare to other message-passing systems? For technical computing, MPI has displaced most other message-passing systems.
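As an illustration of the message-passing style the specification standardizes, the sketch below (my example, not code from this page) passes a single integer from rank 0 to rank 1 using the standard MPI_Send and MPI_Recv calls; it assumes the program is launched with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42; /* arbitrary payload for the example */
        MPI_Send(&value, 1, MPI_INT, 1 /* dest */, 0 /* tag */, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0 /* source */, 0 /* tag */,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}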

Common threads: POSIX threads explained, Part 2

In my previous article, I talked about threaded code that did unusual and unexpected things. Two threads each incremented a global variable twenty times. The variable was supposed to end up with a value of 40, but ended up with a value of 21 instead. What happened? The problem occurred because one thread repeatedly "cancelled out" the increment performed by the other thread. The fix is in thread3.c: if you compare that code to the version in my previous article, you'll notice the addition of the calls pthread_mutex_lock() and pthread_mutex_unlock().
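Since the listing itself is not reproduced here, the following is a minimal sketch in the spirit of thread3.c; the variable and function names are my own, not necessarily the article's, and the thread count and loop bounds match the scenario described above.

#include <pthread.h>
#include <stdio.h>

static int mycount = 0; /* the shared global both threads increment */
static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

static void *thread_function(void *arg)
{
    int i;
    for (i = 0; i < 20; i++) {
        pthread_mutex_lock(&mymutex);   /* only one thread at a time... */
        mycount++;                      /* ...performs the increment */
        pthread_mutex_unlock(&mymutex);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, thread_function, NULL);
    pthread_create(&t2, NULL, thread_function, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("mycount: %d\n", mycount); /* now reliably 40 */
    return 0;
}

Because the lock/unlock pair makes each increment atomic with respect to the other thread, the "cancelled out" updates disappear and the program prints 40 on every run (compile with -lpthread).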

This is how mutexes work. pthread_mutex_lock() and pthread_mutex_unlock() are normally used to protect data structures: the thread that holds the mutex gets to access the complex data structure without worrying about having other threads mess with it at the same time. (In the article's accompanying cartoon, four znurts re-enact a scene from recent pthread_mutex_lock() calls.)
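As a sketch of that data-structure pattern (my own example with hypothetical names, not code from the article), every access that must see the structure in a consistent state goes through the same mutex:

#include <pthread.h>

/* a shared structure whose fields must stay consistent with each other */
struct account {
    pthread_mutex_t lock;
    long balance;
    long transactions;
};

void account_deposit(struct account *a, long amount)
{
    pthread_mutex_lock(&a->lock); /* take the mutex before touching fields */
    a->balance += amount;         /* both updates happen together, so no   */
    a->transactions++;            /* thread ever observes a half-update    */
    pthread_mutex_unlock(&a->lock);
}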

Introduction to Parallel Computing

This tutorial is the first of eight tutorials in the 4+ day "Using LLNL's Supercomputers" workshop. It is intended to provide only a very quick overview of the extensive and broad topic of parallel computing, as a lead-in for the tutorials that follow it. As such, it covers just the very basics of parallel computing, and is intended for someone who is just becoming acquainted with the subject and who is planning to attend one or more of the other tutorials in this workshop. It is not intended to cover parallel programming in depth, as this would require significantly more time. The tutorial begins with a discussion of parallel computing - what it is and how it's used - followed by a discussion of concepts and terminology associated with parallel computing.

Topics covered include: What is Parallel Computing?; The Real World is Massively Parallel; Uses for Parallel Computing; Why Use Parallel Computing? (main reasons); Who is Using Parallel Computing? (global applications and vendors); and von Neumann Architecture.

Multicore programming: it's all in the messaging