


Details of package esys-particle in wheezy


Parallel Programming Crash Course

I've been covering various scientific programs the past few months, but sometimes it's hard to find a package that does what you need.

In those cases, you need to go ahead and write your own code. When you are involved with heavy-duty scientific computing, you usually need to turn to parallel computing to get the runtimes down to something reasonable. This month, I give a crash course in parallel programming so you can get a feel for what is involved. There are two broad categories of parallel programs: shared memory and message passing. You likely will see both types being used in various scientific arenas. Let's start by looking at message-passing parallel programming with MPI. An MPI program consists of multiple processes (called ranks), running on one or more machines. Assuming you already have some MPI code, the first step in using it is to compile it.

Once your code is compiled, you need to run it. Within the code itself, the MPI environment is set up and torn down by two calls, with the following prototypes:

int MPI_Init(int *argc, char ***argv);
int MPI_Finalize(void);
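To make this concrete, here is a minimal MPI example (my own sketch, not code from the original column) that initializes MPI, reports each process's rank, and shuts down cleanly:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut MPI down */
    return 0;
}

With a typical MPI installation, you would compile this with the mpicc wrapper and launch it with something like mpirun -np 4 ./hello, which starts four copies of the program.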

Advanced OpenMP

Because the August issue's theme is programming, I thought I should cover some of the more-advanced features available in OpenMP.

Several issues ago, I looked at the basics of using OpenMP, so you may want to go back and review that article. In scientific programming, the basics tend to be the limit of how people use OpenMP, but there is so much more available, and these other features are useful for far more than just scientific computing. So, in this article, I delve into corners of OpenMP that never seem to get covered. Who knows, you may even end up replacing POSIX threads with OpenMP. First, let me quickly review a little of the basics. The most typical construct is a parallel for loop:

#pragma omp parallel for
for (i = 0; i < max; i++) {
   a[i] = sin(i);
}

You would then compile this with GCC by using the -fopenmp flag, and you can control how many threads are used at runtime with an environment variable such as:

export OMP_NUM_THREADS=4

Beyond loops, OpenMP also lets you hand independent blocks of work to different threads with sections:

#pragma omp parallel sections
{
   ...commands...
}
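As a concrete sketch (my own illustration, not from the original column), here is how sections might look in a complete program, with each section handed to whatever thread picks it up:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            /* one block of independent work */
            printf("section 1 handled by thread %d\n", omp_get_thread_num());
        }

        #pragma omp section
        {
            /* another block that can run at the same time */
            printf("section 2 handled by thread %d\n", omp_get_thread_num());
        }
    }
    return 0;
}

Compiled with gcc -fopenmp, the two sections can run concurrently on different threads, which is handy when the chunks of work are unrelated rather than iterations of one loop.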

Big-Box Science

A few months ago, I wrote a piece about how you can use MPI to run a parallel program over a number of machines that are networked together.

But more and more often, your plain old desktop has more than one CPU. How best can you take advantage of all that power at your fingertips? When you run a parallel program on a single machine, it is called shared-memory parallel programming. Several options are available for shared-memory programming; the most common are POSIX threads (pthreads) and OpenMP. OpenMP is a specification, which means you end up actually using one of its implementations, such as the one built into GCC.

The most basic concept in OpenMP is that only sections of your code are run in parallel and, for the most part, these sections all run the same code. A parallel region is opened with:

#pragma omp parallel

in C/C++, or with the equivalent !$omp parallel comment directive in Fortran.
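As a small sketch of my own (not from the article itself), a complete parallel region in C looks like this, with every thread in the team executing the same block:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* each thread in the team runs this block once */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}

Built with gcc -fopenmp, this prints one line per thread; the thread count is taken from OMP_NUM_THREADS or, typically, defaults to the number of cores available.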

Tinker with Molecular Dynamics for Fun and Profit

Molecular dynamics computations make up a very large proportion of the computer cycles being used in science today.

For those of you who remember chemistry or thermodynamics, you should recall that all of the calculations you made were based on treating the material in question as a homogeneous mass, where each part of the mass simply has the average value of the relevant properties. Under average conditions, this tends to be adequate most of the time. But more and more scientists are running into conditions at the fringes of where those kinds of generalizations apply. Enter molecular dynamics, or MD. With MD, you move down almost to the lowest level of matter that we know of: the level of individual atoms and molecules. Unlike most of the software I've covered in this space, TINKER isn't available in the package systems of most distributions, so you need to download and build it yourself.

Once it is unpacked, change directory to the tinker subdirectory.

OpenFOAM® - The Open Source Computational Fluid Dynamics (CFD) Toolbox

Elmer

Obtaining NumPy & SciPy — SciPy.org

Official Source and Binary Releases

For each official release of NumPy and SciPy, we provide source code (tarball) as well as binary packages for several major platforms.

Binary packages for other platforms may be available from your operating system vendor. Build instructions are available for Linux, Windows and Mac OS X.

Bleeding Edge Repository Access

The most recent development versions of NumPy and SciPy are available through the official repositories hosted on GitHub. To check out the latest NumPy sources:

git clone git://github.com/numpy/numpy.git numpy

or (if you're behind a proxy blocking git ports):

git clone https://github.com/numpy/numpy.git numpy

To check out the latest SciPy sources:

git clone git://github.com/scipy/scipy.git scipy

or:

git clone https://github.com/scipy/scipy.git scipy

Build instructions for the development sources are linked from the same page.

GSL - GNU Scientific Library

Introduction

The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers.

It is free software under the GNU General Public License. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total, with an extensive test suite. The complete range of subject areas covered by the library includes, among others, linear algebra, statistics, random number distributions, interpolation and numerical integration.
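To give a feel for how the library is used (a minimal sketch of my own, not taken from the page quoted above), here is a program that evaluates one of the special functions, the Bessel function J0:

#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>

int main(void)
{
    double x = 5.0;
    double y = gsl_sf_bessel_J0(x);   /* regular cylindrical Bessel function J0(x) */

    printf("J0(%g) = %.18e\n", x, y);
    return 0;
}

On most systems, this links against the library with something like gcc example.c -lgsl -lgslcblas -lm.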