
With DRBD or Pacemaker


A common pairing

DRBD_Cookbook – cluster. This is a mini cookbook designed to help users get DRBD up and running in a 2-node cluster with GFS on top. Maintained by LonHohberger.

So, you want to try playing with GFS, but you do not have a SAN. GFS, as you know, is not a distributed file system; rather, it is a shared-disk cluster file system. This means that in order to use it on two or more computers, you must have a disk shared between them... or do you? DRBD is a RAID-1-style block device which synchronizes over the network between two computers. In essence, it provides virtual shared storage between the two. So: GFS-on-DRBD it is.

Before you start: this was written for RHEL5/CentOS5.

Basic CMAN configuration: for DRBD to work in its simplest form on Linux-Cluster, you will need a valid two-node cluster configuration.
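A minimal two-node CMAN configuration, for orientation, might look like the sketch below. The cluster name, node names, and the use of `fence_manual` are illustrative assumptions (manual fencing is suitable for a lab only; production clusters need real fencing hardware):

```xml
<?xml version="1.0"?>
<cluster name="gfsdrbd" config_version="1">
  <!-- two_node/expected_votes lets a 2-node cluster reach quorum -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="1">
          <device name="manual" nodename="node1.example.com"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="1">
          <device name="manual" nodename="node2.example.com"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- fence_manual is for testing only; replace with a real agent -->
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
```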

How to configure linux-cluster / CMAN is beyond the scope of this document.

DRBD configuration: OK, back to configuring! Check the state on both nodes using the following commands.

Debian and GFS2 managed by Pacemaker | lapsz.eu. In this post I'll describe the effects of my fight to make Debian host a GFS2 file system via Pacemaker. I could not find a complete, step-by-step howto for running GFS2 on Debian, so maybe this one will be helpful for someone. My goal was to create a NAS system built with two nodes.
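The excerpt above refers to a dual-primary DRBD resource and to state-checking commands that were lost in the quote. A hedged sketch of what a DRBD 8 resource for this purpose typically looks like follows; resource name, disks, and addresses are placeholders:

```
resource r0 {
  protocol C;                       # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;            # both nodes may be Primary at once (needs a cluster FS)
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  startup { become-primary-on both; }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The state can then be checked on both nodes with `cat /proc/drbd` (look for `Primary/Primary` and `UpToDate/UpToDate`) and `drbdadm cstate r0`.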

GFS2 is the first step towards building a redundant NFS/CIFS cluster. Storage is attached to both nodes via FC with MPIO. Both nodes are connected to two core switches via splitMLT and 802.3ad link aggregation. For testing I have a 50G disk connected to the nodes via FC. Here are the steps:

1. Debian Wheezy 7.0.0 installation and dist-upgrade to 7.1.0.
2. Basic system configuration.
3. Installation and configuration of multipath-tools.
4. Installation of the needed packages.
5. Disabling cman in init.d.
6. Corosync configuration.
7. CRM resources configuration.
8. Creating the GFS2 filesystem.
9. Adding the new GFS2 filesystem to the CRM resource configuration.

So... "Let's start a war," said Maggie one day... The following steps (with one exception) should be performed on both nodes.
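Steps 4 and 5 above can be sketched roughly as below. Exact package names vary between Debian releases, so treat this as an assumption-laden outline rather than a verified command list:

```
# On both nodes (Debian Wheezy); package set is an assumption
apt-get update && apt-get dist-upgrade
apt-get install multipath-tools corosync pacemaker gfs2-utils

# cman must not start from init.d; Pacemaker/Corosync drive the stack instead
update-rc.d -f cman remove
```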

Links: GFS2 in Pacemaker (Debian/Ubuntu). Setting up GFS2 in Pacemaker requires configuring the Pacemaker DLM, the Pacemaker GFS control daemon, and the GFS2 filesystem itself.

Prerequisites: GFS2 with Pacemaker integration is supported on Debian (squeeze-backports and up) and Ubuntu (10.04 LTS and up). You'll need the dlm-pcmk, gfs2-tools, and gfs-pcmk packages. Fencing is imperative: get a proper fencing/STONITH configuration set up and test it thoroughly.

Pacemaker configuration: the Pacemaker configuration, shown here in crm shell syntax, normally puts all the required resources into one cloned group. Once that is done, your filesystem should happily mount on all nodes.
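The "one cloned group" pattern described above looks roughly like this in crm shell syntax. Resource names, the device path, and the mount point are illustrative assumptions:

```
# crm configure sketch: DLM + GFS control daemon + filesystem, cloned on all nodes
primitive p_dlm ocf:pacemaker:controld \
  op monitor interval="120s"
primitive p_gfs_control ocf:pacemaker:controld \
  params daemon="gfs_controld" \
  op monitor interval="120s"
primitive p_fs_gfs2 ocf:heartbeat:Filesystem \
  params device="/dev/drbd0" directory="/mnt/gfs2" fstype="gfs2"
group g_gfs2 p_dlm p_gfs_control p_fs_gfs2
clone cl_gfs2 g_gfs2 meta interleave="true"
```

The `interleave="true"` meta attribute lets each node's clone instance depend only on its local peers, so one node's restart does not cascade to the others.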

DRBD8 + GFS2 on Debian Etch | HOWTOs and Tutorials. This article should be seen as a continuation of the post entitled "DRBD8 with two primaries on Debian Etch". We noticed that DRBD8 with two primaries can neither ensure filesystem integrity on its own nor work as expected on top of an Ext3 filesystem: DRBD focuses only on block-level synchronization. As shown in the following figure, read accesses take place only locally, while write accesses are sent both locally and to the remote node, so that at any time the global filesystem is consistent on both nodes. This post shows how to ensure data protection when two master nodes are synchronized with DRBD8.

The key point is the filesystem: a lock mechanism must be established to ensure protection.

Configuration files and system requirements: a 2.6.24 kernel at least.

    [tux]# apt-get update
    [tux]# apt-get install linux-image-2.6.24-etchnhalf.1-686
    [tux]# apt-get install linux-headers-2.6.24-etchnhalf.1-686
    [tux]# reboot

Install dpkg-dev and other dependencies to get and build deb packages from sources:

    [tux]# cd ..
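The excerpt cuts off before the actual build steps. On Etch-era Debian, the usual way to get the DRBD8 kernel module was to build it from the packaged source with module-assistant; a hedged sketch (package names as shipped at the time, not verified against a live Etch system):

```
[tux]# apt-get install dpkg-dev module-assistant drbd8-utils drbd8-module-source
[tux]# m-a auto-install drbd8
```

module-assistant compiles the module against the installed kernel headers and installs the resulting .deb, which is why the matching linux-headers package was installed first.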

Building a redundant mailstore with DRBD and GFS - iomem. I've recently been asked to build a redundant mailstore using two server-class machines running Ubuntu. The caveat, however, is that no additional hardware will be purchased, which rules out any external file storage such as a SAN. I've been investigating the use of DRBD in a primary/primary configuration to mirror a block device between the two servers, with GFS2 on top of it, so that the filesystem can be mounted on both servers at once.

While a set-up like this is more complex and fragile than using ext4 and DRBD in primary/secondary mode with clustering scripts to ensure that the filesystem is only ever mounted on one server at a time, it is likely that GFS will be needed on the same two servers for another purpose in the near future, so it makes sense to use the same clustering method for both. The following guide details how to get this going on Ubuntu 10.04 LTS (Lucid). Firstly, install DRBD; then create a logical volume to back the mirror:

    lvcreate -L 60G -n mailmirror vg01

Chapter 11. Using GFS2 with DRBD. This chapter outlines, in a nutshell, the steps necessary to set up a DRBD resource as a block device holding a shared Global File System (GFS) version 2. The Red Hat Global File System (GFS) is Red Hat's implementation of a concurrent-access, shared-storage file system.
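Once the DRBD device backed by that volume is Primary on both nodes, the GFS2 filesystem is created once, from one node only. The cluster name and filesystem name below are placeholders; the cluster name must match the one in the cluster configuration, and `-j` allocates one journal per node that will mount the filesystem:

```
# Run on ONE node only; "mycluster" must match the configured cluster name
mkfs.gfs2 -p lock_dlm -t mycluster:mailmirror -j 2 /dev/drbd0

# Then mount on both nodes (normally done by the cluster manager)
mount -t gfs2 /dev/drbd0 /mnt/mail
```

`-p lock_dlm` selects the distributed lock manager as the locking protocol; without it GFS2 would fall back to single-node locking and concurrent mounts would corrupt data.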

Like any such filesystem, GFS allows multiple nodes to access the same storage device simultaneously, in read/write fashion, without risking data corruption. It does so by using a Distributed Lock Manager (DLM), which arbitrates concurrent access from cluster members. GFS was designed, from the outset, for use with conventional shared storage devices. Nevertheless, it is perfectly possible to use DRBD, in dual-primary mode, as a replicated storage device for GFS.

Applications may even benefit from reduced read/write latency, because DRBD normally reads from and writes to local storage, as opposed to the SAN devices from which GFS is conventionally configured to run.