Global File System (GFS, GFS2)

A shared-storage cluster filesystem maintained as a Red Hat project, with impressive features: very little bootstrapping overhead and minimal extra metadata requirements.

Unfortunately, Red Hat is up to its usual spoiler games regarding Debian support.

There is some Debian support, but it is somewhat unstable, albeit usable.

Will play with this in both RHEL (oVirt) and Debian VMs; I need to be able to use it in a stable, production-grade system soonish.

Seems better and more mature than OCFS2.

Sources

Proxmox VE support, with DRBD or Pacemaker.

GFS2. GFS and GFS2 are free software, distributed under the terms of the GNU General Public License.[1][2] Development of GFS began in 1995; it was originally developed by University of Minnesota professor Matthew O'Keefe and a group of students.[3] It was originally written for SGI's IRIX operating system, but in 1998 it was ported to Linux, since the open-source code provided a more convenient development platform.

In late 1999/early 2000 it made its way to Sistina Software, where it lived for a time as an open-source project. In 2001, Sistina chose to make GFS a proprietary product. Developers forked OpenGFS from the last public release of GFS and further enhanced it to work with OpenDLM. Both OpenGFS and OpenDLM eventually became defunct, since Red Hat purchased Sistina in December 2003 and released GFS and many cluster-infrastructure pieces under the GPL in late June 2004.

GFS2 Implementation Under RHEL - Toki Winter. This article will demonstrate setting up a simple RHCS (Red Hat Cluster Suite) two-node cluster, with an end goal of having a 50GB LUN shared between two servers, thus providing clustered shared storage to both nodes. This will enable applications running on the nodes to write to a shared filesystem, perform correct locking, and ensure filesystem integrity.

This type of configuration is central to many active-active application setups, where both nodes share a central content or configuration repository. For this article, two RHEL 6.1 nodes running on physical hardware (IBM blades) were used. Each node has multiple paths back to the 50GB SAN LUN presented, and multipathd will be used to manage path failover and rebuild in the event of interruption.
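Before the shared LUN is usable for GFS2, both nodes need a matching cluster definition and the cluster stack running. What follows is a minimal sketch rather than the article's exact procedure: it assumes the RHEL 6 High Availability / Resilient Storage add-ons are installed, the cluster and node names are placeholders, and fencing (which any real GFS2 deployment requires) is omitted for brevity.

# cat > /etc/cluster/cluster.conf <<'EOF'
<?xml version="1.0"?>
<cluster name="gfs2demo" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
</cluster>
EOF
(cluster name and node names above are placeholders; add fence devices before production use)
# service cman start
# service clvmd start
# chkconfig cman on; chkconfig clvmd on

The two_node/expected_votes settings let a two-node cluster reach quorum with a single vote; the same cluster.conf must be present on both nodes.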

Validating Hardware. Prior to building our cluster, it is imperative that the appropriate kernel module(s) have been loaded:

# lsmod | grep ql
qla2xxx               365773  0
scsi_transport_fc      52002  1 qla2xxx

Multipath Configuration:

# service multipathd start

Chapter 3. Managing GFS2. This chapter describes the tasks and commands for managing GFS2. You create a GFS2 file system with the mkfs.gfs2 command. You can also use the mkfs command with the -t gfs2 option specified.
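As the next paragraph notes, the file system is created on an activated LVM volume. A hedged sketch of getting from the multipath device to such a clustered volume (the /dev/mapper/mpatha alias and the vg_gfs2/lv_gfs2 names are placeholders; clvmd must already be running on both nodes):

# multipath -ll
(the 50GB LUN should appear as a single /dev/mapper device with all paths active)
# pvcreate /dev/mapper/mpatha
# vgcreate -cy vg_gfs2 /dev/mapper/mpatha
# lvcreate -n lv_gfs2 -l 100%FREE vg_gfs2

Marking the volume group clustered (-cy) lets clvmd keep the LVM metadata consistent across both nodes.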

A file system is created on an activated LVM volume. The following information is required to run the mkfs.gfs2 command:

- Lock protocol/module name (the lock protocol for a cluster is lock_dlm)
- Cluster name (when running as part of a cluster configuration)
- Number of journals (one journal is required for each node that may be mounting the file system)

When creating a GFS2 file system, you can use the mkfs.gfs2 command directly, or you can use the mkfs command with the -t parameter specifying a file system of type gfs2, followed by the gfs2 file system options.
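A hedged sketch of both invocation forms, with placeholder names (cluster mycluster, file system name gfs2vol, the logical volume from earlier, and two journals for a two-node cluster); the value passed to -t is the lock table, in ClusterName:FSName form:

# mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 2 /dev/vg_gfs2/lv_gfs2
# mkfs -t gfs2 -p lock_dlm -t mycluster:gfs2vol -j 2 /dev/vg_gfs2/lv_gfs2
(in the second form, the first -t selects the file system type for mkfs and the second -t is the gfs2 lock table)

The file system can later be grown with gfs2_grow after extending the logical volume, but, as noted below, it can never be shrunk.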

Once you have created a GFS2 file system with the mkfs.gfs2 command, you cannot decrease the size of the file system. When creating a clustered GFS2 file system, you can use either of the two formats sketched above (mkfs.gfs2 directly, or mkfs -t gfs2).

GFS Project Page. Introduction: GFS (Global File System) is a cluster file system. It allows a cluster of computers to simultaneously use a block device that is shared between them (with FC, iSCSI, NBD, etc.).

GFS reads and writes to the block device like a local filesystem, but also uses a lock module to allow the computers to coordinate their I/O so filesystem consistency is maintained. One of the nifty features of GFS is perfect consistency -- changes made to the filesystem on one machine show up immediately on all other machines in the cluster. GFS consists of a set of kernel patches and userspace programs. The GFS lock module lock_dlm depends on CMAN and DLM. A new version of GFS, GFS 2, is under heavy development and is located in the gfs2 & gfs2-kernel cvs directories.
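The lock module choice shows up directly at mount time. A hedged GFS2 illustration, reusing the placeholder logical volume and mount point names from earlier:

# mount -t gfs2 /dev/vg_gfs2/lv_gfs2 /mnt/shared
# mount -t gfs2 -o lockproto=lock_nolock /dev/vg_gfs2/lv_gfs2 /mnt/shared
(the second form overrides the on-disk lock protocol with lock_nolock, treating the file system as single-node local storage; only safe when no other node has the device mounted)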

Mailing lists: linux-cluster is the mailing list for cluster-related questions and discussion. Whenever the development source code repository is updated, email is sent to the cluster-cvs mailing list. Source code. Documentation.

Gfs2. Global File System: GFS is a cluster file system. It allows a cluster of computers to simultaneously use a block device that is shared between them (with FC, iSCSI, NBD, etc). GFS reads and writes to the block device like a local file system, but also uses a lock module to allow the computers to coordinate their I/O so file system consistency is maintained.

One of the nifty features of GFS is perfect consistency -- changes made to the file system on one machine show up immediately on all other machines in the cluster. GFS uses interchangeable inter-node locking mechanisms; the currently supported mechanisms are:

- lock_nolock -- allows GFS to be used as a local file system
- lock_dlm -- uses a distributed lock manager (DLM) for inter-node locking

The DLM is found at linux/fs/dlm/. lock_dlm depends on the user-space cluster management systems found at the URL above.

FAQ – cluster. This FAQ answers questions about the cluster project as a whole, broken into its components.

Some of the answers cross multiple components, so if you don't see the question you're looking for, search the wiki and you may find the question and answer under a different component. In many cases, there are multiple answers to a question. For example, there's the answer that a developer wants to hear, and there's the answer that a lay person wants to hear without the technical gibberish.

In most cases I've tried to provide answers suitable to lay people and people who are relatively new to clustering technology. If you have corrections or additional questions not addressed here, please mail them to linux-cluster@… (Do not paste questions into the Wiki and expect a timely response!) The FAQ is divided into sections and is maintained by BobPeterson.