
Clusterfs


Lsyncd - Lsyncd (Live Syncing Daemon) synchronizes local directories with remote targets. Description: Lsyncd watches local directory trees through an event monitor interface (inotify or fsevents).


It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes. By default this is rsync. Lsyncd is thus a lightweight live mirror solution that is comparatively easy to install, requires no new filesystems or block devices, and does not hamper local filesystem performance. Rsync+ssh is an advanced action configuration that uses SSH to execute file and directory moves directly on the target instead of retransmitting the moved data over the wire.

Fine-grained customization can be achieved through the config file. License: GPLv2 or any later GPL version.

When to use: Lsyncd is designed to synchronize a local directory tree with a low profile of expected changes to a remote mirror.

Support: if you are happy with Lsyncd, throw me a line.

Other synchronization tools: DRBD operates on the block device level. See also: Lsyncd usage examples. SparkleShare - Sharing work made easy. DRBD: What is DRBD.
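As a concrete illustration of that config-file customization, here is a minimal sketch of an Lsyncd config using the rsync+ssh action described above. The paths and hostname (`mirror.example.com`) are hypothetical placeholders, not anything from the Lsyncd docs:

```shell
# Write a minimal Lsyncd config (Lsyncd configs are Lua files).
# All paths and the hostname below are hypothetical examples.
cat > lsyncd.conf.lua <<'EOF'
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd.status",
}

-- rsync+ssh action: file and directory moves are executed on the
-- target via SSH instead of being retransmitted over the wire.
sync {
    default.rsyncssh,
    source    = "/srv/www",
    host      = "mirror.example.com",
    targetdir = "/srv/www",
}
EOF

# Start the daemon with this config (requires lsyncd to be installed):
# lsyncd lsyncd.conf.lua
```

Swapping `default.rsyncssh` for `default.rsync` (with a `target` instead of `host`/`targetdir`) gives the plain rsync behavior that is the default action.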

Why DRBD won’t let you mount the Secondary « Florian's blog. As I’m sure you’re aware, DRBD disallows access (any access, including read-only) to a DRBD device in Secondary mode.
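The restriction can be seen from the command line. A rough sketch, assuming a DRBD resource named `r0` backing `/dev/drbd0` (both names are conventional examples, not from the blog post):

```shell
# On the node you want to read from (resource name "r0" is assumed):
drbdadm role r0          # e.g. prints "Secondary/Primary"
mount /dev/drbd0 /mnt    # fails: a Secondary device refuses all access,
                         # including read-only

# To read the data, the node must first be promoted:
drbdadm primary r0       # succeeds only if the peer is not Primary
                         # (unless dual-primary mode is configured)
mount /dev/drbd0 /mnt    # now succeeds
```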


This always raises questions like the one I've taken the liberty to quote here. It came up in a MySQL webinar on replication and HA:

Because of the asynchronous nature of [MySQL] replication, we end up with a dilemma when looking at using slaves as read nodes: the only time we go to the database for information is to build a local cache file, and that local cache file is ONLY removed when information related to that cache file changes; it is NOT based on time. If we had a synchronous method of replication we would know the cache files were always getting the right information, but because of the asynchronous nature we are prone to old data that never gets invalidated. One thought I had was to see if using the "backup" DRBD node as a read-only type filesystem might accomplish this.

Let's get into this briefly.

Btrfs. Btrfs (B-tree file system, variously pronounced "Butter F S", "Butterface",[7] "Better F S",[5] "B-tree F S",[8] or simply by spelling it out) is a GPL-licensed copy-on-write file system for Linux.


Development began at Oracle Corporation in 2007. As of August 2014, the file system's on-disk format has been marked as stable.[9]

History: The core data structure of Btrfs, the copy-on-write B-tree, was originally proposed by IBM researcher Ohad Rodeh at a presentation at USENIX 2007. Chris Mason, an engineer working on ReiserFS for SUSE at the time, joined Oracle later that year and began work on a new file system based on these B-trees.[11] In 2008, Theodore Ts'o, the principal developer of the ext3 and ext4 file systems, stated that although ext4 has improved features, it is not a major advance; it uses old technology and is a stop-gap.

Features: As of version 3.14 of the Linux kernel, Btrfs implements the following features.[23][24]

List of file systems.
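Two of the headline Btrfs features, subvolumes and copy-on-write snapshots, can be sketched with the standard `btrfs` tooling. The device name `/dev/sdb` and the mount point are hypothetical examples:

```shell
# Hypothetical spare device /dev/sdb; mkfs destroys any data on it.
mkfs.btrfs /dev/sdb
mount /dev/sdb /mnt

# Subvolumes are independently snapshottable trees inside the file system.
btrfs subvolume create /mnt/data

# Copy-on-write snapshots are cheap and near-instant;
# -r makes the snapshot read-only (useful as a backup source).
btrfs subvolume snapshot /mnt/data /mnt/data-snap
btrfs subvolume snapshot -r /mnt/data /mnt/data-backup

btrfs subvolume list /mnt
```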

GlusterFS. GlusterFS is a scale-out network-attached storage file system.


It has found applications including cloud computing, streaming media services, and content delivery networks. GlusterFS was developed originally by Gluster, Inc., then by Red Hat, Inc., after their purchase of Gluster in 2011. In June 2012, Red Hat Storage Server was announced as a commercially supported integration of GlusterFS with Red Hat Enterprise Linux.[3]

Design: GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. GlusterFS has a client and a server component. Most of the functionality of GlusterFS is implemented as translators. The GlusterFS server is intentionally kept simple: it exports an existing directory as-is, leaving it up to client-side translators to structure the store.
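The server-simple, client-heavy design above looks like this in practice. A rough sketch with the `gluster` CLI; the host names `server1`/`server2` and brick paths are hypothetical, and the brick directories are assumed to already exist:

```shell
# On server1: form the trusted pool with a second node.
gluster peer probe server2

# Create a 2-way replicated volume: each file lands on both bricks.
gluster volume create gv0 replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start gv0

# On a client: mount the volume; client-side translators handle
# replication and distribution, the servers just export directories.
mount -t glusterfs server1:/gv0 /mnt
```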

Reworked version of the distributed FS POHMELFS presented (Page 1) — News — LinuxForum — Forum about Linux. Three years after the first release of the POHMELFS distributed network file system, a completely reworked version of this FS has been presented on the Linux kernel developers' mailing list, implementing most of the previously planned features.


The project code is distributed under the GPLv2 license. The new POHMELFS implementation is based on the Elliptics distributed storage, which is a distributed hash table. Elliptics was originally developed as part of POHMELFS, but two years ago it was split off into a separate project, which is successfully used in production. For example, Elliptics stores about a petabyte of content for Yandex services (maps, photos, music). The storage is designed for reliably holding large volumes of data in key/value form, with redundancy provided by duplicating data across different network nodes (node failure is handled automatically).

FAQ – tahoe-lafs. Q0: What is Tahoe-LAFS?


What can you do with it? A: Think of Tahoe-LAFS as being like BitTorrent, except you can upload as well as download. Distributed and clustered file systems.
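The upload-as-well-as-download workflow can be sketched with the `tahoe` CLI. This assumes a node already configured to reach a grid (introducer details omitted); the file names are hypothetical, and note that older releases used `tahoe start` where newer ones use `tahoe run`:

```shell
# Create a client node (grid/introducer configuration not shown).
tahoe create-client
tahoe run                  # run the node (foreground; older CLIs: tahoe start)

# Upload: prints a capability string (URI) that is both the file's
# address and the authority to read it.
tahoe put notes.txt

# Download by capability (the URI is elided here):
# tahoe get URI:CHK:... notes-copy.txt
```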