XFS
XFS is a high-performance 64-bit journaling file system created by Silicon Graphics, Inc. (SGI) in 1993. It was the default file system in SGI's IRIX operating system starting with version 5.3, and was ported to the Linux kernel in 2001. As of June 2014, XFS is supported by most Linux distributions, some of which use it as the default file system.
XFS excels at parallel input/output (I/O) operations thanks to its design, which is based on allocation groups (AGs), a type of subdivision of the physical volumes on which XFS is used. Because allocation groups can be managed independently, XFS scales I/O threads, file system bandwidth, and the size of both individual files and the file system itself, especially when spanning multiple physical storage devices.

History

Silicon Graphics began development of XFS in 1993 and first included it in IRIX with version 5.3 in 1994.

HDFS and Erasure Codes (HDFS-RAID)

The Hadoop Distributed File System (HDFS) has been great at providing a cloud-type file system.
It is robust (when administered correctly :-)) and highly scalable. However, one of the main drawbacks of HDFS is that each piece of data is replicated in three places. This is acceptable because disk storage is cheap and becoming cheaper by the day, and it isn't a problem for a small-to-medium-sized cluster: in absolute terms, the price difference between using 15 disks and using 10 disks is not much. At a cost of $1 per GByte, the difference between fifteen 1 TB disks and ten 1 TB disks is only $5K. The reason HDFS stores blocks in triplicate is that it runs on commodity hardware, where the probability of a disk failure is non-negligible.
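The arithmetic above can be checked with a quick sketch (the $1/GB price and disk counts are the figures from the text; the function name is ours):

```python
def raw_storage_cost(num_disks, disk_size_tb, dollars_per_gb=1.0):
    """Total cost of the raw disks at a flat price per gigabyte."""
    return num_disks * disk_size_tb * 1000 * dollars_per_gb

# Fifteen 1 TB disks vs. ten 1 TB disks at $1/GB:
difference = raw_storage_cost(15, 1) - raw_storage_cost(10, 1)
print(difference)  # 5000.0 -- the $5K quoted above

# With 3-way replication, usable capacity is one third of raw:
usable_tb = 15 * 1 / 3  # 5 TB usable out of 15 TB raw
```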
I heard about an idea called DiskReduce from the folks at CMU. The resulting Distributed Raid File System consists of two main software components.

HDFS slower than expected reading from the local node

Attaching v1 of a design document for this feature.
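DiskReduce-style schemes replace some replicas with erasure-coded parity. As a hedged illustration (not DiskReduce's actual code), a single XOR parity block lets any one lost block be reconstructed while storing far less than three full copies of everything:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together to form a parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# If one data block is lost, XOR-ing the parity with the
# surviving blocks reconstructs it:
recovered = xor_blocks([parity, data[0], data[2]])
assert recovered == data[1]
```

Real erasure codes (e.g. Reed-Solomon) generalize this to tolerate multiple simultaneous failures, but the storage trade-off is the same: parity overhead instead of whole-copy replication.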
This does not include a test plan; that will follow once implementation has gone a bit further. Pasting the design doc below as well:

Problem Definition

Currently, when the DFS client is located on the same physical node as the DataNode serving the data, it does not use this knowledge to its advantage: all blocks are read through the same TCP-based protocol. This JIRA seeks to improve the performance of node-local reads by providing a fast path that is enabled in this case. Although writes are likely to see an improvement too, this JIRA focuses only on the read path.
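The fast path described above can be sketched as follows; this is a hypothetical simplification, not the actual HDFS implementation, and the `local_paths` map and `tcp_read` callable are invented for illustration. If the client is co-located with a replica, it opens the block file directly; otherwise it falls back to the normal TCP protocol:

```python
import os

def read_block(block_id, local_paths, tcp_read):
    """Prefer a direct local file read; fall back to the TCP protocol.

    local_paths: maps block_id -> path of the block file on this node,
                 for replicas stored locally (hypothetical structure).
    tcp_read:    callable implementing the normal DataNode read protocol.
    """
    path = local_paths.get(block_id)
    if path is not None and os.path.exists(path):
        # Fast path: the replica lives on this node, so bypass TCP
        # and read the block file straight off the local disk.
        with open(path, "rb") as f:
            return f.read()
    # Slow path: stream the block from a (possibly remote) DataNode.
    return tcp_read(block_id)
```

The key property, matching the design goal stated above, is that callers see identical semantics on both paths; only the transport differs.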
Use Cases

As mentioned above, the majority of data read during a MapReduce job tends to come from local DataNodes. Users will not have to make any specific changes to use the performance improvement: the optimization should be transparent and retain all existing semantics.

Interaction with Current System

This behavior needs modifications in two areas: